00:00:00.000 Started by upstream project "autotest-per-patch" build number 132551
00:00:00.000 originally caused by:
00:00:00.000 Started by user sys_sgci
00:00:00.080 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.080 The recommended git tool is: git
00:00:00.081 using credential 00000000-0000-0000-0000-000000000002
00:00:00.082 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.114 Fetching changes from the remote Git repository
00:00:00.116 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.169 Using shallow fetch with depth 1
00:00:00.169 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.169 > git --version # timeout=10
00:00:00.217 > git --version # 'git version 2.39.2'
00:00:00.217 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.265 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.265 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.560 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.574 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.589 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:04.589 > git config core.sparsecheckout # timeout=10
00:00:04.602 > git read-tree -mu HEAD # timeout=10
00:00:04.619 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:04.641 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:04.641 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:04.723 [Pipeline] Start of Pipeline
00:00:04.736 [Pipeline] library
00:00:04.738 Loading library shm_lib@master
00:00:04.738 Library shm_lib@master is cached. Copying from home.
00:00:04.752 [Pipeline] node
00:00:04.760 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:04.761 [Pipeline] {
00:00:04.769 [Pipeline] catchError
00:00:04.771 [Pipeline] {
00:00:04.783 [Pipeline] wrap
00:00:04.791 [Pipeline] {
00:00:04.796 [Pipeline] stage
00:00:04.798 [Pipeline] { (Prologue)
00:00:04.989 [Pipeline] sh
00:00:05.270 + logger -p user.info -t JENKINS-CI
00:00:05.289 [Pipeline] echo
00:00:05.291 Node: GP11
00:00:05.298 [Pipeline] sh
00:00:05.597 [Pipeline] setCustomBuildProperty
00:00:05.608 [Pipeline] echo
00:00:05.610 Cleanup processes
00:00:05.615 [Pipeline] sh
00:00:05.898 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.898 3770174 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.910 [Pipeline] sh
00:00:06.198 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.199 ++ grep -v 'sudo pgrep'
00:00:06.199 ++ awk '{print $1}'
00:00:06.199 + sudo kill -9
00:00:06.199 + true
00:00:06.214 [Pipeline] cleanWs
00:00:06.224 [WS-CLEANUP] Deleting project workspace...
00:00:06.224 [WS-CLEANUP] Deferred wipeout is used...
00:00:06.232 [WS-CLEANUP] done
00:00:06.236 [Pipeline] setCustomBuildProperty
00:00:06.249 [Pipeline] sh
00:00:06.527 + sudo git config --global --replace-all safe.directory '*'
00:00:06.612 [Pipeline] httpRequest
00:00:07.003 [Pipeline] echo
00:00:07.005 Sorcerer 10.211.164.20 is alive
00:00:07.016 [Pipeline] retry
00:00:07.017 [Pipeline] {
00:00:07.030 [Pipeline] httpRequest
00:00:07.034 HttpMethod: GET
00:00:07.035 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:07.035 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:07.051 Response Code: HTTP/1.1 200 OK
00:00:07.051 Success: Status code 200 is in the accepted range: 200,404
00:00:07.051 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:15.677 [Pipeline] }
00:00:15.693 [Pipeline] // retry
00:00:15.699 [Pipeline] sh
00:00:15.981 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:15.995 [Pipeline] httpRequest
00:00:16.572 [Pipeline] echo
00:00:16.574 Sorcerer 10.211.164.20 is alive
00:00:16.583 [Pipeline] retry
00:00:16.584 [Pipeline] {
00:00:16.596 [Pipeline] httpRequest
00:00:16.600 HttpMethod: GET
00:00:16.600 URL: http://10.211.164.20/packages/spdk_e43b3b914a2f081051aba39c73d952a3fadefbbe.tar.gz
00:00:16.600 Sending request to url: http://10.211.164.20/packages/spdk_e43b3b914a2f081051aba39c73d952a3fadefbbe.tar.gz
00:00:16.606 Response Code: HTTP/1.1 200 OK
00:00:16.607 Success: Status code 200 is in the accepted range: 200,404
00:00:16.607 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_e43b3b914a2f081051aba39c73d952a3fadefbbe.tar.gz
00:01:23.366 [Pipeline] }
00:01:23.383 [Pipeline] // retry
00:01:23.391 [Pipeline] sh
00:01:23.680 + tar --no-same-owner -xf spdk_e43b3b914a2f081051aba39c73d952a3fadefbbe.tar.gz
00:01:26.995 [Pipeline] sh
00:01:27.283 + git -C spdk log --oneline -n5
00:01:27.283 e43b3b914 bdev: Clean up duplicated asserts in bdev_io_pull_data()
00:01:27.283 752c08b51 bdev: Rename _bdev_memory_domain_io_get_buf() to bdev_io_get_bounce_buf()
00:01:27.283 22fe262e0 bdev: Relocate _bdev_memory_domain_io_get_buf_cb() close to _bdev_io_submit_ext()
00:01:27.283 3c6c4e019 bdev: Factor out checking bounce buffer necessity into helper function
00:01:27.283 0836dccda bdev: Add spdk_dif_ctx and spdk_dif_error into spdk_bdev_io
00:01:27.295 [Pipeline] }
00:01:27.309 [Pipeline] // stage
00:01:27.318 [Pipeline] stage
00:01:27.320 [Pipeline] { (Prepare)
00:01:27.337 [Pipeline] writeFile
00:01:27.353 [Pipeline] sh
00:01:27.639 + logger -p user.info -t JENKINS-CI
00:01:27.654 [Pipeline] sh
00:01:27.942 + logger -p user.info -t JENKINS-CI
00:01:27.956 [Pipeline] sh
00:01:28.244 + cat autorun-spdk.conf
00:01:28.244 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:28.244 SPDK_TEST_NVMF=1
00:01:28.244 SPDK_TEST_NVME_CLI=1
00:01:28.244 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:28.244 SPDK_TEST_NVMF_NICS=e810
00:01:28.244 SPDK_TEST_VFIOUSER=1
00:01:28.244 SPDK_RUN_UBSAN=1
00:01:28.244 NET_TYPE=phy
00:01:28.253 RUN_NIGHTLY=0
00:01:28.257 [Pipeline] readFile
00:01:28.283 [Pipeline] withEnv
00:01:28.285 [Pipeline] {
00:01:28.298 [Pipeline] sh
00:01:28.588 + set -ex
00:01:28.588 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:28.588 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:28.588 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:28.588 ++ SPDK_TEST_NVMF=1
00:01:28.588 ++ SPDK_TEST_NVME_CLI=1
00:01:28.588 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:28.588 ++ SPDK_TEST_NVMF_NICS=e810
00:01:28.588 ++ SPDK_TEST_VFIOUSER=1
00:01:28.588 ++ SPDK_RUN_UBSAN=1
00:01:28.588 ++ NET_TYPE=phy
00:01:28.588 ++ RUN_NIGHTLY=0
00:01:28.588 + case $SPDK_TEST_NVMF_NICS in
00:01:28.588 + DRIVERS=ice
00:01:28.588 + [[ tcp == \r\d\m\a ]]
00:01:28.588 + [[ -n ice ]]
00:01:28.588 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:28.588 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:28.588 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:28.588 rmmod: ERROR: Module irdma is not currently loaded
00:01:28.588 rmmod: ERROR: Module i40iw is not currently loaded
00:01:28.588 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:28.588 + true
00:01:28.588 + for D in $DRIVERS
00:01:28.588 + sudo modprobe ice
00:01:28.588 + exit 0
00:01:28.598 [Pipeline] }
00:01:28.612 [Pipeline] // withEnv
00:01:28.617 [Pipeline] }
00:01:28.628 [Pipeline] // stage
00:01:28.634 [Pipeline] catchError
00:01:28.635 [Pipeline] {
00:01:28.644 [Pipeline] timeout
00:01:28.644 Timeout set to expire in 1 hr 0 min
00:01:28.646 [Pipeline] {
00:01:28.659 [Pipeline] stage
00:01:28.661 [Pipeline] { (Tests)
00:01:28.672 [Pipeline] sh
00:01:28.959 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:28.959 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:28.959 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:28.959 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:28.959 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:28.959 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:28.959 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:28.959 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:28.959 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:28.959 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:28.959 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:28.959 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:28.959 + source /etc/os-release
00:01:28.959 ++ NAME='Fedora Linux'
00:01:28.959 ++ VERSION='39 (Cloud Edition)'
00:01:28.959 ++ ID=fedora
00:01:28.959 ++ VERSION_ID=39
00:01:28.959 ++ VERSION_CODENAME=
00:01:28.959 ++ PLATFORM_ID=platform:f39
00:01:28.959 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:28.959 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:28.959 ++ LOGO=fedora-logo-icon
00:01:28.959 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:28.959 ++ HOME_URL=https://fedoraproject.org/
00:01:28.959 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:28.959 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:28.959 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:28.959 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:28.959 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:28.959 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:28.959 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:28.959 ++ SUPPORT_END=2024-11-12
00:01:28.959 ++ VARIANT='Cloud Edition'
00:01:28.959 ++ VARIANT_ID=cloud
00:01:28.959 + uname -a
00:01:28.959 Linux spdk-gp-11 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:28.959 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:29.897 Hugepages
00:01:29.897 node hugesize free / total
00:01:29.897 node0 1048576kB 0 / 0
00:01:29.897 node0 2048kB 0 / 0
00:01:29.897 node1 1048576kB 0 / 0
00:01:29.897 node1 2048kB 0 / 0
00:01:29.897
00:01:29.897 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:29.897 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:01:29.897 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:01:29.897 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:01:29.897 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:01:29.897 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:01:29.897 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:01:29.897 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:01:29.897 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:01:29.897 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:01:29.897 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:01:29.897 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:01:29.897 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:01:29.897 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:01:29.897 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:01:29.897 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:01:29.897 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:01:29.897 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:01:29.897 + rm -f /tmp/spdk-ld-path
00:01:29.897 + source autorun-spdk.conf
00:01:29.897 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:29.897 ++ SPDK_TEST_NVMF=1
00:01:29.897 ++ SPDK_TEST_NVME_CLI=1
00:01:29.897 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:29.897 ++ SPDK_TEST_NVMF_NICS=e810
00:01:29.897 ++ SPDK_TEST_VFIOUSER=1
00:01:29.897 ++ SPDK_RUN_UBSAN=1
00:01:29.897 ++ NET_TYPE=phy
00:01:29.897 ++ RUN_NIGHTLY=0
00:01:29.897 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:29.897 + [[ -n '' ]]
00:01:29.897 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:29.897 + for M in /var/spdk/build-*-manifest.txt
00:01:29.897 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:29.898 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:29.898 + for M in /var/spdk/build-*-manifest.txt
00:01:29.898 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:29.898 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:29.898 + for M in /var/spdk/build-*-manifest.txt
00:01:29.898 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:29.898 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:29.898 ++ uname
00:01:29.898 + [[ Linux == \L\i\n\u\x ]]
00:01:29.898 + sudo dmesg -T
00:01:30.157 + sudo dmesg --clear
00:01:30.157 + dmesg_pid=3771476
00:01:30.157 + [[ Fedora Linux == FreeBSD ]]
00:01:30.157 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:30.157 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:30.157 + sudo dmesg -Tw
00:01:30.157 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:30.157 + [[ -x /usr/src/fio-static/fio ]]
00:01:30.157 + export FIO_BIN=/usr/src/fio-static/fio
00:01:30.157 + FIO_BIN=/usr/src/fio-static/fio
00:01:30.157 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:30.157 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:30.157 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:30.157 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:30.157 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:30.157 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:30.157 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:30.157 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:30.157 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:30.157 20:42:20 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:01:30.157 20:42:20 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:30.157 20:42:20 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:30.157 20:42:20 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:01:30.157 20:42:20 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:01:30.157 20:42:20 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:30.157 20:42:20 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
00:01:30.157 20:42:20 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
00:01:30.157 20:42:20 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
00:01:30.157 20:42:20 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
00:01:30.157 20:42:20 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
00:01:30.157 20:42:20 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:01:30.157 20:42:20 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:30.157 20:42:20 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:01:30.157 20:42:20 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:01:30.157 20:42:20 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:30.157 20:42:20 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:30.157 20:42:20 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:30.157 20:42:20 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:30.157 20:42:20 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:30.157 20:42:20 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:30.157 20:42:20 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:30.157 20:42:20 -- paths/export.sh@5 -- $ export PATH
00:01:30.157 20:42:20 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:30.157 20:42:20 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:01:30.157 20:42:20 -- common/autobuild_common.sh@493 -- $ date +%s
00:01:30.157 20:42:20 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732650140.XXXXXX
00:01:30.157 20:42:20 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732650140.enWeKJ
00:01:30.157 20:42:20 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:01:30.157 20:42:20 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:01:30.157 20:42:20 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:01:30.157 20:42:20 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:30.157 20:42:20 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:30.157 20:42:20 -- common/autobuild_common.sh@509 -- $ get_config_params
00:01:30.157 20:42:20 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:01:30.157 20:42:20 -- common/autotest_common.sh@10 -- $ set +x
00:01:30.157 20:42:20 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:01:30.157 20:42:20 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:01:30.157 20:42:20 -- pm/common@17 -- $ local monitor
00:01:30.157 20:42:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:30.157 20:42:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:30.157 20:42:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:30.157 20:42:20 -- pm/common@21 -- $ date +%s
00:01:30.157 20:42:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:30.157 20:42:20 -- pm/common@21 -- $ date +%s
00:01:30.157 20:42:20 -- pm/common@25 -- $ sleep 1
00:01:30.157 20:42:20 -- pm/common@21 -- $ date +%s
00:01:30.157 20:42:20 -- pm/common@21 -- $ date +%s
00:01:30.157 20:42:20 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732650140
00:01:30.157 20:42:20 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732650140
00:01:30.157 20:42:20 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732650140
00:01:30.157 20:42:20 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732650140
00:01:30.158 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732650140_collect-cpu-load.pm.log
00:01:30.158 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732650140_collect-vmstat.pm.log
00:01:30.158 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732650140_collect-cpu-temp.pm.log
00:01:30.158 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732650140_collect-bmc-pm.bmc.pm.log
00:01:31.094 20:42:21 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:01:31.094 20:42:21 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:31.094 20:42:21 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:31.094 20:42:21 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:31.094 20:42:21 -- spdk/autobuild.sh@16 -- $ date -u
00:01:31.094 Tue Nov 26 07:42:21 PM UTC 2024
00:01:31.094 20:42:21 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:31.094 v25.01-pre-249-ge43b3b914
00:01:31.094 20:42:21 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:01:31.094 20:42:21 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:31.094 20:42:21 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:31.094 20:42:21 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:31.094 20:42:21 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:31.094 20:42:21 -- common/autotest_common.sh@10 -- $ set +x
00:01:31.094 ************************************
00:01:31.094 START TEST ubsan
00:01:31.094 ************************************
00:01:31.094 20:42:22 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:31.094 using ubsan
00:01:31.094
00:01:31.094 real 0m0.000s
00:01:31.094 user 0m0.000s
00:01:31.094 sys 0m0.000s
00:01:31.094 20:42:22 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:31.094 20:42:22 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:31.095 ************************************
00:01:31.095 END TEST ubsan
00:01:31.095 ************************************
00:01:31.352 20:42:22 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:31.352 20:42:22 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:31.352 20:42:22 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:31.352 20:42:22 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:31.352 20:42:22 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:31.352 20:42:22 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:31.352 20:42:22 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:31.352 20:42:22 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:31.352 20:42:22 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:01:31.352 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:31.352 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:31.618 Using 'verbs' RDMA provider
00:01:42.180 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:52.159 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:52.159 Creating mk/config.mk...done.
00:01:52.159 Creating mk/cc.flags.mk...done.
00:01:52.159 Type 'make' to build.
00:01:52.159 20:42:42 -- spdk/autobuild.sh@70 -- $ run_test make make -j48
00:01:52.159 20:42:42 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:52.159 20:42:42 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:52.159 20:42:42 -- common/autotest_common.sh@10 -- $ set +x
00:01:52.159 ************************************
00:01:52.159 START TEST make
00:01:52.159 ************************************
00:01:52.159 20:42:42 make -- common/autotest_common.sh@1129 -- $ make -j48
00:01:52.419 make[1]: Nothing to be done for 'all'.
00:01:54.341 The Meson build system
00:01:54.341 Version: 1.5.0
00:01:54.341 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:01:54.341 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:54.341 Build type: native build
00:01:54.341 Project name: libvfio-user
00:01:54.341 Project version: 0.0.1
00:01:54.341 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:01:54.341 C linker for the host machine: cc ld.bfd 2.40-14
00:01:54.341 Host machine cpu family: x86_64
00:01:54.341 Host machine cpu: x86_64
00:01:54.341 Run-time dependency threads found: YES
00:01:54.341 Library dl found: YES
00:01:54.341 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:01:54.341 Run-time dependency json-c found: YES 0.17
00:01:54.341 Run-time dependency cmocka found: YES 1.1.7
00:01:54.341 Program pytest-3 found: NO
00:01:54.341 Program flake8 found: NO
00:01:54.341 Program misspell-fixer found: NO
00:01:54.341 Program restructuredtext-lint found: NO
00:01:54.341 Program valgrind found: YES (/usr/bin/valgrind)
00:01:54.341 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:54.341 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:54.341 Compiler for C supports arguments -Wwrite-strings: YES
00:01:54.341 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:54.341 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:01:54.341 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:01:54.341 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:54.341 Build targets in project: 8
00:01:54.341 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:01:54.341 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:01:54.341
00:01:54.341 libvfio-user 0.0.1
00:01:54.341
00:01:54.341 User defined options
00:01:54.341 buildtype : debug
00:01:54.341 default_library: shared
00:01:54.341 libdir : /usr/local/lib
00:01:54.341
00:01:54.341 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:54.916 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:55.180 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:01:55.180 [2/37] Compiling C object samples/lspci.p/lspci.c.o
00:01:55.180 [3/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:01:55.180 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:01:55.180 [5/37] Compiling C object samples/null.p/null.c.o
00:01:55.180 [6/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:01:55.180 [7/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:01:55.180 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:01:55.180 [9/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:01:55.180 [10/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:01:55.180 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:01:55.180 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:01:55.180 [13/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:01:55.447 [14/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:01:55.447 [15/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:01:55.447 [16/37] Compiling C object test/unit_tests.p/mocks.c.o
00:01:55.448 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:01:55.448 [18/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:01:55.448 [19/37] Compiling C object samples/server.p/server.c.o
00:01:55.448 [20/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:01:55.448 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:01:55.448 [22/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:01:55.448 [23/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:01:55.448 [24/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:01:55.448 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:01:55.448 [26/37] Compiling C object samples/client.p/client.c.o
00:01:55.448 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:01:55.448 [28/37] Linking target samples/client
00:01:55.448 [29/37] Linking target lib/libvfio-user.so.0.0.1
00:01:55.708 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:01:55.708 [31/37] Linking target test/unit_tests
00:01:55.708 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:01:55.708 [33/37] Linking target samples/lspci
00:01:55.708 [34/37] Linking target samples/null
00:01:55.708 [35/37] Linking target samples/gpio-pci-idio-16
00:01:55.708 [36/37] Linking target samples/server
00:01:55.708 [37/37] Linking target samples/shadow_ioeventfd_server
00:01:55.708 INFO: autodetecting backend as ninja
00:01:55.708 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:55.971 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:56.920 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:56.920 ninja: no work to do.
00:02:02.191 The Meson build system
00:02:02.191 Version: 1.5.0
00:02:02.191 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:02:02.191 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:02:02.191 Build type: native build
00:02:02.191 Program cat found: YES (/usr/bin/cat)
00:02:02.191 Project name: DPDK
00:02:02.191 Project version: 24.03.0
00:02:02.191 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:02.191 C linker for the host machine: cc ld.bfd 2.40-14
00:02:02.191 Host machine cpu family: x86_64
00:02:02.191 Host machine cpu: x86_64
00:02:02.191 Message: ## Building in Developer Mode ##
00:02:02.191 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:02.191 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:02:02.191 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:02.191 Program python3 found: YES (/usr/bin/python3)
00:02:02.191 Program cat found: YES (/usr/bin/cat)
00:02:02.191 Compiler for C supports arguments -march=native: YES
00:02:02.192 Checking for size of "void *" : 8
00:02:02.192 Checking for size of "void *" : 8 (cached)
00:02:02.192 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:02.192 Library m found: YES
00:02:02.192 Library numa found: YES
00:02:02.192 Has header "numaif.h" : YES
00:02:02.192 Library fdt found: NO
00:02:02.192 Library execinfo found: NO
00:02:02.192 Has header "execinfo.h" : YES
00:02:02.192 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:02.192 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:02.192 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:02.192 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:02.192 Run-time dependency openssl found: YES 3.1.1
00:02:02.192 Run-time dependency libpcap found: YES 1.10.4
00:02:02.192 Has header "pcap.h" with dependency libpcap: YES
00:02:02.192 Compiler for C supports arguments -Wcast-qual: YES
00:02:02.192 Compiler for C supports arguments -Wdeprecated: YES
00:02:02.192 Compiler for C supports arguments -Wformat: YES
00:02:02.192 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:02.192 Compiler for C supports arguments -Wformat-security: NO
00:02:02.192 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:02.192 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:02.192 Compiler for C supports arguments -Wnested-externs: YES
00:02:02.192 Compiler for C supports arguments -Wold-style-definition: YES
00:02:02.192 Compiler for C supports arguments -Wpointer-arith: YES
00:02:02.192 Compiler for C supports arguments -Wsign-compare: YES
00:02:02.192 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:02.192 Compiler for C supports arguments -Wundef: YES
00:02:02.192 Compiler for C supports arguments -Wwrite-strings: YES
00:02:02.192 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:02.192 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:02.192 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:02.192 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:02.192 Program objdump found: YES (/usr/bin/objdump)
00:02:02.192 Compiler for C supports arguments -mavx512f: YES
00:02:02.192 Checking if "AVX512 checking" compiles: YES
00:02:02.192 Fetching value of define "__SSE4_2__" : 1
00:02:02.192 Fetching value of define "__AES__" : 1
00:02:02.192 Fetching value of define "__AVX__" : 1
00:02:02.192 Fetching value of define "__AVX2__" : (undefined)
00:02:02.192 Fetching value of define "__AVX512BW__" : (undefined)
00:02:02.192 Fetching value of define "__AVX512CD__" : (undefined)
00:02:02.192 Fetching value of define "__AVX512DQ__" : (undefined)
00:02:02.192 Fetching value of define "__AVX512F__" : (undefined)
00:02:02.192 Fetching value of define "__AVX512VL__" : (undefined)
00:02:02.192 Fetching value of define "__PCLMUL__" : 1
00:02:02.192 Fetching value of define "__RDRND__" : 1
00:02:02.192 Fetching value of define "__RDSEED__" : (undefined)
00:02:02.192 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:02:02.192 Fetching value of define "__znver1__" : (undefined)
00:02:02.192 Fetching value of define "__znver2__" : (undefined)
00:02:02.192 Fetching value of define "__znver3__" : (undefined)
00:02:02.192 Fetching value of define "__znver4__" : (undefined)
00:02:02.192 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:02.192 Message: lib/log: Defining dependency "log"
00:02:02.192 Message: lib/kvargs: Defining dependency "kvargs"
00:02:02.192 Message: lib/telemetry: Defining dependency "telemetry"
00:02:02.192 Checking for function "getentropy" : NO
00:02:02.192 Message: lib/eal: Defining dependency "eal"
00:02:02.192 Message: lib/ring: Defining dependency "ring"
00:02:02.192 Message: lib/rcu: Defining dependency "rcu"
00:02:02.192 Message: lib/mempool: Defining dependency "mempool"
00:02:02.192 Message: lib/mbuf: Defining dependency "mbuf"
00:02:02.192 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:02.192 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:02:02.192 Compiler for C supports arguments -mpclmul: YES
00:02:02.192 Compiler for C supports arguments -maes: YES
00:02:02.192 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:02.192 Compiler for C supports arguments -mavx512bw: YES
00:02:02.192 Compiler for C supports arguments -mavx512dq: YES
00:02:02.192 Compiler for C supports arguments -mavx512vl: YES
00:02:02.192 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:02.192 Compiler for C supports arguments -mavx2: YES
00:02:02.192 Compiler for C supports arguments -mavx: YES
00:02:02.192 Message: lib/net: Defining dependency "net"
00:02:02.192
Message: lib/meter: Defining dependency "meter" 00:02:02.192 Message: lib/ethdev: Defining dependency "ethdev" 00:02:02.192 Message: lib/pci: Defining dependency "pci" 00:02:02.192 Message: lib/cmdline: Defining dependency "cmdline" 00:02:02.192 Message: lib/hash: Defining dependency "hash" 00:02:02.192 Message: lib/timer: Defining dependency "timer" 00:02:02.192 Message: lib/compressdev: Defining dependency "compressdev" 00:02:02.192 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:02.192 Message: lib/dmadev: Defining dependency "dmadev" 00:02:02.192 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:02.192 Message: lib/power: Defining dependency "power" 00:02:02.192 Message: lib/reorder: Defining dependency "reorder" 00:02:02.192 Message: lib/security: Defining dependency "security" 00:02:02.192 Has header "linux/userfaultfd.h" : YES 00:02:02.192 Has header "linux/vduse.h" : YES 00:02:02.192 Message: lib/vhost: Defining dependency "vhost" 00:02:02.192 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:02.192 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:02.192 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:02.192 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:02.192 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:02.192 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:02.192 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:02.192 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:02.192 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:02.192 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:02.192 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:02.192 Configuring doxy-api-html.conf using configuration 00:02:02.192 Configuring doxy-api-man.conf using configuration 00:02:02.192 
Program mandb found: YES (/usr/bin/mandb) 00:02:02.192 Program sphinx-build found: NO 00:02:02.192 Configuring rte_build_config.h using configuration 00:02:02.192 Message: 00:02:02.192 ================= 00:02:02.192 Applications Enabled 00:02:02.192 ================= 00:02:02.192 00:02:02.192 apps: 00:02:02.192 00:02:02.192 00:02:02.192 Message: 00:02:02.192 ================= 00:02:02.192 Libraries Enabled 00:02:02.192 ================= 00:02:02.192 00:02:02.192 libs: 00:02:02.192 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:02.192 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:02.192 cryptodev, dmadev, power, reorder, security, vhost, 00:02:02.192 00:02:02.192 Message: 00:02:02.192 =============== 00:02:02.192 Drivers Enabled 00:02:02.192 =============== 00:02:02.192 00:02:02.192 common: 00:02:02.192 00:02:02.192 bus: 00:02:02.192 pci, vdev, 00:02:02.192 mempool: 00:02:02.192 ring, 00:02:02.192 dma: 00:02:02.192 00:02:02.192 net: 00:02:02.192 00:02:02.192 crypto: 00:02:02.192 00:02:02.192 compress: 00:02:02.192 00:02:02.192 vdpa: 00:02:02.192 00:02:02.192 00:02:02.192 Message: 00:02:02.192 ================= 00:02:02.192 Content Skipped 00:02:02.192 ================= 00:02:02.192 00:02:02.192 apps: 00:02:02.192 dumpcap: explicitly disabled via build config 00:02:02.192 graph: explicitly disabled via build config 00:02:02.193 pdump: explicitly disabled via build config 00:02:02.193 proc-info: explicitly disabled via build config 00:02:02.193 test-acl: explicitly disabled via build config 00:02:02.193 test-bbdev: explicitly disabled via build config 00:02:02.193 test-cmdline: explicitly disabled via build config 00:02:02.193 test-compress-perf: explicitly disabled via build config 00:02:02.193 test-crypto-perf: explicitly disabled via build config 00:02:02.193 test-dma-perf: explicitly disabled via build config 00:02:02.193 test-eventdev: explicitly disabled via build config 00:02:02.193 test-fib: explicitly disabled via build 
config 00:02:02.193 test-flow-perf: explicitly disabled via build config 00:02:02.193 test-gpudev: explicitly disabled via build config 00:02:02.193 test-mldev: explicitly disabled via build config 00:02:02.193 test-pipeline: explicitly disabled via build config 00:02:02.193 test-pmd: explicitly disabled via build config 00:02:02.193 test-regex: explicitly disabled via build config 00:02:02.193 test-sad: explicitly disabled via build config 00:02:02.193 test-security-perf: explicitly disabled via build config 00:02:02.193 00:02:02.193 libs: 00:02:02.193 argparse: explicitly disabled via build config 00:02:02.193 metrics: explicitly disabled via build config 00:02:02.193 acl: explicitly disabled via build config 00:02:02.193 bbdev: explicitly disabled via build config 00:02:02.193 bitratestats: explicitly disabled via build config 00:02:02.193 bpf: explicitly disabled via build config 00:02:02.193 cfgfile: explicitly disabled via build config 00:02:02.193 distributor: explicitly disabled via build config 00:02:02.193 efd: explicitly disabled via build config 00:02:02.193 eventdev: explicitly disabled via build config 00:02:02.193 dispatcher: explicitly disabled via build config 00:02:02.193 gpudev: explicitly disabled via build config 00:02:02.193 gro: explicitly disabled via build config 00:02:02.193 gso: explicitly disabled via build config 00:02:02.193 ip_frag: explicitly disabled via build config 00:02:02.193 jobstats: explicitly disabled via build config 00:02:02.193 latencystats: explicitly disabled via build config 00:02:02.193 lpm: explicitly disabled via build config 00:02:02.193 member: explicitly disabled via build config 00:02:02.193 pcapng: explicitly disabled via build config 00:02:02.193 rawdev: explicitly disabled via build config 00:02:02.193 regexdev: explicitly disabled via build config 00:02:02.193 mldev: explicitly disabled via build config 00:02:02.193 rib: explicitly disabled via build config 00:02:02.193 sched: explicitly disabled via build 
config 00:02:02.193 stack: explicitly disabled via build config 00:02:02.193 ipsec: explicitly disabled via build config 00:02:02.193 pdcp: explicitly disabled via build config 00:02:02.193 fib: explicitly disabled via build config 00:02:02.193 port: explicitly disabled via build config 00:02:02.193 pdump: explicitly disabled via build config 00:02:02.193 table: explicitly disabled via build config 00:02:02.193 pipeline: explicitly disabled via build config 00:02:02.193 graph: explicitly disabled via build config 00:02:02.193 node: explicitly disabled via build config 00:02:02.193 00:02:02.193 drivers: 00:02:02.193 common/cpt: not in enabled drivers build config 00:02:02.193 common/dpaax: not in enabled drivers build config 00:02:02.193 common/iavf: not in enabled drivers build config 00:02:02.193 common/idpf: not in enabled drivers build config 00:02:02.193 common/ionic: not in enabled drivers build config 00:02:02.193 common/mvep: not in enabled drivers build config 00:02:02.193 common/octeontx: not in enabled drivers build config 00:02:02.193 bus/auxiliary: not in enabled drivers build config 00:02:02.193 bus/cdx: not in enabled drivers build config 00:02:02.193 bus/dpaa: not in enabled drivers build config 00:02:02.193 bus/fslmc: not in enabled drivers build config 00:02:02.193 bus/ifpga: not in enabled drivers build config 00:02:02.193 bus/platform: not in enabled drivers build config 00:02:02.193 bus/uacce: not in enabled drivers build config 00:02:02.193 bus/vmbus: not in enabled drivers build config 00:02:02.193 common/cnxk: not in enabled drivers build config 00:02:02.193 common/mlx5: not in enabled drivers build config 00:02:02.193 common/nfp: not in enabled drivers build config 00:02:02.193 common/nitrox: not in enabled drivers build config 00:02:02.193 common/qat: not in enabled drivers build config 00:02:02.193 common/sfc_efx: not in enabled drivers build config 00:02:02.193 mempool/bucket: not in enabled drivers build config 00:02:02.193 mempool/cnxk: 
not in enabled drivers build config 00:02:02.193 mempool/dpaa: not in enabled drivers build config 00:02:02.193 mempool/dpaa2: not in enabled drivers build config 00:02:02.193 mempool/octeontx: not in enabled drivers build config 00:02:02.193 mempool/stack: not in enabled drivers build config 00:02:02.193 dma/cnxk: not in enabled drivers build config 00:02:02.193 dma/dpaa: not in enabled drivers build config 00:02:02.193 dma/dpaa2: not in enabled drivers build config 00:02:02.193 dma/hisilicon: not in enabled drivers build config 00:02:02.193 dma/idxd: not in enabled drivers build config 00:02:02.193 dma/ioat: not in enabled drivers build config 00:02:02.193 dma/skeleton: not in enabled drivers build config 00:02:02.193 net/af_packet: not in enabled drivers build config 00:02:02.193 net/af_xdp: not in enabled drivers build config 00:02:02.193 net/ark: not in enabled drivers build config 00:02:02.193 net/atlantic: not in enabled drivers build config 00:02:02.193 net/avp: not in enabled drivers build config 00:02:02.193 net/axgbe: not in enabled drivers build config 00:02:02.193 net/bnx2x: not in enabled drivers build config 00:02:02.193 net/bnxt: not in enabled drivers build config 00:02:02.193 net/bonding: not in enabled drivers build config 00:02:02.193 net/cnxk: not in enabled drivers build config 00:02:02.193 net/cpfl: not in enabled drivers build config 00:02:02.193 net/cxgbe: not in enabled drivers build config 00:02:02.193 net/dpaa: not in enabled drivers build config 00:02:02.193 net/dpaa2: not in enabled drivers build config 00:02:02.193 net/e1000: not in enabled drivers build config 00:02:02.193 net/ena: not in enabled drivers build config 00:02:02.193 net/enetc: not in enabled drivers build config 00:02:02.193 net/enetfec: not in enabled drivers build config 00:02:02.193 net/enic: not in enabled drivers build config 00:02:02.193 net/failsafe: not in enabled drivers build config 00:02:02.193 net/fm10k: not in enabled drivers build config 00:02:02.193 
net/gve: not in enabled drivers build config 00:02:02.193 net/hinic: not in enabled drivers build config 00:02:02.193 net/hns3: not in enabled drivers build config 00:02:02.193 net/i40e: not in enabled drivers build config 00:02:02.193 net/iavf: not in enabled drivers build config 00:02:02.193 net/ice: not in enabled drivers build config 00:02:02.193 net/idpf: not in enabled drivers build config 00:02:02.193 net/igc: not in enabled drivers build config 00:02:02.193 net/ionic: not in enabled drivers build config 00:02:02.193 net/ipn3ke: not in enabled drivers build config 00:02:02.193 net/ixgbe: not in enabled drivers build config 00:02:02.193 net/mana: not in enabled drivers build config 00:02:02.193 net/memif: not in enabled drivers build config 00:02:02.193 net/mlx4: not in enabled drivers build config 00:02:02.193 net/mlx5: not in enabled drivers build config 00:02:02.193 net/mvneta: not in enabled drivers build config 00:02:02.193 net/mvpp2: not in enabled drivers build config 00:02:02.193 net/netvsc: not in enabled drivers build config 00:02:02.193 net/nfb: not in enabled drivers build config 00:02:02.193 net/nfp: not in enabled drivers build config 00:02:02.193 net/ngbe: not in enabled drivers build config 00:02:02.193 net/null: not in enabled drivers build config 00:02:02.193 net/octeontx: not in enabled drivers build config 00:02:02.193 net/octeon_ep: not in enabled drivers build config 00:02:02.193 net/pcap: not in enabled drivers build config 00:02:02.193 net/pfe: not in enabled drivers build config 00:02:02.193 net/qede: not in enabled drivers build config 00:02:02.193 net/ring: not in enabled drivers build config 00:02:02.193 net/sfc: not in enabled drivers build config 00:02:02.193 net/softnic: not in enabled drivers build config 00:02:02.193 net/tap: not in enabled drivers build config 00:02:02.193 net/thunderx: not in enabled drivers build config 00:02:02.193 net/txgbe: not in enabled drivers build config 00:02:02.193 net/vdev_netvsc: not in enabled 
drivers build config 00:02:02.193 net/vhost: not in enabled drivers build config 00:02:02.193 net/virtio: not in enabled drivers build config 00:02:02.193 net/vmxnet3: not in enabled drivers build config 00:02:02.193 raw/*: missing internal dependency, "rawdev" 00:02:02.193 crypto/armv8: not in enabled drivers build config 00:02:02.193 crypto/bcmfs: not in enabled drivers build config 00:02:02.193 crypto/caam_jr: not in enabled drivers build config 00:02:02.193 crypto/ccp: not in enabled drivers build config 00:02:02.193 crypto/cnxk: not in enabled drivers build config 00:02:02.193 crypto/dpaa_sec: not in enabled drivers build config 00:02:02.193 crypto/dpaa2_sec: not in enabled drivers build config 00:02:02.193 crypto/ipsec_mb: not in enabled drivers build config 00:02:02.193 crypto/mlx5: not in enabled drivers build config 00:02:02.193 crypto/mvsam: not in enabled drivers build config 00:02:02.193 crypto/nitrox: not in enabled drivers build config 00:02:02.193 crypto/null: not in enabled drivers build config 00:02:02.193 crypto/octeontx: not in enabled drivers build config 00:02:02.193 crypto/openssl: not in enabled drivers build config 00:02:02.193 crypto/scheduler: not in enabled drivers build config 00:02:02.193 crypto/uadk: not in enabled drivers build config 00:02:02.193 crypto/virtio: not in enabled drivers build config 00:02:02.193 compress/isal: not in enabled drivers build config 00:02:02.193 compress/mlx5: not in enabled drivers build config 00:02:02.193 compress/nitrox: not in enabled drivers build config 00:02:02.193 compress/octeontx: not in enabled drivers build config 00:02:02.193 compress/zlib: not in enabled drivers build config 00:02:02.194 regex/*: missing internal dependency, "regexdev" 00:02:02.194 ml/*: missing internal dependency, "mldev" 00:02:02.194 vdpa/ifc: not in enabled drivers build config 00:02:02.194 vdpa/mlx5: not in enabled drivers build config 00:02:02.194 vdpa/nfp: not in enabled drivers build config 00:02:02.194 vdpa/sfc: not 
in enabled drivers build config 00:02:02.194 event/*: missing internal dependency, "eventdev" 00:02:02.194 baseband/*: missing internal dependency, "bbdev" 00:02:02.194 gpu/*: missing internal dependency, "gpudev" 00:02:02.194 00:02:02.194 00:02:02.194 Build targets in project: 85 00:02:02.194 00:02:02.194 DPDK 24.03.0 00:02:02.194 00:02:02.194 User defined options 00:02:02.194 buildtype : debug 00:02:02.194 default_library : shared 00:02:02.194 libdir : lib 00:02:02.194 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:02.194 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:02.194 c_link_args : 00:02:02.194 cpu_instruction_set: native 00:02:02.194 disable_apps : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:02:02.194 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:02:02.194 enable_docs : false 00:02:02.194 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:02.194 enable_kmods : false 00:02:02.194 max_lcores : 128 00:02:02.194 tests : false 00:02:02.194 00:02:02.194 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:02.194 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:02:02.457 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:02.457 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:02.457 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 
00:02:02.457 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:02.457 [5/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:02.457 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:02.457 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:02.457 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:02.457 [9/268] Linking static target lib/librte_kvargs.a 00:02:02.457 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:02.457 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:02.457 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:02.457 [13/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:02.457 [14/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:02.457 [15/268] Linking static target lib/librte_log.a 00:02:02.457 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:03.029 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.294 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:03.294 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:03.294 [20/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:03.294 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:03.295 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:03.295 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:03.295 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:03.295 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:03.295 [26/268] Compiling C object 
lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:03.295 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:03.295 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:03.295 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:03.295 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:03.295 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:03.295 [32/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:03.295 [33/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:03.295 [34/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:03.295 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:03.295 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:03.295 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:03.295 [38/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:03.295 [39/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:03.295 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:03.295 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:03.295 [42/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:03.295 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:03.295 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:03.295 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:03.295 [46/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:03.295 [47/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:03.295 [48/268] Compiling C object 
lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:03.295 [49/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:03.295 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:03.295 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:03.295 [52/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:03.295 [53/268] Linking static target lib/librte_telemetry.a 00:02:03.295 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:03.295 [55/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:03.295 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:03.295 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:03.554 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:03.554 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:03.554 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:03.554 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:03.554 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:03.554 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:03.554 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:03.554 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:03.554 [66/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.815 [67/268] Linking target lib/librte_log.so.24.1 00:02:03.816 [68/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:03.816 [69/268] Linking static target lib/librte_pci.a 00:02:03.816 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:04.078 [71/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:04.078 [72/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:04.078 [73/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:04.078 [74/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:04.078 [75/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:04.078 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:04.078 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:04.078 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:04.078 [79/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:04.078 [80/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:04.078 [81/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:04.337 [82/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:04.337 [83/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:04.337 [84/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:04.337 [85/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:04.337 [86/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:04.337 [87/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:04.338 [88/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:04.338 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:04.338 [90/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:04.338 [91/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:04.338 [92/268] Linking target lib/librte_kvargs.so.24.1 00:02:04.338 [93/268] Linking static target lib/librte_ring.a 00:02:04.338 [94/268] Compiling C object 
lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:04.338 [95/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:04.338 [96/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:04.338 [97/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:04.338 [98/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:04.338 [99/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:04.338 [100/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:04.338 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:04.338 [102/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:04.338 [103/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:04.338 [104/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:04.338 [105/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:04.338 [106/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.338 [107/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.338 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:04.338 [109/268] Linking static target lib/librte_meter.a 00:02:04.598 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:04.598 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:04.598 [112/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:04.598 [113/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:04.598 [114/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:04.598 [115/268] Linking static target lib/librte_mempool.a 00:02:04.598 [116/268] Linking target lib/librte_telemetry.so.24.1 00:02:04.598 [117/268] Linking static target 
lib/librte_rcu.a 00:02:04.598 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:04.598 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:04.598 [120/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:04.598 [121/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:04.598 [122/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:04.598 [123/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:04.598 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:04.598 [125/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:04.598 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:04.598 [127/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:04.598 [128/268] Linking static target lib/librte_eal.a 00:02:04.861 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:04.861 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:04.861 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:04.861 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:04.861 [133/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:04.861 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:04.861 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:04.861 [136/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:04.861 [137/268] Linking static target lib/librte_net.a 00:02:04.861 [138/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.120 [139/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.120 [140/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:05.120 [141/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:05.120 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:05.120 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:05.120 [144/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.120 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:05.120 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:05.120 [147/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:05.120 [148/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:05.380 [149/268] Linking static target lib/librte_cmdline.a 00:02:05.380 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:05.380 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:05.380 [152/268] Linking static target lib/librte_timer.a 00:02:05.380 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:05.380 [154/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:05.380 [155/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:05.380 [156/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.380 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:05.380 [158/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:05.638 [159/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:05.638 [160/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:05.638 [161/268] Linking static target lib/librte_dmadev.a 00:02:05.638 [162/268] Compiling C object 
lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:05.638 [163/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:05.638 [164/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:05.638 [165/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:05.638 [166/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:05.638 [167/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:05.638 [168/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.638 [169/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:05.638 [170/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:05.638 [171/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:05.638 [172/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:05.638 [173/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:05.638 [174/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:05.638 [175/268] Linking static target lib/librte_power.a 00:02:05.895 [176/268] Linking static target lib/librte_compressdev.a 00:02:05.895 [177/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:05.895 [178/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.895 [179/268] Linking static target lib/librte_hash.a 00:02:05.895 [180/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:05.895 [181/268] Linking static target lib/librte_reorder.a 00:02:05.895 [182/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:05.895 [183/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:05.895 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:05.895 
[185/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:05.895 [186/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:05.895 [187/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:06.152 [188/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:06.152 [189/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.152 [190/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:06.152 [191/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:06.152 [192/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:06.152 [193/268] Linking static target lib/librte_mbuf.a 00:02:06.152 [194/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.152 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:06.152 [196/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:06.152 [197/268] Linking static target lib/librte_security.a 00:02:06.152 [198/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:06.152 [199/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.152 [200/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:06.152 [201/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:06.152 [202/268] Linking static target drivers/librte_bus_vdev.a 00:02:06.152 [203/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.152 [204/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:06.152 [205/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:06.410 [206/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to 
capture output) 00:02:06.410 [207/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.410 [208/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:06.410 [209/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:06.410 [210/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:06.410 [211/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:06.410 [212/268] Linking static target drivers/librte_bus_pci.a 00:02:06.410 [213/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:06.410 [214/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:06.410 [215/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.410 [216/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.668 [217/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:06.668 [218/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.668 [219/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:06.668 [220/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:06.668 [221/268] Linking static target drivers/librte_mempool_ring.a 00:02:06.668 [222/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:06.668 [223/268] Linking static target lib/librte_cryptodev.a 00:02:06.668 [224/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:06.668 [225/268] Linking static target lib/librte_ethdev.a 00:02:06.668 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.041 [227/268] Generating 
lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.414 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:10.808 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.066 [230/268] Linking target lib/librte_eal.so.24.1 00:02:11.066 [231/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.066 [232/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:11.066 [233/268] Linking target lib/librte_ring.so.24.1 00:02:11.066 [234/268] Linking target lib/librte_timer.so.24.1 00:02:11.066 [235/268] Linking target lib/librte_meter.so.24.1 00:02:11.066 [236/268] Linking target lib/librte_pci.so.24.1 00:02:11.066 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:11.066 [238/268] Linking target lib/librte_dmadev.so.24.1 00:02:11.325 [239/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:11.325 [240/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:11.325 [241/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:11.325 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:11.325 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:11.325 [244/268] Linking target lib/librte_rcu.so.24.1 00:02:11.325 [245/268] Linking target lib/librte_mempool.so.24.1 00:02:11.325 [246/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:11.584 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:11.584 [248/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:11.584 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:11.584 [250/268] Linking target lib/librte_mbuf.so.24.1 00:02:11.584 [251/268] 
Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:11.584 [252/268] Linking target lib/librte_reorder.so.24.1 00:02:11.584 [253/268] Linking target lib/librte_compressdev.so.24.1 00:02:11.584 [254/268] Linking target lib/librte_net.so.24.1 00:02:11.584 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:02:11.842 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:11.842 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:11.842 [258/268] Linking target lib/librte_hash.so.24.1 00:02:11.842 [259/268] Linking target lib/librte_cmdline.so.24.1 00:02:11.842 [260/268] Linking target lib/librte_security.so.24.1 00:02:11.842 [261/268] Linking target lib/librte_ethdev.so.24.1 00:02:11.842 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:12.156 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:12.156 [264/268] Linking target lib/librte_power.so.24.1 00:02:15.438 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:15.438 [266/268] Linking static target lib/librte_vhost.a 00:02:16.004 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.004 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:16.004 INFO: autodetecting backend as ninja 00:02:16.004 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:02:37.989 CC lib/log/log.o 00:02:37.989 CC lib/log/log_flags.o 00:02:37.989 CC lib/log/log_deprecated.o 00:02:37.989 CC lib/ut_mock/mock.o 00:02:37.989 CC lib/ut/ut.o 00:02:37.989 LIB libspdk_log.a 00:02:37.989 LIB libspdk_ut.a 00:02:37.989 LIB libspdk_ut_mock.a 00:02:37.989 SO libspdk_log.so.7.1 00:02:37.989 SO libspdk_ut_mock.so.6.0 00:02:37.989 SO libspdk_ut.so.2.0 00:02:37.989 SYMLINK 
libspdk_ut_mock.so 00:02:37.989 SYMLINK libspdk_ut.so 00:02:37.989 SYMLINK libspdk_log.so 00:02:37.989 CC lib/ioat/ioat.o 00:02:37.989 CXX lib/trace_parser/trace.o 00:02:37.989 CC lib/util/base64.o 00:02:37.989 CC lib/dma/dma.o 00:02:37.989 CC lib/util/bit_array.o 00:02:37.989 CC lib/util/cpuset.o 00:02:37.989 CC lib/util/crc16.o 00:02:37.989 CC lib/util/crc32.o 00:02:37.989 CC lib/util/crc32c.o 00:02:37.989 CC lib/util/crc32_ieee.o 00:02:37.989 CC lib/util/crc64.o 00:02:37.989 CC lib/util/dif.o 00:02:37.989 CC lib/util/fd.o 00:02:37.989 CC lib/util/fd_group.o 00:02:37.989 CC lib/util/file.o 00:02:37.989 CC lib/util/hexlify.o 00:02:37.989 CC lib/util/iov.o 00:02:37.989 CC lib/util/math.o 00:02:37.989 CC lib/util/net.o 00:02:37.989 CC lib/util/pipe.o 00:02:37.989 CC lib/util/strerror_tls.o 00:02:37.989 CC lib/util/string.o 00:02:37.989 CC lib/util/xor.o 00:02:37.989 CC lib/util/uuid.o 00:02:37.989 CC lib/util/md5.o 00:02:37.989 CC lib/util/zipf.o 00:02:37.989 CC lib/vfio_user/host/vfio_user_pci.o 00:02:37.989 CC lib/vfio_user/host/vfio_user.o 00:02:37.990 LIB libspdk_dma.a 00:02:37.990 SO libspdk_dma.so.5.0 00:02:37.990 SYMLINK libspdk_dma.so 00:02:37.990 LIB libspdk_ioat.a 00:02:37.990 SO libspdk_ioat.so.7.0 00:02:37.990 SYMLINK libspdk_ioat.so 00:02:37.990 LIB libspdk_vfio_user.a 00:02:37.990 SO libspdk_vfio_user.so.5.0 00:02:37.990 SYMLINK libspdk_vfio_user.so 00:02:37.990 LIB libspdk_util.a 00:02:37.990 SO libspdk_util.so.10.1 00:02:37.990 SYMLINK libspdk_util.so 00:02:37.990 CC lib/vmd/vmd.o 00:02:37.990 CC lib/json/json_parse.o 00:02:37.990 CC lib/conf/conf.o 00:02:37.990 CC lib/vmd/led.o 00:02:37.990 CC lib/rdma_utils/rdma_utils.o 00:02:37.990 CC lib/idxd/idxd.o 00:02:37.990 CC lib/env_dpdk/env.o 00:02:37.990 CC lib/json/json_util.o 00:02:37.990 CC lib/idxd/idxd_user.o 00:02:37.990 CC lib/json/json_write.o 00:02:37.990 CC lib/env_dpdk/memory.o 00:02:37.990 CC lib/idxd/idxd_kernel.o 00:02:37.990 CC lib/env_dpdk/pci.o 00:02:37.990 CC lib/env_dpdk/init.o 
00:02:37.990 CC lib/env_dpdk/threads.o 00:02:37.990 CC lib/env_dpdk/pci_ioat.o 00:02:37.990 CC lib/env_dpdk/pci_virtio.o 00:02:37.990 CC lib/env_dpdk/pci_vmd.o 00:02:37.990 CC lib/env_dpdk/pci_idxd.o 00:02:37.990 CC lib/env_dpdk/pci_event.o 00:02:37.990 CC lib/env_dpdk/sigbus_handler.o 00:02:37.990 CC lib/env_dpdk/pci_dpdk.o 00:02:37.990 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:37.990 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:37.990 LIB libspdk_trace_parser.a 00:02:37.990 SO libspdk_trace_parser.so.6.0 00:02:37.990 SYMLINK libspdk_trace_parser.so 00:02:37.990 LIB libspdk_conf.a 00:02:37.990 SO libspdk_conf.so.6.0 00:02:37.990 LIB libspdk_rdma_utils.a 00:02:37.990 SO libspdk_rdma_utils.so.1.0 00:02:37.990 SYMLINK libspdk_conf.so 00:02:37.990 LIB libspdk_json.a 00:02:37.990 SO libspdk_json.so.6.0 00:02:37.990 SYMLINK libspdk_rdma_utils.so 00:02:37.990 SYMLINK libspdk_json.so 00:02:37.990 CC lib/rdma_provider/common.o 00:02:37.990 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:37.990 CC lib/jsonrpc/jsonrpc_server.o 00:02:37.990 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:37.990 CC lib/jsonrpc/jsonrpc_client.o 00:02:37.990 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:37.990 LIB libspdk_idxd.a 00:02:37.990 LIB libspdk_vmd.a 00:02:37.990 SO libspdk_idxd.so.12.1 00:02:37.990 SO libspdk_vmd.so.6.0 00:02:37.990 SYMLINK libspdk_idxd.so 00:02:37.990 SYMLINK libspdk_vmd.so 00:02:37.990 LIB libspdk_rdma_provider.a 00:02:37.990 SO libspdk_rdma_provider.so.7.0 00:02:37.990 LIB libspdk_jsonrpc.a 00:02:37.990 SYMLINK libspdk_rdma_provider.so 00:02:37.990 SO libspdk_jsonrpc.so.6.0 00:02:37.990 SYMLINK libspdk_jsonrpc.so 00:02:37.990 CC lib/rpc/rpc.o 00:02:37.990 LIB libspdk_rpc.a 00:02:37.990 SO libspdk_rpc.so.6.0 00:02:37.990 SYMLINK libspdk_rpc.so 00:02:37.990 CC lib/notify/notify.o 00:02:37.990 CC lib/keyring/keyring.o 00:02:37.990 CC lib/notify/notify_rpc.o 00:02:37.990 CC lib/trace/trace.o 00:02:37.990 CC lib/keyring/keyring_rpc.o 00:02:37.990 CC lib/trace/trace_flags.o 00:02:37.990 CC 
lib/trace/trace_rpc.o 00:02:37.990 LIB libspdk_notify.a 00:02:37.990 SO libspdk_notify.so.6.0 00:02:37.990 SYMLINK libspdk_notify.so 00:02:37.990 LIB libspdk_keyring.a 00:02:37.990 SO libspdk_keyring.so.2.0 00:02:37.990 LIB libspdk_trace.a 00:02:37.990 SO libspdk_trace.so.11.0 00:02:37.990 SYMLINK libspdk_keyring.so 00:02:37.990 SYMLINK libspdk_trace.so 00:02:38.248 CC lib/thread/thread.o 00:02:38.248 CC lib/thread/iobuf.o 00:02:38.248 CC lib/sock/sock.o 00:02:38.248 CC lib/sock/sock_rpc.o 00:02:38.248 LIB libspdk_env_dpdk.a 00:02:38.248 SO libspdk_env_dpdk.so.15.1 00:02:38.504 SYMLINK libspdk_env_dpdk.so 00:02:38.761 LIB libspdk_sock.a 00:02:38.761 SO libspdk_sock.so.10.0 00:02:38.761 SYMLINK libspdk_sock.so 00:02:39.019 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:39.019 CC lib/nvme/nvme_ctrlr.o 00:02:39.019 CC lib/nvme/nvme_fabric.o 00:02:39.019 CC lib/nvme/nvme_ns_cmd.o 00:02:39.019 CC lib/nvme/nvme_ns.o 00:02:39.019 CC lib/nvme/nvme_pcie_common.o 00:02:39.019 CC lib/nvme/nvme_pcie.o 00:02:39.019 CC lib/nvme/nvme_qpair.o 00:02:39.019 CC lib/nvme/nvme.o 00:02:39.019 CC lib/nvme/nvme_quirks.o 00:02:39.019 CC lib/nvme/nvme_transport.o 00:02:39.019 CC lib/nvme/nvme_discovery.o 00:02:39.019 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:39.019 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:39.019 CC lib/nvme/nvme_tcp.o 00:02:39.019 CC lib/nvme/nvme_opal.o 00:02:39.019 CC lib/nvme/nvme_io_msg.o 00:02:39.019 CC lib/nvme/nvme_poll_group.o 00:02:39.019 CC lib/nvme/nvme_zns.o 00:02:39.019 CC lib/nvme/nvme_stubs.o 00:02:39.019 CC lib/nvme/nvme_auth.o 00:02:39.019 CC lib/nvme/nvme_cuse.o 00:02:39.019 CC lib/nvme/nvme_vfio_user.o 00:02:39.019 CC lib/nvme/nvme_rdma.o 00:02:39.954 LIB libspdk_thread.a 00:02:39.954 SO libspdk_thread.so.11.0 00:02:39.954 SYMLINK libspdk_thread.so 00:02:40.212 CC lib/blob/blobstore.o 00:02:40.212 CC lib/virtio/virtio.o 00:02:40.212 CC lib/accel/accel.o 00:02:40.212 CC lib/blob/request.o 00:02:40.212 CC lib/fsdev/fsdev.o 00:02:40.212 CC lib/virtio/virtio_vhost_user.o 
00:02:40.212 CC lib/accel/accel_rpc.o 00:02:40.212 CC lib/blob/zeroes.o 00:02:40.212 CC lib/virtio/virtio_vfio_user.o 00:02:40.212 CC lib/accel/accel_sw.o 00:02:40.212 CC lib/blob/blob_bs_dev.o 00:02:40.212 CC lib/fsdev/fsdev_io.o 00:02:40.212 CC lib/virtio/virtio_pci.o 00:02:40.212 CC lib/fsdev/fsdev_rpc.o 00:02:40.212 CC lib/vfu_tgt/tgt_endpoint.o 00:02:40.212 CC lib/vfu_tgt/tgt_rpc.o 00:02:40.212 CC lib/init/json_config.o 00:02:40.212 CC lib/init/subsystem.o 00:02:40.212 CC lib/init/subsystem_rpc.o 00:02:40.212 CC lib/init/rpc.o 00:02:40.470 LIB libspdk_init.a 00:02:40.470 SO libspdk_init.so.6.0 00:02:40.470 LIB libspdk_virtio.a 00:02:40.470 LIB libspdk_vfu_tgt.a 00:02:40.470 SYMLINK libspdk_init.so 00:02:40.470 SO libspdk_virtio.so.7.0 00:02:40.470 SO libspdk_vfu_tgt.so.3.0 00:02:40.470 SYMLINK libspdk_vfu_tgt.so 00:02:40.728 SYMLINK libspdk_virtio.so 00:02:40.728 CC lib/event/app.o 00:02:40.728 CC lib/event/reactor.o 00:02:40.728 CC lib/event/log_rpc.o 00:02:40.728 CC lib/event/app_rpc.o 00:02:40.728 CC lib/event/scheduler_static.o 00:02:40.986 LIB libspdk_fsdev.a 00:02:40.986 SO libspdk_fsdev.so.2.0 00:02:40.986 SYMLINK libspdk_fsdev.so 00:02:40.986 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:41.244 LIB libspdk_event.a 00:02:41.244 SO libspdk_event.so.14.0 00:02:41.244 SYMLINK libspdk_event.so 00:02:41.244 LIB libspdk_accel.a 00:02:41.244 SO libspdk_accel.so.16.0 00:02:41.502 LIB libspdk_nvme.a 00:02:41.502 SYMLINK libspdk_accel.so 00:02:41.502 SO libspdk_nvme.so.15.0 00:02:41.502 CC lib/bdev/bdev.o 00:02:41.502 CC lib/bdev/bdev_rpc.o 00:02:41.502 CC lib/bdev/bdev_zone.o 00:02:41.502 CC lib/bdev/part.o 00:02:41.502 CC lib/bdev/scsi_nvme.o 00:02:41.760 SYMLINK libspdk_nvme.so 00:02:41.760 LIB libspdk_fuse_dispatcher.a 00:02:41.760 SO libspdk_fuse_dispatcher.so.1.0 00:02:41.760 SYMLINK libspdk_fuse_dispatcher.so 00:02:43.679 LIB libspdk_blob.a 00:02:43.679 SO libspdk_blob.so.12.0 00:02:43.679 SYMLINK libspdk_blob.so 00:02:43.938 CC lib/lvol/lvol.o 
00:02:43.938 CC lib/blobfs/blobfs.o 00:02:43.938 CC lib/blobfs/tree.o 00:02:44.196 LIB libspdk_bdev.a 00:02:44.196 SO libspdk_bdev.so.17.0 00:02:44.196 SYMLINK libspdk_bdev.so 00:02:44.465 CC lib/nbd/nbd.o 00:02:44.465 CC lib/ublk/ublk.o 00:02:44.465 CC lib/nbd/nbd_rpc.o 00:02:44.465 CC lib/scsi/dev.o 00:02:44.465 CC lib/ublk/ublk_rpc.o 00:02:44.465 CC lib/nvmf/ctrlr.o 00:02:44.465 CC lib/ftl/ftl_core.o 00:02:44.465 CC lib/scsi/lun.o 00:02:44.465 CC lib/nvmf/ctrlr_discovery.o 00:02:44.465 CC lib/ftl/ftl_init.o 00:02:44.465 CC lib/nvmf/ctrlr_bdev.o 00:02:44.465 CC lib/scsi/port.o 00:02:44.465 CC lib/nvmf/subsystem.o 00:02:44.465 CC lib/ftl/ftl_layout.o 00:02:44.465 CC lib/scsi/scsi.o 00:02:44.465 CC lib/nvmf/nvmf.o 00:02:44.465 CC lib/ftl/ftl_debug.o 00:02:44.465 CC lib/nvmf/nvmf_rpc.o 00:02:44.465 CC lib/scsi/scsi_pr.o 00:02:44.465 CC lib/scsi/scsi_bdev.o 00:02:44.465 CC lib/nvmf/tcp.o 00:02:44.465 CC lib/ftl/ftl_io.o 00:02:44.465 CC lib/nvmf/transport.o 00:02:44.465 CC lib/ftl/ftl_sb.o 00:02:44.465 CC lib/nvmf/stubs.o 00:02:44.465 CC lib/scsi/scsi_rpc.o 00:02:44.465 CC lib/ftl/ftl_l2p.o 00:02:44.465 CC lib/ftl/ftl_l2p_flat.o 00:02:44.465 CC lib/scsi/task.o 00:02:44.465 CC lib/nvmf/mdns_server.o 00:02:44.465 CC lib/nvmf/vfio_user.o 00:02:44.465 CC lib/ftl/ftl_nv_cache.o 00:02:44.465 CC lib/nvmf/rdma.o 00:02:44.465 CC lib/ftl/ftl_band.o 00:02:44.465 CC lib/ftl/ftl_band_ops.o 00:02:44.465 CC lib/nvmf/auth.o 00:02:44.465 CC lib/ftl/ftl_writer.o 00:02:44.465 CC lib/ftl/ftl_rq.o 00:02:44.465 CC lib/ftl/ftl_reloc.o 00:02:44.465 CC lib/ftl/ftl_l2p_cache.o 00:02:44.465 CC lib/ftl/ftl_p2l.o 00:02:44.465 CC lib/ftl/ftl_p2l_log.o 00:02:44.465 CC lib/ftl/mngt/ftl_mngt.o 00:02:44.465 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:44.465 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:44.465 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:45.037 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:45.037 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:45.037 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:45.037 CC 
lib/ftl/mngt/ftl_mngt_l2p.o 00:02:45.037 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:45.037 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:45.037 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:45.037 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:45.037 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:45.037 CC lib/ftl/utils/ftl_conf.o 00:02:45.037 CC lib/ftl/utils/ftl_md.o 00:02:45.037 CC lib/ftl/utils/ftl_mempool.o 00:02:45.037 CC lib/ftl/utils/ftl_bitmap.o 00:02:45.037 CC lib/ftl/utils/ftl_property.o 00:02:45.037 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:45.037 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:45.037 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:45.037 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:45.037 LIB libspdk_blobfs.a 00:02:45.037 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:45.037 SO libspdk_blobfs.so.11.0 00:02:45.037 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:45.037 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:45.298 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:45.298 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:45.298 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:45.298 SYMLINK libspdk_blobfs.so 00:02:45.298 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:45.298 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:45.298 LIB libspdk_lvol.a 00:02:45.298 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:45.298 CC lib/ftl/base/ftl_base_dev.o 00:02:45.298 CC lib/ftl/base/ftl_base_bdev.o 00:02:45.298 CC lib/ftl/ftl_trace.o 00:02:45.298 SO libspdk_lvol.so.11.0 00:02:45.298 LIB libspdk_nbd.a 00:02:45.298 SO libspdk_nbd.so.7.0 00:02:45.298 SYMLINK libspdk_lvol.so 00:02:45.557 SYMLINK libspdk_nbd.so 00:02:45.557 LIB libspdk_scsi.a 00:02:45.557 SO libspdk_scsi.so.9.0 00:02:45.557 SYMLINK libspdk_scsi.so 00:02:45.557 LIB libspdk_ublk.a 00:02:45.557 SO libspdk_ublk.so.3.0 00:02:45.557 SYMLINK libspdk_ublk.so 00:02:45.817 CC lib/iscsi/conn.o 00:02:45.817 CC lib/vhost/vhost.o 00:02:45.817 CC lib/vhost/vhost_rpc.o 00:02:45.817 CC lib/iscsi/init_grp.o 00:02:45.817 CC lib/vhost/vhost_scsi.o 00:02:45.817 CC lib/iscsi/iscsi.o 00:02:45.817 
CC lib/vhost/vhost_blk.o 00:02:45.817 CC lib/iscsi/param.o 00:02:45.817 CC lib/vhost/rte_vhost_user.o 00:02:45.817 CC lib/iscsi/portal_grp.o 00:02:45.817 CC lib/iscsi/tgt_node.o 00:02:45.817 CC lib/iscsi/iscsi_subsystem.o 00:02:45.817 CC lib/iscsi/iscsi_rpc.o 00:02:45.817 CC lib/iscsi/task.o 00:02:46.076 LIB libspdk_ftl.a 00:02:46.076 SO libspdk_ftl.so.9.0 00:02:46.335 SYMLINK libspdk_ftl.so 00:02:46.901 LIB libspdk_vhost.a 00:02:47.159 SO libspdk_vhost.so.8.0 00:02:47.159 SYMLINK libspdk_vhost.so 00:02:47.159 LIB libspdk_nvmf.a 00:02:47.159 SO libspdk_nvmf.so.20.0 00:02:47.159 LIB libspdk_iscsi.a 00:02:47.159 SO libspdk_iscsi.so.8.0 00:02:47.416 SYMLINK libspdk_nvmf.so 00:02:47.416 SYMLINK libspdk_iscsi.so 00:02:47.674 CC module/env_dpdk/env_dpdk_rpc.o 00:02:47.674 CC module/vfu_device/vfu_virtio.o 00:02:47.674 CC module/vfu_device/vfu_virtio_blk.o 00:02:47.674 CC module/vfu_device/vfu_virtio_scsi.o 00:02:47.674 CC module/vfu_device/vfu_virtio_rpc.o 00:02:47.674 CC module/vfu_device/vfu_virtio_fs.o 00:02:47.674 CC module/accel/error/accel_error.o 00:02:47.674 CC module/accel/error/accel_error_rpc.o 00:02:47.674 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:47.674 CC module/accel/ioat/accel_ioat.o 00:02:47.674 CC module/keyring/linux/keyring.o 00:02:47.675 CC module/blob/bdev/blob_bdev.o 00:02:47.675 CC module/sock/posix/posix.o 00:02:47.675 CC module/accel/ioat/accel_ioat_rpc.o 00:02:47.675 CC module/keyring/linux/keyring_rpc.o 00:02:47.675 CC module/accel/dsa/accel_dsa.o 00:02:47.675 CC module/fsdev/aio/fsdev_aio.o 00:02:47.675 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:47.675 CC module/accel/dsa/accel_dsa_rpc.o 00:02:47.675 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:47.675 CC module/keyring/file/keyring.o 00:02:47.675 CC module/scheduler/gscheduler/gscheduler.o 00:02:47.675 CC module/fsdev/aio/linux_aio_mgr.o 00:02:47.675 CC module/keyring/file/keyring_rpc.o 00:02:47.675 CC module/accel/iaa/accel_iaa.o 00:02:47.675 CC 
module/accel/iaa/accel_iaa_rpc.o 00:02:47.932 LIB libspdk_env_dpdk_rpc.a 00:02:47.932 SO libspdk_env_dpdk_rpc.so.6.0 00:02:47.932 SYMLINK libspdk_env_dpdk_rpc.so 00:02:47.932 LIB libspdk_keyring_file.a 00:02:47.932 LIB libspdk_scheduler_gscheduler.a 00:02:47.932 LIB libspdk_scheduler_dpdk_governor.a 00:02:47.932 SO libspdk_scheduler_gscheduler.so.4.0 00:02:47.932 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:47.932 SO libspdk_keyring_file.so.2.0 00:02:47.932 LIB libspdk_keyring_linux.a 00:02:47.932 LIB libspdk_accel_ioat.a 00:02:47.932 LIB libspdk_scheduler_dynamic.a 00:02:47.932 LIB libspdk_accel_error.a 00:02:47.932 SO libspdk_keyring_linux.so.1.0 00:02:47.932 SO libspdk_accel_ioat.so.6.0 00:02:47.932 SO libspdk_scheduler_dynamic.so.4.0 00:02:47.932 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:47.932 SYMLINK libspdk_scheduler_gscheduler.so 00:02:47.932 SYMLINK libspdk_keyring_file.so 00:02:47.932 SO libspdk_accel_error.so.2.0 00:02:48.191 SYMLINK libspdk_keyring_linux.so 00:02:48.191 SYMLINK libspdk_accel_ioat.so 00:02:48.191 SYMLINK libspdk_scheduler_dynamic.so 00:02:48.191 SYMLINK libspdk_accel_error.so 00:02:48.191 LIB libspdk_blob_bdev.a 00:02:48.191 LIB libspdk_accel_dsa.a 00:02:48.191 LIB libspdk_accel_iaa.a 00:02:48.191 SO libspdk_blob_bdev.so.12.0 00:02:48.191 SO libspdk_accel_dsa.so.5.0 00:02:48.191 SO libspdk_accel_iaa.so.3.0 00:02:48.191 SYMLINK libspdk_blob_bdev.so 00:02:48.191 SYMLINK libspdk_accel_dsa.so 00:02:48.191 SYMLINK libspdk_accel_iaa.so 00:02:48.456 LIB libspdk_vfu_device.a 00:02:48.456 SO libspdk_vfu_device.so.3.0 00:02:48.456 CC module/bdev/delay/vbdev_delay.o 00:02:48.456 CC module/bdev/malloc/bdev_malloc.o 00:02:48.456 CC module/bdev/error/vbdev_error.o 00:02:48.456 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:48.456 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:48.456 CC module/bdev/error/vbdev_error_rpc.o 00:02:48.456 CC module/bdev/null/bdev_null.o 00:02:48.456 CC module/bdev/null/bdev_null_rpc.o 00:02:48.456 CC 
module/bdev/raid/bdev_raid.o 00:02:48.456 CC module/blobfs/bdev/blobfs_bdev.o 00:02:48.456 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:48.456 CC module/bdev/raid/bdev_raid_rpc.o 00:02:48.456 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:48.456 CC module/bdev/gpt/gpt.o 00:02:48.456 CC module/bdev/raid/bdev_raid_sb.o 00:02:48.456 CC module/bdev/gpt/vbdev_gpt.o 00:02:48.456 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:48.456 CC module/bdev/lvol/vbdev_lvol.o 00:02:48.456 CC module/bdev/ftl/bdev_ftl.o 00:02:48.456 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:48.456 CC module/bdev/split/vbdev_split.o 00:02:48.456 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:48.456 CC module/bdev/raid/raid0.o 00:02:48.456 CC module/bdev/raid/raid1.o 00:02:48.456 CC module/bdev/split/vbdev_split_rpc.o 00:02:48.456 CC module/bdev/passthru/vbdev_passthru.o 00:02:48.456 CC module/bdev/nvme/bdev_nvme.o 00:02:48.456 CC module/bdev/raid/concat.o 00:02:48.456 CC module/bdev/iscsi/bdev_iscsi.o 00:02:48.456 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:48.456 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:48.456 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:48.456 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:48.456 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:48.456 CC module/bdev/nvme/nvme_rpc.o 00:02:48.456 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:48.456 CC module/bdev/nvme/bdev_mdns_client.o 00:02:48.456 CC module/bdev/nvme/vbdev_opal.o 00:02:48.456 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:48.456 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:48.456 CC module/bdev/aio/bdev_aio.o 00:02:48.456 CC module/bdev/aio/bdev_aio_rpc.o 00:02:48.456 SYMLINK libspdk_vfu_device.so 00:02:48.714 LIB libspdk_fsdev_aio.a 00:02:48.714 SO libspdk_fsdev_aio.so.1.0 00:02:48.714 LIB libspdk_sock_posix.a 00:02:48.714 SYMLINK libspdk_fsdev_aio.so 00:02:48.714 SO libspdk_sock_posix.so.6.0 00:02:48.714 LIB libspdk_blobfs_bdev.a 00:02:48.971 SO libspdk_blobfs_bdev.so.6.0 00:02:48.971 SYMLINK 
libspdk_sock_posix.so 00:02:48.971 LIB libspdk_bdev_split.a 00:02:48.971 SO libspdk_bdev_split.so.6.0 00:02:48.971 LIB libspdk_bdev_null.a 00:02:48.971 SYMLINK libspdk_blobfs_bdev.so 00:02:48.971 SO libspdk_bdev_null.so.6.0 00:02:48.971 LIB libspdk_bdev_error.a 00:02:48.971 SYMLINK libspdk_bdev_split.so 00:02:48.971 LIB libspdk_bdev_gpt.a 00:02:48.971 SO libspdk_bdev_error.so.6.0 00:02:48.971 LIB libspdk_bdev_aio.a 00:02:48.971 SO libspdk_bdev_gpt.so.6.0 00:02:48.971 SYMLINK libspdk_bdev_null.so 00:02:48.971 LIB libspdk_bdev_ftl.a 00:02:48.971 LIB libspdk_bdev_malloc.a 00:02:48.971 SO libspdk_bdev_aio.so.6.0 00:02:48.971 LIB libspdk_bdev_passthru.a 00:02:48.971 LIB libspdk_bdev_iscsi.a 00:02:48.971 LIB libspdk_bdev_zone_block.a 00:02:48.971 SYMLINK libspdk_bdev_error.so 00:02:48.971 SO libspdk_bdev_ftl.so.6.0 00:02:48.971 SO libspdk_bdev_malloc.so.6.0 00:02:48.971 SO libspdk_bdev_passthru.so.6.0 00:02:48.971 SO libspdk_bdev_iscsi.so.6.0 00:02:48.971 SYMLINK libspdk_bdev_gpt.so 00:02:48.971 SO libspdk_bdev_zone_block.so.6.0 00:02:48.971 LIB libspdk_bdev_delay.a 00:02:48.971 SYMLINK libspdk_bdev_aio.so 00:02:48.971 SYMLINK libspdk_bdev_ftl.so 00:02:48.971 SYMLINK libspdk_bdev_malloc.so 00:02:48.971 SO libspdk_bdev_delay.so.6.0 00:02:48.971 SYMLINK libspdk_bdev_passthru.so 00:02:48.971 SYMLINK libspdk_bdev_iscsi.so 00:02:49.229 SYMLINK libspdk_bdev_zone_block.so 00:02:49.229 SYMLINK libspdk_bdev_delay.so 00:02:49.229 LIB libspdk_bdev_virtio.a 00:02:49.229 SO libspdk_bdev_virtio.so.6.0 00:02:49.229 LIB libspdk_bdev_lvol.a 00:02:49.229 SO libspdk_bdev_lvol.so.6.0 00:02:49.229 SYMLINK libspdk_bdev_virtio.so 00:02:49.229 SYMLINK libspdk_bdev_lvol.so 00:02:49.796 LIB libspdk_bdev_raid.a 00:02:49.796 SO libspdk_bdev_raid.so.6.0 00:02:49.796 SYMLINK libspdk_bdev_raid.so 00:02:51.172 LIB libspdk_bdev_nvme.a 00:02:51.172 SO libspdk_bdev_nvme.so.7.1 00:02:51.431 SYMLINK libspdk_bdev_nvme.so 00:02:51.690 CC module/event/subsystems/iobuf/iobuf.o 00:02:51.690 CC 
module/event/subsystems/keyring/keyring.o 00:02:51.690 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:51.690 CC module/event/subsystems/vmd/vmd.o 00:02:51.690 CC module/event/subsystems/sock/sock.o 00:02:51.690 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:51.690 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:51.690 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:51.690 CC module/event/subsystems/scheduler/scheduler.o 00:02:51.690 CC module/event/subsystems/fsdev/fsdev.o 00:02:51.690 LIB libspdk_event_keyring.a 00:02:51.948 LIB libspdk_event_vhost_blk.a 00:02:51.948 LIB libspdk_event_vfu_tgt.a 00:02:51.948 LIB libspdk_event_fsdev.a 00:02:51.948 LIB libspdk_event_scheduler.a 00:02:51.948 LIB libspdk_event_vmd.a 00:02:51.948 LIB libspdk_event_sock.a 00:02:51.948 SO libspdk_event_keyring.so.1.0 00:02:51.948 SO libspdk_event_vhost_blk.so.3.0 00:02:51.948 SO libspdk_event_vfu_tgt.so.3.0 00:02:51.948 LIB libspdk_event_iobuf.a 00:02:51.948 SO libspdk_event_fsdev.so.1.0 00:02:51.948 SO libspdk_event_scheduler.so.4.0 00:02:51.948 SO libspdk_event_sock.so.5.0 00:02:51.948 SO libspdk_event_vmd.so.6.0 00:02:51.948 SO libspdk_event_iobuf.so.3.0 00:02:51.948 SYMLINK libspdk_event_keyring.so 00:02:51.948 SYMLINK libspdk_event_vhost_blk.so 00:02:51.948 SYMLINK libspdk_event_vfu_tgt.so 00:02:51.948 SYMLINK libspdk_event_fsdev.so 00:02:51.948 SYMLINK libspdk_event_sock.so 00:02:51.948 SYMLINK libspdk_event_scheduler.so 00:02:51.948 SYMLINK libspdk_event_vmd.so 00:02:51.948 SYMLINK libspdk_event_iobuf.so 00:02:52.207 CC module/event/subsystems/accel/accel.o 00:02:52.207 LIB libspdk_event_accel.a 00:02:52.207 SO libspdk_event_accel.so.6.0 00:02:52.207 SYMLINK libspdk_event_accel.so 00:02:52.465 CC module/event/subsystems/bdev/bdev.o 00:02:52.723 LIB libspdk_event_bdev.a 00:02:52.723 SO libspdk_event_bdev.so.6.0 00:02:52.723 SYMLINK libspdk_event_bdev.so 00:02:52.982 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:52.982 CC module/event/subsystems/ublk/ublk.o 
00:02:52.982 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:52.982 CC module/event/subsystems/scsi/scsi.o 00:02:52.982 CC module/event/subsystems/nbd/nbd.o 00:02:52.982 LIB libspdk_event_nbd.a 00:02:52.982 LIB libspdk_event_ublk.a 00:02:52.982 LIB libspdk_event_scsi.a 00:02:52.982 SO libspdk_event_nbd.so.6.0 00:02:52.982 SO libspdk_event_ublk.so.3.0 00:02:52.982 SO libspdk_event_scsi.so.6.0 00:02:52.982 SYMLINK libspdk_event_nbd.so 00:02:52.982 SYMLINK libspdk_event_ublk.so 00:02:53.240 SYMLINK libspdk_event_scsi.so 00:02:53.240 LIB libspdk_event_nvmf.a 00:02:53.240 SO libspdk_event_nvmf.so.6.0 00:02:53.240 SYMLINK libspdk_event_nvmf.so 00:02:53.240 CC module/event/subsystems/iscsi/iscsi.o 00:02:53.240 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:53.499 LIB libspdk_event_vhost_scsi.a 00:02:53.499 LIB libspdk_event_iscsi.a 00:02:53.499 SO libspdk_event_vhost_scsi.so.3.0 00:02:53.499 SO libspdk_event_iscsi.so.6.0 00:02:53.499 SYMLINK libspdk_event_vhost_scsi.so 00:02:53.499 SYMLINK libspdk_event_iscsi.so 00:02:53.765 SO libspdk.so.6.0 00:02:53.765 SYMLINK libspdk.so 00:02:53.765 CXX app/trace/trace.o 00:02:53.765 CC test/rpc_client/rpc_client_test.o 00:02:53.765 CC app/trace_record/trace_record.o 00:02:53.765 CC app/spdk_top/spdk_top.o 00:02:53.765 CC app/spdk_lspci/spdk_lspci.o 00:02:53.765 CC app/spdk_nvme_discover/discovery_aer.o 00:02:53.765 TEST_HEADER include/spdk/accel.h 00:02:53.765 CC app/spdk_nvme_identify/identify.o 00:02:53.765 TEST_HEADER include/spdk/accel_module.h 00:02:53.765 TEST_HEADER include/spdk/barrier.h 00:02:53.765 TEST_HEADER include/spdk/assert.h 00:02:53.765 TEST_HEADER include/spdk/base64.h 00:02:53.765 TEST_HEADER include/spdk/bdev.h 00:02:53.765 TEST_HEADER include/spdk/bdev_module.h 00:02:53.765 TEST_HEADER include/spdk/bdev_zone.h 00:02:53.765 TEST_HEADER include/spdk/bit_array.h 00:02:53.765 TEST_HEADER include/spdk/bit_pool.h 00:02:53.765 CC app/spdk_nvme_perf/perf.o 00:02:53.765 TEST_HEADER include/spdk/blob_bdev.h 
00:02:53.765 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:53.765 TEST_HEADER include/spdk/blobfs.h 00:02:53.765 TEST_HEADER include/spdk/blob.h 00:02:53.765 TEST_HEADER include/spdk/conf.h 00:02:53.765 TEST_HEADER include/spdk/config.h 00:02:53.765 TEST_HEADER include/spdk/cpuset.h 00:02:53.765 TEST_HEADER include/spdk/crc16.h 00:02:53.765 TEST_HEADER include/spdk/crc32.h 00:02:53.765 TEST_HEADER include/spdk/crc64.h 00:02:53.765 TEST_HEADER include/spdk/dif.h 00:02:53.765 TEST_HEADER include/spdk/dma.h 00:02:53.765 TEST_HEADER include/spdk/endian.h 00:02:53.765 TEST_HEADER include/spdk/env_dpdk.h 00:02:53.765 TEST_HEADER include/spdk/env.h 00:02:53.765 TEST_HEADER include/spdk/event.h 00:02:53.765 TEST_HEADER include/spdk/fd_group.h 00:02:53.765 TEST_HEADER include/spdk/fd.h 00:02:53.765 TEST_HEADER include/spdk/file.h 00:02:53.765 TEST_HEADER include/spdk/fsdev.h 00:02:53.765 TEST_HEADER include/spdk/fsdev_module.h 00:02:53.765 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:53.765 TEST_HEADER include/spdk/ftl.h 00:02:53.765 TEST_HEADER include/spdk/gpt_spec.h 00:02:53.765 TEST_HEADER include/spdk/hexlify.h 00:02:53.765 TEST_HEADER include/spdk/histogram_data.h 00:02:53.765 TEST_HEADER include/spdk/idxd.h 00:02:53.765 TEST_HEADER include/spdk/init.h 00:02:53.765 TEST_HEADER include/spdk/idxd_spec.h 00:02:53.765 TEST_HEADER include/spdk/ioat.h 00:02:53.765 TEST_HEADER include/spdk/ioat_spec.h 00:02:53.765 TEST_HEADER include/spdk/iscsi_spec.h 00:02:53.765 TEST_HEADER include/spdk/json.h 00:02:53.765 TEST_HEADER include/spdk/jsonrpc.h 00:02:53.765 TEST_HEADER include/spdk/keyring.h 00:02:53.765 TEST_HEADER include/spdk/keyring_module.h 00:02:53.765 TEST_HEADER include/spdk/likely.h 00:02:53.765 TEST_HEADER include/spdk/log.h 00:02:53.765 TEST_HEADER include/spdk/lvol.h 00:02:53.765 TEST_HEADER include/spdk/md5.h 00:02:53.765 TEST_HEADER include/spdk/memory.h 00:02:53.765 TEST_HEADER include/spdk/mmio.h 00:02:53.765 TEST_HEADER include/spdk/nbd.h 00:02:53.765 
TEST_HEADER include/spdk/net.h 00:02:53.765 TEST_HEADER include/spdk/nvme.h 00:02:53.765 TEST_HEADER include/spdk/notify.h 00:02:53.765 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:53.765 TEST_HEADER include/spdk/nvme_intel.h 00:02:53.765 TEST_HEADER include/spdk/nvme_spec.h 00:02:53.765 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:53.765 TEST_HEADER include/spdk/nvme_zns.h 00:02:53.765 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:53.765 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:53.765 TEST_HEADER include/spdk/nvmf.h 00:02:53.765 TEST_HEADER include/spdk/nvmf_spec.h 00:02:53.765 TEST_HEADER include/spdk/nvmf_transport.h 00:02:53.765 TEST_HEADER include/spdk/opal.h 00:02:53.765 TEST_HEADER include/spdk/opal_spec.h 00:02:53.765 TEST_HEADER include/spdk/pci_ids.h 00:02:53.765 TEST_HEADER include/spdk/pipe.h 00:02:53.765 TEST_HEADER include/spdk/queue.h 00:02:53.765 TEST_HEADER include/spdk/reduce.h 00:02:53.765 TEST_HEADER include/spdk/rpc.h 00:02:53.765 TEST_HEADER include/spdk/scheduler.h 00:02:53.765 TEST_HEADER include/spdk/scsi.h 00:02:53.765 TEST_HEADER include/spdk/scsi_spec.h 00:02:53.765 TEST_HEADER include/spdk/sock.h 00:02:53.765 TEST_HEADER include/spdk/stdinc.h 00:02:53.765 TEST_HEADER include/spdk/string.h 00:02:53.765 TEST_HEADER include/spdk/thread.h 00:02:53.765 TEST_HEADER include/spdk/trace.h 00:02:53.765 TEST_HEADER include/spdk/trace_parser.h 00:02:53.765 TEST_HEADER include/spdk/tree.h 00:02:53.765 TEST_HEADER include/spdk/ublk.h 00:02:53.765 TEST_HEADER include/spdk/util.h 00:02:53.765 TEST_HEADER include/spdk/uuid.h 00:02:53.765 TEST_HEADER include/spdk/version.h 00:02:53.765 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:53.765 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:53.765 TEST_HEADER include/spdk/vhost.h 00:02:53.765 TEST_HEADER include/spdk/vmd.h 00:02:53.765 TEST_HEADER include/spdk/xor.h 00:02:53.765 TEST_HEADER include/spdk/zipf.h 00:02:53.765 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:53.765 CXX test/cpp_headers/accel.o 
00:02:53.765 CXX test/cpp_headers/accel_module.o 00:02:53.765 CXX test/cpp_headers/assert.o 00:02:53.765 CXX test/cpp_headers/barrier.o 00:02:53.765 CXX test/cpp_headers/base64.o 00:02:53.765 CXX test/cpp_headers/bdev.o 00:02:53.765 CXX test/cpp_headers/bdev_module.o 00:02:53.765 CXX test/cpp_headers/bdev_zone.o 00:02:53.765 CXX test/cpp_headers/bit_array.o 00:02:53.765 CXX test/cpp_headers/bit_pool.o 00:02:53.765 CXX test/cpp_headers/blob_bdev.o 00:02:53.765 CXX test/cpp_headers/blobfs_bdev.o 00:02:53.765 CXX test/cpp_headers/blobfs.o 00:02:53.765 CXX test/cpp_headers/blob.o 00:02:53.765 CXX test/cpp_headers/conf.o 00:02:53.765 CXX test/cpp_headers/config.o 00:02:53.765 CXX test/cpp_headers/cpuset.o 00:02:53.765 CXX test/cpp_headers/crc16.o 00:02:54.030 CC app/spdk_dd/spdk_dd.o 00:02:54.030 CC app/nvmf_tgt/nvmf_main.o 00:02:54.030 CC app/iscsi_tgt/iscsi_tgt.o 00:02:54.030 CXX test/cpp_headers/crc32.o 00:02:54.030 CC app/spdk_tgt/spdk_tgt.o 00:02:54.030 CC test/thread/poller_perf/poller_perf.o 00:02:54.030 CC test/env/pci/pci_ut.o 00:02:54.030 CC test/app/histogram_perf/histogram_perf.o 00:02:54.030 CC examples/util/zipf/zipf.o 00:02:54.030 CC examples/ioat/verify/verify.o 00:02:54.030 CC test/env/vtophys/vtophys.o 00:02:54.030 CC test/env/memory/memory_ut.o 00:02:54.030 CC test/app/jsoncat/jsoncat.o 00:02:54.030 CC test/app/stub/stub.o 00:02:54.030 CC examples/ioat/perf/perf.o 00:02:54.030 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:54.030 CC app/fio/nvme/fio_plugin.o 00:02:54.030 CC test/app/bdev_svc/bdev_svc.o 00:02:54.030 CC test/dma/test_dma/test_dma.o 00:02:54.030 CC app/fio/bdev/fio_plugin.o 00:02:54.030 LINK spdk_lspci 00:02:54.030 CC test/env/mem_callbacks/mem_callbacks.o 00:02:54.030 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:54.294 LINK rpc_client_test 00:02:54.294 LINK spdk_nvme_discover 00:02:54.294 LINK poller_perf 00:02:54.294 CXX test/cpp_headers/crc64.o 00:02:54.294 LINK interrupt_tgt 00:02:54.294 CXX test/cpp_headers/dif.o 
00:02:54.294 LINK histogram_perf 00:02:54.294 CXX test/cpp_headers/dma.o 00:02:54.294 LINK vtophys 00:02:54.294 LINK jsoncat 00:02:54.294 LINK zipf 00:02:54.294 CXX test/cpp_headers/endian.o 00:02:54.294 LINK nvmf_tgt 00:02:54.294 CXX test/cpp_headers/env_dpdk.o 00:02:54.294 LINK env_dpdk_post_init 00:02:54.294 CXX test/cpp_headers/env.o 00:02:54.294 CXX test/cpp_headers/event.o 00:02:54.294 CXX test/cpp_headers/fd_group.o 00:02:54.294 CXX test/cpp_headers/fd.o 00:02:54.294 CXX test/cpp_headers/file.o 00:02:54.294 LINK spdk_trace_record 00:02:54.294 LINK stub 00:02:54.294 CXX test/cpp_headers/fsdev.o 00:02:54.294 LINK iscsi_tgt 00:02:54.556 CXX test/cpp_headers/fsdev_module.o 00:02:54.556 CXX test/cpp_headers/ftl.o 00:02:54.556 CXX test/cpp_headers/fuse_dispatcher.o 00:02:54.556 LINK bdev_svc 00:02:54.556 LINK spdk_tgt 00:02:54.556 CXX test/cpp_headers/gpt_spec.o 00:02:54.556 LINK verify 00:02:54.556 LINK ioat_perf 00:02:54.556 CXX test/cpp_headers/hexlify.o 00:02:54.556 CXX test/cpp_headers/histogram_data.o 00:02:54.556 CXX test/cpp_headers/idxd.o 00:02:54.556 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:54.556 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:54.556 CXX test/cpp_headers/idxd_spec.o 00:02:54.556 CXX test/cpp_headers/init.o 00:02:54.556 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:54.556 CXX test/cpp_headers/ioat.o 00:02:54.556 CXX test/cpp_headers/ioat_spec.o 00:02:54.556 CXX test/cpp_headers/iscsi_spec.o 00:02:54.556 CXX test/cpp_headers/json.o 00:02:54.821 LINK spdk_dd 00:02:54.821 CXX test/cpp_headers/jsonrpc.o 00:02:54.821 CXX test/cpp_headers/keyring.o 00:02:54.821 CXX test/cpp_headers/keyring_module.o 00:02:54.821 CXX test/cpp_headers/likely.o 00:02:54.821 CXX test/cpp_headers/log.o 00:02:54.821 CXX test/cpp_headers/lvol.o 00:02:54.821 CXX test/cpp_headers/md5.o 00:02:54.821 LINK pci_ut 00:02:54.821 CXX test/cpp_headers/memory.o 00:02:54.821 CXX test/cpp_headers/mmio.o 00:02:54.821 LINK spdk_trace 00:02:54.821 CXX test/cpp_headers/nbd.o 
00:02:54.821 CXX test/cpp_headers/net.o 00:02:54.821 CXX test/cpp_headers/notify.o 00:02:54.821 CXX test/cpp_headers/nvme.o 00:02:54.821 CXX test/cpp_headers/nvme_intel.o 00:02:54.821 CXX test/cpp_headers/nvme_ocssd.o 00:02:54.821 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:54.821 CXX test/cpp_headers/nvme_spec.o 00:02:54.821 CXX test/cpp_headers/nvme_zns.o 00:02:54.821 CXX test/cpp_headers/nvmf_cmd.o 00:02:54.821 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:55.085 CC test/event/reactor/reactor.o 00:02:55.085 CC test/event/reactor_perf/reactor_perf.o 00:02:55.085 CC test/event/event_perf/event_perf.o 00:02:55.085 CXX test/cpp_headers/nvmf.o 00:02:55.085 LINK nvme_fuzz 00:02:55.085 LINK spdk_nvme 00:02:55.085 CC test/event/app_repeat/app_repeat.o 00:02:55.085 CXX test/cpp_headers/nvmf_spec.o 00:02:55.085 CXX test/cpp_headers/nvmf_transport.o 00:02:55.085 CC test/event/scheduler/scheduler.o 00:02:55.085 CXX test/cpp_headers/opal.o 00:02:55.085 CC examples/vmd/lsvmd/lsvmd.o 00:02:55.085 CC examples/sock/hello_world/hello_sock.o 00:02:55.085 CC examples/idxd/perf/perf.o 00:02:55.085 CXX test/cpp_headers/opal_spec.o 00:02:55.085 CXX test/cpp_headers/pci_ids.o 00:02:55.085 CC examples/thread/thread/thread_ex.o 00:02:55.085 LINK test_dma 00:02:55.085 CXX test/cpp_headers/pipe.o 00:02:55.085 CXX test/cpp_headers/queue.o 00:02:55.085 CXX test/cpp_headers/rpc.o 00:02:55.085 CXX test/cpp_headers/reduce.o 00:02:55.085 CC examples/vmd/led/led.o 00:02:55.085 LINK spdk_bdev 00:02:55.085 CXX test/cpp_headers/scheduler.o 00:02:55.085 CXX test/cpp_headers/scsi.o 00:02:55.085 CXX test/cpp_headers/scsi_spec.o 00:02:55.085 CXX test/cpp_headers/sock.o 00:02:55.085 CXX test/cpp_headers/stdinc.o 00:02:55.346 CXX test/cpp_headers/string.o 00:02:55.346 CXX test/cpp_headers/thread.o 00:02:55.346 CXX test/cpp_headers/trace.o 00:02:55.346 CXX test/cpp_headers/trace_parser.o 00:02:55.346 CXX test/cpp_headers/tree.o 00:02:55.346 LINK reactor_perf 00:02:55.346 LINK reactor 00:02:55.346 CXX 
test/cpp_headers/ublk.o 00:02:55.346 CXX test/cpp_headers/util.o 00:02:55.346 CXX test/cpp_headers/uuid.o 00:02:55.346 CXX test/cpp_headers/version.o 00:02:55.346 CXX test/cpp_headers/vfio_user_pci.o 00:02:55.346 LINK event_perf 00:02:55.346 CXX test/cpp_headers/vfio_user_spec.o 00:02:55.346 CXX test/cpp_headers/vhost.o 00:02:55.346 CXX test/cpp_headers/vmd.o 00:02:55.346 LINK lsvmd 00:02:55.346 CXX test/cpp_headers/xor.o 00:02:55.346 CXX test/cpp_headers/zipf.o 00:02:55.346 LINK mem_callbacks 00:02:55.346 LINK app_repeat 00:02:55.346 CC app/vhost/vhost.o 00:02:55.346 LINK spdk_nvme_perf 00:02:55.346 LINK vhost_fuzz 00:02:55.346 LINK led 00:02:55.607 LINK spdk_nvme_identify 00:02:55.607 LINK scheduler 00:02:55.607 LINK spdk_top 00:02:55.607 LINK hello_sock 00:02:55.607 LINK thread 00:02:55.607 CC test/nvme/aer/aer.o 00:02:55.865 CC test/nvme/reset/reset.o 00:02:55.865 CC test/nvme/e2edp/nvme_dp.o 00:02:55.865 CC test/nvme/boot_partition/boot_partition.o 00:02:55.865 CC test/nvme/compliance/nvme_compliance.o 00:02:55.865 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:55.865 CC test/nvme/connect_stress/connect_stress.o 00:02:55.865 CC test/nvme/overhead/overhead.o 00:02:55.865 CC test/nvme/fused_ordering/fused_ordering.o 00:02:55.865 CC test/nvme/cuse/cuse.o 00:02:55.865 CC test/nvme/startup/startup.o 00:02:55.865 CC test/nvme/reserve/reserve.o 00:02:55.865 CC test/nvme/sgl/sgl.o 00:02:55.865 CC test/nvme/err_injection/err_injection.o 00:02:55.865 CC test/nvme/fdp/fdp.o 00:02:55.865 CC test/nvme/simple_copy/simple_copy.o 00:02:55.865 LINK idxd_perf 00:02:55.865 LINK vhost 00:02:55.866 CC test/blobfs/mkfs/mkfs.o 00:02:55.866 CC test/accel/dif/dif.o 00:02:55.866 CC test/lvol/esnap/esnap.o 00:02:56.125 LINK err_injection 00:02:56.125 LINK doorbell_aers 00:02:56.125 CC examples/nvme/reconnect/reconnect.o 00:02:56.125 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:56.125 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:56.125 CC examples/nvme/hello_world/hello_world.o 
00:02:56.125 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:56.125 CC examples/nvme/hotplug/hotplug.o 00:02:56.125 CC examples/nvme/abort/abort.o 00:02:56.125 CC examples/nvme/arbitration/arbitration.o 00:02:56.125 LINK reserve 00:02:56.125 LINK simple_copy 00:02:56.125 LINK mkfs 00:02:56.125 LINK boot_partition 00:02:56.125 LINK connect_stress 00:02:56.125 LINK aer 00:02:56.125 LINK startup 00:02:56.125 LINK sgl 00:02:56.125 LINK nvme_dp 00:02:56.125 LINK reset 00:02:56.125 LINK fused_ordering 00:02:56.125 LINK memory_ut 00:02:56.125 LINK overhead 00:02:56.125 LINK fdp 00:02:56.125 CC examples/accel/perf/accel_perf.o 00:02:56.125 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:56.125 CC examples/blob/cli/blobcli.o 00:02:56.125 CC examples/blob/hello_world/hello_blob.o 00:02:56.383 LINK nvme_compliance 00:02:56.383 LINK pmr_persistence 00:02:56.383 LINK hello_world 00:02:56.383 LINK cmb_copy 00:02:56.383 LINK hotplug 00:02:56.383 LINK reconnect 00:02:56.642 LINK abort 00:02:56.642 LINK arbitration 00:02:56.642 LINK dif 00:02:56.642 LINK hello_blob 00:02:56.642 LINK hello_fsdev 00:02:56.642 LINK nvme_manage 00:02:56.901 LINK accel_perf 00:02:56.901 LINK blobcli 00:02:56.901 CC test/bdev/bdevio/bdevio.o 00:02:57.160 LINK iscsi_fuzz 00:02:57.160 CC examples/bdev/hello_world/hello_bdev.o 00:02:57.160 CC examples/bdev/bdevperf/bdevperf.o 00:02:57.418 LINK cuse 00:02:57.418 LINK hello_bdev 00:02:57.418 LINK bdevio 00:02:58.021 LINK bdevperf 00:02:58.305 CC examples/nvmf/nvmf/nvmf.o 00:02:58.563 LINK nvmf 00:03:01.848 LINK esnap 00:03:02.106 00:03:02.106 real 1m9.959s 00:03:02.106 user 11m52.938s 00:03:02.106 sys 2m37.730s 00:03:02.106 20:43:52 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:02.106 20:43:52 make -- common/autotest_common.sh@10 -- $ set +x 00:03:02.106 ************************************ 00:03:02.106 END TEST make 00:03:02.106 ************************************ 00:03:02.106 20:43:52 -- spdk/autobuild.sh@1 -- $ 
stop_monitor_resources 00:03:02.106 20:43:52 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:02.106 20:43:52 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:02.106 20:43:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:02.106 20:43:52 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:02.106 20:43:52 -- pm/common@44 -- $ pid=3771518 00:03:02.106 20:43:52 -- pm/common@50 -- $ kill -TERM 3771518 00:03:02.106 20:43:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:02.106 20:43:52 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:02.106 20:43:52 -- pm/common@44 -- $ pid=3771520 00:03:02.106 20:43:52 -- pm/common@50 -- $ kill -TERM 3771520 00:03:02.106 20:43:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:02.106 20:43:52 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:02.106 20:43:52 -- pm/common@44 -- $ pid=3771522 00:03:02.106 20:43:52 -- pm/common@50 -- $ kill -TERM 3771522 00:03:02.106 20:43:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:02.106 20:43:52 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:02.106 20:43:52 -- pm/common@44 -- $ pid=3771551 00:03:02.106 20:43:52 -- pm/common@50 -- $ sudo -E kill -TERM 3771551 00:03:02.106 20:43:52 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:02.106 20:43:52 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:02.106 20:43:52 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:02.106 20:43:52 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:02.106 
20:43:52 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:02.106 20:43:53 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:02.106 20:43:53 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:02.106 20:43:53 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:02.106 20:43:53 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:02.106 20:43:53 -- scripts/common.sh@336 -- # IFS=.-: 00:03:02.106 20:43:53 -- scripts/common.sh@336 -- # read -ra ver1 00:03:02.106 20:43:53 -- scripts/common.sh@337 -- # IFS=.-: 00:03:02.106 20:43:53 -- scripts/common.sh@337 -- # read -ra ver2 00:03:02.106 20:43:53 -- scripts/common.sh@338 -- # local 'op=<' 00:03:02.106 20:43:53 -- scripts/common.sh@340 -- # ver1_l=2 00:03:02.106 20:43:53 -- scripts/common.sh@341 -- # ver2_l=1 00:03:02.106 20:43:53 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:02.106 20:43:53 -- scripts/common.sh@344 -- # case "$op" in 00:03:02.106 20:43:53 -- scripts/common.sh@345 -- # : 1 00:03:02.106 20:43:53 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:02.106 20:43:53 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:02.106 20:43:53 -- scripts/common.sh@365 -- # decimal 1 00:03:02.106 20:43:53 -- scripts/common.sh@353 -- # local d=1 00:03:02.106 20:43:53 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:02.106 20:43:53 -- scripts/common.sh@355 -- # echo 1 00:03:02.106 20:43:53 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:02.106 20:43:53 -- scripts/common.sh@366 -- # decimal 2 00:03:02.106 20:43:53 -- scripts/common.sh@353 -- # local d=2 00:03:02.106 20:43:53 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:02.106 20:43:53 -- scripts/common.sh@355 -- # echo 2 00:03:02.366 20:43:53 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:02.366 20:43:53 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:02.366 20:43:53 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:02.366 20:43:53 -- scripts/common.sh@368 -- # return 0 00:03:02.366 20:43:53 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:02.366 20:43:53 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:02.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:02.366 --rc genhtml_branch_coverage=1 00:03:02.366 --rc genhtml_function_coverage=1 00:03:02.366 --rc genhtml_legend=1 00:03:02.366 --rc geninfo_all_blocks=1 00:03:02.366 --rc geninfo_unexecuted_blocks=1 00:03:02.366 00:03:02.366 ' 00:03:02.366 20:43:53 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:02.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:02.366 --rc genhtml_branch_coverage=1 00:03:02.366 --rc genhtml_function_coverage=1 00:03:02.366 --rc genhtml_legend=1 00:03:02.366 --rc geninfo_all_blocks=1 00:03:02.366 --rc geninfo_unexecuted_blocks=1 00:03:02.366 00:03:02.366 ' 00:03:02.366 20:43:53 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:02.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:02.366 --rc genhtml_branch_coverage=1 00:03:02.366 --rc 
genhtml_function_coverage=1 00:03:02.366 --rc genhtml_legend=1 00:03:02.366 --rc geninfo_all_blocks=1 00:03:02.366 --rc geninfo_unexecuted_blocks=1 00:03:02.366 00:03:02.366 ' 00:03:02.366 20:43:53 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:02.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:02.366 --rc genhtml_branch_coverage=1 00:03:02.366 --rc genhtml_function_coverage=1 00:03:02.366 --rc genhtml_legend=1 00:03:02.366 --rc geninfo_all_blocks=1 00:03:02.366 --rc geninfo_unexecuted_blocks=1 00:03:02.366 00:03:02.366 ' 00:03:02.366 20:43:53 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:02.366 20:43:53 -- nvmf/common.sh@7 -- # uname -s 00:03:02.366 20:43:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:02.366 20:43:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:02.366 20:43:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:02.366 20:43:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:02.366 20:43:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:02.366 20:43:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:02.366 20:43:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:02.366 20:43:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:02.366 20:43:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:02.366 20:43:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:02.366 20:43:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:03:02.366 20:43:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:03:02.366 20:43:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:02.366 20:43:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:02.366 20:43:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:02.366 20:43:53 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:02.366 20:43:53 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:02.366 20:43:53 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:02.366 20:43:53 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:02.366 20:43:53 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:02.366 20:43:53 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:02.366 20:43:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:02.366 20:43:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:02.366 20:43:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:02.366 20:43:53 -- paths/export.sh@5 -- # export PATH 00:03:02.366 20:43:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:02.366 20:43:53 -- nvmf/common.sh@51 -- # : 0 00:03:02.366 20:43:53 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:02.366 20:43:53 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:03:02.366 20:43:53 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:02.366 20:43:53 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:02.366 20:43:53 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:02.366 20:43:53 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:02.366 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:02.366 20:43:53 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:02.366 20:43:53 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:02.366 20:43:53 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:02.366 20:43:53 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:02.366 20:43:53 -- spdk/autotest.sh@32 -- # uname -s 00:03:02.366 20:43:53 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:02.366 20:43:53 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:02.366 20:43:53 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:02.366 20:43:53 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:02.366 20:43:53 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:02.366 20:43:53 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:02.366 20:43:53 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:02.366 20:43:53 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:02.366 20:43:53 -- spdk/autotest.sh@48 -- # udevadm_pid=3830979 00:03:02.366 20:43:53 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:02.366 20:43:53 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:02.366 20:43:53 -- pm/common@17 -- # local monitor 00:03:02.367 20:43:53 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:02.367 20:43:53 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:03:02.367 20:43:53 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:02.367 20:43:53 -- pm/common@21 -- # date +%s 00:03:02.367 20:43:53 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:02.367 20:43:53 -- pm/common@21 -- # date +%s 00:03:02.367 20:43:53 -- pm/common@25 -- # sleep 1 00:03:02.367 20:43:53 -- pm/common@21 -- # date +%s 00:03:02.367 20:43:53 -- pm/common@21 -- # date +%s 00:03:02.367 20:43:53 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732650233 00:03:02.367 20:43:53 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732650233 00:03:02.367 20:43:53 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732650233 00:03:02.367 20:43:53 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732650233 00:03:02.367 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732650233_collect-vmstat.pm.log 00:03:02.367 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732650233_collect-cpu-load.pm.log 00:03:02.367 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732650233_collect-cpu-temp.pm.log 00:03:02.367 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732650233_collect-bmc-pm.bmc.pm.log 00:03:03.303 
20:43:54 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:03.304 20:43:54 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:03.304 20:43:54 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:03.304 20:43:54 -- common/autotest_common.sh@10 -- # set +x 00:03:03.304 20:43:54 -- spdk/autotest.sh@59 -- # create_test_list 00:03:03.304 20:43:54 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:03.304 20:43:54 -- common/autotest_common.sh@10 -- # set +x 00:03:03.304 20:43:54 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:03.304 20:43:54 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:03.304 20:43:54 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:03.304 20:43:54 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:03.304 20:43:54 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:03.304 20:43:54 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:03.304 20:43:54 -- common/autotest_common.sh@1457 -- # uname 00:03:03.304 20:43:54 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:03.304 20:43:54 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:03.304 20:43:54 -- common/autotest_common.sh@1477 -- # uname 00:03:03.304 20:43:54 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:03.304 20:43:54 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:03.304 20:43:54 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:03.304 lcov: LCOV version 1.15 00:03:03.304 20:43:54 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info
00:03:21.373 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found
00:03:21.373 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno
00:03:43.296 20:44:31 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup
00:03:43.296 20:44:31 -- common/autotest_common.sh@726 -- # xtrace_disable
00:03:43.296 20:44:31 -- common/autotest_common.sh@10 -- # set +x
00:03:43.296 20:44:31 -- spdk/autotest.sh@78 -- # rm -f
00:03:43.296 20:44:31 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:43.296 0000:88:00.0 (8086 0a54): Already using the nvme driver
00:03:43.296 0000:00:04.7 (8086 0e27): Already using the ioatdma driver
00:03:43.296 0000:00:04.6 (8086 0e26): Already using the ioatdma driver
00:03:43.296 0000:00:04.5 (8086 0e25): Already using the ioatdma driver
00:03:43.296 0000:00:04.4 (8086 0e24): Already using the ioatdma driver
00:03:43.296 0000:00:04.3 (8086 0e23): Already using the ioatdma driver
00:03:43.296 0000:00:04.2 (8086 0e22): Already using the ioatdma driver
00:03:43.296 0000:00:04.1 (8086 0e21): Already using the ioatdma driver
00:03:43.296 0000:00:04.0 (8086 0e20): Already using the ioatdma driver
00:03:43.296 0000:80:04.7 (8086 0e27): Already using the ioatdma driver
00:03:43.296 0000:80:04.6 (8086 0e26): Already using the ioatdma driver
00:03:43.296 0000:80:04.5 (8086 0e25): Already using the ioatdma driver
00:03:43.296 0000:80:04.4 (8086 0e24): Already using the ioatdma driver
00:03:43.296 0000:80:04.3 (8086 0e23): Already using the ioatdma driver
00:03:43.296 0000:80:04.2 (8086 0e22): Already using the ioatdma driver
00:03:43.296 0000:80:04.1 (8086 0e21): Already using the ioatdma driver
00:03:43.296 0000:80:04.0 (8086 0e20): Already using the ioatdma driver
00:03:43.296 20:44:32 -- spdk/autotest.sh@83 -- # get_zoned_devs
00:03:43.296 20:44:32 -- common/autotest_common.sh@1657 -- # zoned_devs=()
00:03:43.296 20:44:32 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs
00:03:43.296 20:44:32 -- common/autotest_common.sh@1658 -- # local nvme bdf
00:03:43.296 20:44:32 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme*
00:03:43.296 20:44:32 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1
00:03:43.296 20:44:32 -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:03:43.296 20:44:32 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:03:43.296 20:44:32 -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:03:43.296 20:44:32 -- spdk/autotest.sh@85 -- # (( 0 > 0 ))
00:03:43.296 20:44:32 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:03:43.296 20:44:32 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:03:43.296 20:44:32 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1
00:03:43.296 20:44:32 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt
00:03:43.296 20:44:32 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:03:43.296 No valid GPT data, bailing
00:03:43.296 20:44:32 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:03:43.296 20:44:32 -- scripts/common.sh@394 -- # pt=
00:03:43.296 20:44:32 -- scripts/common.sh@395 -- # return 1
00:03:43.296 20:44:32 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:03:43.296 1+0 records in
00:03:43.296 1+0 records out
00:03:43.296 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00191957 s, 546 MB/s
00:03:43.296 20:44:32 -- spdk/autotest.sh@105 -- # sync
00:03:43.296 20:44:32 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes
00:03:43.296 20:44:32 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:03:43.296 20:44:32 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:03:43.861 20:44:34 -- spdk/autotest.sh@111 -- # uname -s
00:03:43.861 20:44:34 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]]
00:03:43.861 20:44:34 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]]
00:03:43.861 20:44:34 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:03:44.793 Hugepages
00:03:44.793 node hugesize free / total
00:03:44.793 node0 1048576kB 0 / 0
00:03:44.793 node0 2048kB 0 / 0
00:03:44.793 node1 1048576kB 0 / 0
00:03:44.793 node1 2048kB 0 / 0
00:03:44.793
00:03:44.793 Type BDF Vendor Device NUMA Driver Device Block devices
00:03:44.793 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:03:44.793 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:03:44.793 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:03:44.793 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:03:44.793 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:03:44.793 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:03:44.793 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:03:44.793 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:03:44.793 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:03:44.793 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:03:44.793 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:03:44.793 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:03:44.793 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:03:44.793 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:03:44.793 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:03:44.793 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:03:45.050 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:03:45.050 20:44:35 -- spdk/autotest.sh@117 -- # uname -s
00:03:45.050 20:44:35 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:03:45.050 20:44:35 -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:03:45.050 20:44:35 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:45.984 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:03:45.984 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:03:45.984 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:03:45.984 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:03:45.984 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:03:45.984 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:03:45.984 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:03:45.984 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:03:45.984 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:03:46.243 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:03:46.243 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:03:46.243 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:03:46.243 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:03:46.243 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:03:46.243 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:03:46.243 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:03:47.180 0000:88:00.0 (8086 0a54): nvme -> vfio-pci
00:03:47.180 20:44:38 -- common/autotest_common.sh@1517 -- # sleep 1
00:03:48.118 20:44:39 -- common/autotest_common.sh@1518 -- # bdfs=()
00:03:48.118 20:44:39 -- common/autotest_common.sh@1518 -- # local bdfs
00:03:48.118 20:44:39 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs))
00:03:48.118 20:44:39 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs
00:03:48.118 20:44:39 -- common/autotest_common.sh@1498 -- # bdfs=()
00:03:48.118 20:44:39 -- common/autotest_common.sh@1498 -- # local bdfs
00:03:48.118 20:44:39 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:03:48.118 20:44:39 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:03:48.118 20:44:39 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:03:48.378 20:44:39 -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:03:48.378 20:44:39 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0
00:03:48.378 20:44:39 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:49.313 Waiting for block devices as requested
00:03:49.313 0000:88:00.0 (8086 0a54): vfio-pci -> nvme
00:03:49.573 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma
00:03:49.573 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma
00:03:49.833 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma
00:03:49.833 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma
00:03:49.833 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma
00:03:49.833 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma
00:03:50.093 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma
00:03:50.093 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma
00:03:50.093 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma
00:03:50.093 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma
00:03:50.352 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma
00:03:50.352 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma
00:03:50.352 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma
00:03:50.352 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma
00:03:50.610 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma
00:03:50.610 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma
00:03:50.870 20:44:41 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}"
00:03:50.870 20:44:41 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0
00:03:50.870 20:44:41 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0
00:03:50.870 20:44:41 -- common/autotest_common.sh@1487 -- # grep 0000:88:00.0/nvme/nvme
00:03:50.870 20:44:41 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0
00:03:50.870 20:44:41 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]]
00:03:50.870 20:44:41 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0
00:03:50.870 20:44:41 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0
00:03:50.870 20:44:41 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0
00:03:50.870 20:44:41 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]]
00:03:50.870 20:44:41 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0
00:03:50.870 20:44:41 -- common/autotest_common.sh@1531 -- # grep oacs
00:03:50.870 20:44:41 -- common/autotest_common.sh@1531 -- # cut -d: -f2
00:03:50.870 20:44:41 -- common/autotest_common.sh@1531 -- # oacs=' 0xf'
00:03:50.870 20:44:41 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8
00:03:50.870 20:44:41 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]]
00:03:50.870 20:44:41 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0
00:03:50.870 20:44:41 -- common/autotest_common.sh@1540 -- # grep unvmcap
00:03:50.870 20:44:41 -- common/autotest_common.sh@1540 -- # cut -d: -f2
00:03:50.870 20:44:41 -- common/autotest_common.sh@1540 -- # unvmcap=' 0'
00:03:50.870 20:44:41 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]]
00:03:50.870 20:44:41 -- common/autotest_common.sh@1543 -- # continue
00:03:50.870 20:44:41 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup
00:03:50.870 20:44:41 -- common/autotest_common.sh@732 -- # xtrace_disable
00:03:50.870 20:44:41 -- common/autotest_common.sh@10 -- # set +x
00:03:50.870 20:44:41 -- spdk/autotest.sh@125 -- # timing_enter afterboot
00:03:50.870 20:44:41 -- common/autotest_common.sh@726 -- # xtrace_disable
00:03:50.870 20:44:41 -- common/autotest_common.sh@10 -- # set +x
00:03:50.870 20:44:41 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:52.251 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:03:52.251 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:03:52.251 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:03:52.251 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:03:52.251 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:03:52.251 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:03:52.251 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:03:52.251 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:03:52.251 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:03:52.251 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:03:52.251 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:03:52.251 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:03:52.251 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:03:52.251 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:03:52.251 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:03:52.251 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:03:53.190 0000:88:00.0 (8086 0a54): nvme -> vfio-pci
00:03:53.190 20:44:43 -- spdk/autotest.sh@127 -- # timing_exit afterboot
00:03:53.190 20:44:43 -- common/autotest_common.sh@732 -- # xtrace_disable
00:03:53.190 20:44:43 -- common/autotest_common.sh@10 -- # set +x
00:03:53.190 20:44:43 -- spdk/autotest.sh@131 -- # opal_revert_cleanup
00:03:53.190 20:44:43 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs
00:03:53.190 20:44:43 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54
00:03:53.190 20:44:43 -- common/autotest_common.sh@1563 -- # bdfs=()
00:03:53.190 20:44:43 -- common/autotest_common.sh@1563 -- # _bdfs=()
00:03:53.190 20:44:43 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs
00:03:53.190 20:44:43 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs))
00:03:53.190 20:44:43 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs
00:03:53.190 20:44:43 -- common/autotest_common.sh@1498 -- # bdfs=()
00:03:53.190 20:44:43 -- common/autotest_common.sh@1498 -- # local bdfs
00:03:53.190 20:44:43 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:03:53.190 20:44:43 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:03:53.190 20:44:43 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:03:53.190 20:44:44 -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:03:53.190 20:44:44 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0
00:03:53.190 20:44:44 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}"
00:03:53.190 20:44:44 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:88:00.0/device
00:03:53.190 20:44:44 -- common/autotest_common.sh@1566 -- # device=0x0a54
00:03:53.190 20:44:44 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]]
00:03:53.190 20:44:44 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf)
00:03:53.190 20:44:44 -- common/autotest_common.sh@1572 -- # (( 1 > 0 ))
00:03:53.190 20:44:44 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:88:00.0
00:03:53.190 20:44:44 -- common/autotest_common.sh@1579 -- # [[ -z 0000:88:00.0 ]]
00:03:53.190 20:44:44 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=3841354
00:03:53.190 20:44:44 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:03:53.190 20:44:44 -- common/autotest_common.sh@1585 -- # waitforlisten 3841354
00:03:53.190 20:44:44 -- common/autotest_common.sh@835 -- # '[' -z 3841354 ']'
00:03:53.190 20:44:44 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:03:53.190 20:44:44 -- common/autotest_common.sh@840 -- # local max_retries=100
00:03:53.190 20:44:44 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:03:53.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:03:53.190 20:44:44 -- common/autotest_common.sh@844 -- # xtrace_disable
00:03:53.190 20:44:44 -- common/autotest_common.sh@10 -- # set +x
00:03:53.190 [2024-11-26 20:44:44.118135] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization...
00:03:53.190 [2024-11-26 20:44:44.118233] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3841354 ]
00:03:53.448 [2024-11-26 20:44:44.190309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:03:53.448 [2024-11-26 20:44:44.252716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:03:53.707 20:44:44 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:03:53.707 20:44:44 -- common/autotest_common.sh@868 -- # return 0
00:03:53.707 20:44:44 -- common/autotest_common.sh@1587 -- # bdf_id=0
00:03:53.707 20:44:44 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}"
00:03:53.707 20:44:44 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0
00:03:57.003 nvme0n1
00:03:57.003 20:44:47 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test
00:03:57.003 [2024-11-26 20:44:47.895675] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18
00:03:57.003 [2024-11-26 20:44:47.895743] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18
00:03:57.003 request:
00:03:57.003 {
00:03:57.003 "nvme_ctrlr_name": "nvme0",
00:03:57.003 "password": "test",
00:03:57.003 "method": "bdev_nvme_opal_revert",
00:03:57.003 "req_id": 1
00:03:57.003 }
00:03:57.003 Got JSON-RPC error response
00:03:57.003 response:
00:03:57.003 {
00:03:57.003 "code": -32603,
00:03:57.003 "message": "Internal error"
00:03:57.003 }
00:03:57.003 20:44:47 -- common/autotest_common.sh@1591 -- # true
00:03:57.003 20:44:47 -- common/autotest_common.sh@1592 -- # (( ++bdf_id ))
00:03:57.003 20:44:47 -- common/autotest_common.sh@1595 -- # killprocess 3841354
00:03:57.003 20:44:47 -- common/autotest_common.sh@954 -- # '[' -z 3841354 ']'
00:03:57.003 20:44:47 -- common/autotest_common.sh@958 -- # kill -0 3841354
00:03:57.003 20:44:47 -- common/autotest_common.sh@959 -- # uname
00:03:57.003 20:44:47 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:03:57.003 20:44:47 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3841354
00:03:57.261 20:44:47 -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:03:57.261 20:44:47 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:03:57.261 20:44:47 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3841354'
00:03:57.261 killing process with pid 3841354
00:03:57.261 20:44:47 -- common/autotest_common.sh@973 -- # kill 3841354
00:03:57.261 20:44:47 -- common/autotest_common.sh@978 -- # wait 3841354
00:03:59.258 20:44:49 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']'
00:03:59.258 20:44:49 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']'
00:03:59.258 20:44:49 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:03:59.258 20:44:49 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:03:59.258 20:44:49 -- spdk/autotest.sh@149 -- # timing_enter lib
00:03:59.258 20:44:49 -- common/autotest_common.sh@726 -- # xtrace_disable
00:03:59.258 20:44:49 -- common/autotest_common.sh@10 -- # set +x
00:03:59.258 20:44:49 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]]
00:03:59.258 20:44:49 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh
00:03:59.258 20:44:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:59.258 20:44:49 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:59.258 20:44:49 -- common/autotest_common.sh@10 -- # set +x
00:03:59.258 ************************************
00:03:59.258 START TEST env
00:03:59.258 ************************************
00:03:59.258 20:44:49 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh
00:03:59.258 * Looking for test storage...
00:03:59.258 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env
00:03:59.258 20:44:49 env -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:03:59.258 20:44:49 env -- common/autotest_common.sh@1693 -- # lcov --version
00:03:59.258 20:44:49 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:03:59.258 20:44:49 env -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:03:59.258 20:44:49 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:03:59.258 20:44:49 env -- scripts/common.sh@333 -- # local ver1 ver1_l
00:03:59.258 20:44:49 env -- scripts/common.sh@334 -- # local ver2 ver2_l
00:03:59.258 20:44:49 env -- scripts/common.sh@336 -- # IFS=.-:
00:03:59.258 20:44:49 env -- scripts/common.sh@336 -- # read -ra ver1
00:03:59.258 20:44:49 env -- scripts/common.sh@337 -- # IFS=.-:
00:03:59.258 20:44:49 env -- scripts/common.sh@337 -- # read -ra ver2
00:03:59.258 20:44:49 env -- scripts/common.sh@338 -- # local 'op=<'
00:03:59.258 20:44:49 env -- scripts/common.sh@340 -- # ver1_l=2
00:03:59.258 20:44:49 env -- scripts/common.sh@341 -- # ver2_l=1
00:03:59.258 20:44:49 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:03:59.258 20:44:49 env -- scripts/common.sh@344 -- # case "$op" in
00:03:59.258 20:44:49 env -- scripts/common.sh@345 -- # : 1
00:03:59.258 20:44:49 env -- scripts/common.sh@364 -- # (( v = 0 ))
00:03:59.258 20:44:49 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:59.258 20:44:49 env -- scripts/common.sh@365 -- # decimal 1
00:03:59.258 20:44:49 env -- scripts/common.sh@353 -- # local d=1
00:03:59.258 20:44:49 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:03:59.258 20:44:49 env -- scripts/common.sh@355 -- # echo 1
00:03:59.258 20:44:49 env -- scripts/common.sh@365 -- # ver1[v]=1
00:03:59.258 20:44:49 env -- scripts/common.sh@366 -- # decimal 2
00:03:59.258 20:44:49 env -- scripts/common.sh@353 -- # local d=2
00:03:59.258 20:44:49 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:03:59.258 20:44:49 env -- scripts/common.sh@355 -- # echo 2
00:03:59.258 20:44:49 env -- scripts/common.sh@366 -- # ver2[v]=2
00:03:59.258 20:44:49 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:03:59.258 20:44:49 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:03:59.258 20:44:49 env -- scripts/common.sh@368 -- # return 0
00:03:59.258 20:44:49 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:03:59.258 20:44:49 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:03:59.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:59.258 --rc genhtml_branch_coverage=1
00:03:59.258 --rc genhtml_function_coverage=1
00:03:59.258 --rc genhtml_legend=1
00:03:59.258 --rc geninfo_all_blocks=1
00:03:59.258 --rc geninfo_unexecuted_blocks=1
00:03:59.258
00:03:59.258 '
00:03:59.258 20:44:49 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:03:59.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:59.258 --rc genhtml_branch_coverage=1
00:03:59.258 --rc genhtml_function_coverage=1
00:03:59.258 --rc genhtml_legend=1
00:03:59.258 --rc geninfo_all_blocks=1
00:03:59.259 --rc geninfo_unexecuted_blocks=1
00:03:59.259
00:03:59.259 '
00:03:59.259 20:44:49 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:03:59.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:59.259 --rc genhtml_branch_coverage=1
00:03:59.259 --rc genhtml_function_coverage=1
00:03:59.259 --rc genhtml_legend=1
00:03:59.259 --rc geninfo_all_blocks=1
00:03:59.259 --rc geninfo_unexecuted_blocks=1
00:03:59.259
00:03:59.259 '
00:03:59.259 20:44:49 env -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:03:59.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:59.259 --rc genhtml_branch_coverage=1
00:03:59.259 --rc genhtml_function_coverage=1
00:03:59.259 --rc genhtml_legend=1
00:03:59.259 --rc geninfo_all_blocks=1
00:03:59.259 --rc geninfo_unexecuted_blocks=1
00:03:59.259
00:03:59.259 '
00:03:59.259 20:44:49 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut
00:03:59.259 20:44:49 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:59.259 20:44:49 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:59.259 20:44:49 env -- common/autotest_common.sh@10 -- # set +x
00:03:59.259 ************************************
00:03:59.259 START TEST env_memory
00:03:59.259 ************************************
00:03:59.259 20:44:49 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut
00:03:59.259
00:03:59.259
00:03:59.259 CUnit - A unit testing framework for C - Version 2.1-3
00:03:59.259 http://cunit.sourceforge.net/
00:03:59.259
00:03:59.259
00:03:59.259 Suite: memory
00:03:59.259 Test: alloc and free memory map ...[2024-11-26 20:44:49.981493] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed
00:03:59.259 passed
00:03:59.259 Test: mem map translation ...[2024-11-26 20:44:50.002060] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234
00:03:59.259 [2024-11-26 20:44:50.002083] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152
00:03:59.259 [2024-11-26 20:44:50.002138] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656
00:03:59.259 [2024-11-26 20:44:50.002150] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
00:03:59.259 passed
00:03:59.259 Test: mem map registration ...[2024-11-26 20:44:50.048235] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234
00:03:59.259 [2024-11-26 20:44:50.048270] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152
00:03:59.259 passed
00:03:59.259 Test: mem map adjacent registrations ...passed
00:03:59.259
00:03:59.259 Run Summary: Type Total Ran Passed Failed Inactive
00:03:59.259 suites 1 1 n/a 0 0
00:03:59.259 tests 4 4 4 0 0
00:03:59.259 asserts 152 152 152 0 n/a
00:03:59.259
00:03:59.259 Elapsed time = 0.150 seconds
00:03:59.259
00:03:59.259 real 0m0.159s
00:03:59.259 user 0m0.148s
00:03:59.259 sys 0m0.010s
00:03:59.259 20:44:50 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:59.259 20:44:50 env.env_memory -- common/autotest_common.sh@10 -- # set +x
00:03:59.259 ************************************
00:03:59.259 END TEST env_memory
00:03:59.259 ************************************
00:03:59.259 20:44:50 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys
00:03:59.259 20:44:50 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:59.259 20:44:50 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:59.259 20:44:50 env -- common/autotest_common.sh@10 -- # set +x
00:03:59.259 ************************************
00:03:59.259 START TEST env_vtophys
00:03:59.259 ************************************
00:03:59.259 20:44:50 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys
00:03:59.259 EAL: lib.eal log level changed from notice to debug
00:03:59.259 EAL: Detected lcore 0 as core 0 on socket 0
00:03:59.259 EAL: Detected lcore 1 as core 1 on socket 0
00:03:59.259 EAL: Detected lcore 2 as core 2 on socket 0
00:03:59.259 EAL: Detected lcore 3 as core 3 on socket 0
00:03:59.259 EAL: Detected lcore 4 as core 4 on socket 0
00:03:59.259 EAL: Detected lcore 5 as core 5 on socket 0
00:03:59.259 EAL: Detected lcore 6 as core 8 on socket 0
00:03:59.259 EAL: Detected lcore 7 as core 9 on socket 0
00:03:59.259 EAL: Detected lcore 8 as core 10 on socket 0
00:03:59.259 EAL: Detected lcore 9 as core 11 on socket 0
00:03:59.259 EAL: Detected lcore 10 as core 12 on socket 0
00:03:59.259 EAL: Detected lcore 11 as core 13 on socket 0
00:03:59.259 EAL: Detected lcore 12 as core 0 on socket 1
00:03:59.259 EAL: Detected lcore 13 as core 1 on socket 1
00:03:59.259 EAL: Detected lcore 14 as core 2 on socket 1
00:03:59.259 EAL: Detected lcore 15 as core 3 on socket 1
00:03:59.259 EAL: Detected lcore 16 as core 4 on socket 1
00:03:59.259 EAL: Detected lcore 17 as core 5 on socket 1
00:03:59.259 EAL: Detected lcore 18 as core 8 on socket 1
00:03:59.259 EAL: Detected lcore 19 as core 9 on socket 1
00:03:59.259 EAL: Detected lcore 20 as core 10 on socket 1
00:03:59.259 EAL: Detected lcore 21 as core 11 on socket 1
00:03:59.259 EAL: Detected lcore 22 as core 12 on socket 1
00:03:59.259 EAL: Detected lcore 23 as core 13 on socket 1
00:03:59.259 EAL: Detected lcore 24 as core 0 on socket 0
00:03:59.259 EAL: Detected lcore 25 as core 1 on socket 0
00:03:59.259 EAL: Detected lcore 26 as core 2 on socket 0
00:03:59.259 EAL: Detected lcore 27 as core 3 on socket 0
00:03:59.259 EAL: Detected lcore 28 as core 4 on socket 0
00:03:59.259 EAL: Detected lcore 29 as core 5 on socket 0
00:03:59.259 EAL: Detected lcore 30 as core 8 on socket 0
00:03:59.259 EAL: Detected lcore 31 as core 9 on socket 0
00:03:59.259 EAL: Detected lcore 32 as core 10 on socket 0
00:03:59.259 EAL: Detected lcore 33 as core 11 on socket 0
00:03:59.259 EAL: Detected lcore 34 as core 12 on socket 0
00:03:59.259 EAL: Detected lcore 35 as core 13 on socket 0
00:03:59.259 EAL: Detected lcore 36 as core 0 on socket 1
00:03:59.259 EAL: Detected lcore 37 as core 1 on socket 1
00:03:59.259 EAL: Detected lcore 38 as core 2 on socket 1
00:03:59.259 EAL: Detected lcore 39 as core 3 on socket 1
00:03:59.259 EAL: Detected lcore 40 as core 4 on socket 1
00:03:59.259 EAL: Detected lcore 41 as core 5 on socket 1
00:03:59.259 EAL: Detected lcore 42 as core 8 on socket 1
00:03:59.259 EAL: Detected lcore 43 as core 9 on socket 1
00:03:59.259 EAL: Detected lcore 44 as core 10 on socket 1
00:03:59.259 EAL: Detected lcore 45 as core 11 on socket 1
00:03:59.259 EAL: Detected lcore 46 as core 12 on socket 1
00:03:59.259 EAL: Detected lcore 47 as core 13 on socket 1
00:03:59.259 EAL: Maximum logical cores by configuration: 128
00:03:59.259 EAL: Detected CPU lcores: 48
00:03:59.259 EAL: Detected NUMA nodes: 2
00:03:59.259 EAL: Checking presence of .so 'librte_eal.so.24.1'
00:03:59.259 EAL: Detected shared linkage of DPDK
00:03:59.259 EAL: No shared files mode enabled, IPC will be disabled
00:03:59.519 EAL: Bus pci wants IOVA as 'DC'
00:03:59.519 EAL: Buses did not request a specific IOVA mode.
00:03:59.519 EAL: IOMMU is available, selecting IOVA as VA mode.
00:03:59.519 EAL: Selected IOVA mode 'VA'
00:03:59.519 EAL: Probing VFIO support...
00:03:59.519 EAL: IOMMU type 1 (Type 1) is supported
00:03:59.519 EAL: IOMMU type 7 (sPAPR) is not supported
00:03:59.519 EAL: IOMMU type 8 (No-IOMMU) is not supported
00:03:59.519 EAL: VFIO support initialized
00:03:59.519 EAL: Ask a virtual area of 0x2e000 bytes
00:03:59.519 EAL: Virtual area found at 0x200000000000 (size = 0x2e000)
00:03:59.519 EAL: Setting up physically contiguous memory...
00:03:59.519 EAL: Setting maximum number of open files to 524288
00:03:59.519 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:03:59.519 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152
00:03:59.519 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:03:59.519 EAL: Ask a virtual area of 0x61000 bytes
00:03:59.519 EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:03:59.519 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:59.519 EAL: Ask a virtual area of 0x400000000 bytes
00:03:59.519 EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:03:59.519 EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:03:59.519 EAL: Ask a virtual area of 0x61000 bytes
00:03:59.519 EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:03:59.519 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:59.519 EAL: Ask a virtual area of 0x400000000 bytes
00:03:59.519 EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:03:59.519 EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:03:59.519 EAL: Ask a virtual area of 0x61000 bytes
00:03:59.519 EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:03:59.519 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:59.519 EAL: Ask a virtual area of 0x400000000 bytes
00:03:59.519 EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:03:59.519 EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:03:59.519 EAL: Ask a virtual area of 0x61000 bytes
00:03:59.519 EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:03:59.519 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:59.519 EAL: Ask a virtual area of 0x400000000 bytes
00:03:59.519 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:03:59.519 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:03:59.519 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152
00:03:59.519 EAL: Ask a virtual area of 0x61000 bytes
00:03:59.519 EAL: Virtual area found at 0x201000800000 (size = 0x61000)
00:03:59.519 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:03:59.519 EAL: Ask a virtual area of 0x400000000 bytes
00:03:59.519 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000)
00:03:59.519 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000
00:03:59.519 EAL: Ask a virtual area of 0x61000 bytes
00:03:59.519 EAL: Virtual area found at 0x201400a00000 (size = 0x61000)
00:03:59.519 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:03:59.519 EAL: Ask a virtual area of 0x400000000 bytes
00:03:59.519 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000)
00:03:59.519 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000
00:03:59.519 EAL: Ask a virtual area of 0x61000 bytes
00:03:59.519 EAL: Virtual area found at 0x201800c00000 (size = 0x61000)
00:03:59.519 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:03:59.519 EAL: Ask a virtual area of 0x400000000 bytes
00:03:59.519 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000)
00:03:59.519 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000
00:03:59.519 EAL: Ask a virtual area of 0x61000 bytes
00:03:59.519 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000)
00:03:59.519 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:03:59.519 EAL: Ask a virtual area of 0x400000000 bytes
00:03:59.519 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000)
00:03:59.519 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:59.519 EAL: Hugepages will be freed exactly as allocated. 00:03:59.519 EAL: No shared files mode enabled, IPC is disabled 00:03:59.519 EAL: No shared files mode enabled, IPC is disabled 00:03:59.519 EAL: TSC frequency is ~2700000 KHz 00:03:59.519 EAL: Main lcore 0 is ready (tid=7fb6feb02a00;cpuset=[0]) 00:03:59.519 EAL: Trying to obtain current memory policy. 00:03:59.519 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:59.519 EAL: Restoring previous memory policy: 0 00:03:59.519 EAL: request: mp_malloc_sync 00:03:59.519 EAL: No shared files mode enabled, IPC is disabled 00:03:59.519 EAL: Heap on socket 0 was expanded by 2MB 00:03:59.519 EAL: No shared files mode enabled, IPC is disabled 00:03:59.519 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:59.519 EAL: Mem event callback 'spdk:(nil)' registered 00:03:59.519 00:03:59.519 00:03:59.519 CUnit - A unit testing framework for C - Version 2.1-3 00:03:59.519 http://cunit.sourceforge.net/ 00:03:59.519 00:03:59.519 00:03:59.519 Suite: components_suite 00:03:59.519 Test: vtophys_malloc_test ...passed 00:03:59.520 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:59.520 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:59.520 EAL: Restoring previous memory policy: 4 00:03:59.520 EAL: Calling mem event callback 'spdk:(nil)' 00:03:59.520 EAL: request: mp_malloc_sync 00:03:59.520 EAL: No shared files mode enabled, IPC is disabled 00:03:59.520 EAL: Heap on socket 0 was expanded by 4MB 00:03:59.520 EAL: Calling mem event callback 'spdk:(nil)' 00:03:59.520 EAL: request: mp_malloc_sync 00:03:59.520 EAL: No shared files mode enabled, IPC is disabled 00:03:59.520 EAL: Heap on socket 0 was shrunk by 4MB 00:03:59.520 EAL: Trying to obtain current memory policy. 
00:03:59.520 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:59.520 EAL: Restoring previous memory policy: 4
00:03:59.520 EAL: Calling mem event callback 'spdk:(nil)'
00:03:59.520 EAL: request: mp_malloc_sync
00:03:59.520 EAL: No shared files mode enabled, IPC is disabled
00:03:59.520 EAL: Heap on socket 0 was expanded by 6MB
00:03:59.520 EAL: Calling mem event callback 'spdk:(nil)'
00:03:59.520 EAL: request: mp_malloc_sync
00:03:59.520 EAL: No shared files mode enabled, IPC is disabled
00:03:59.520 EAL: Heap on socket 0 was shrunk by 6MB
00:03:59.520 EAL: Trying to obtain current memory policy.
00:03:59.520 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:59.520 EAL: Restoring previous memory policy: 4
00:03:59.520 EAL: Calling mem event callback 'spdk:(nil)'
00:03:59.520 EAL: request: mp_malloc_sync
00:03:59.520 EAL: No shared files mode enabled, IPC is disabled
00:03:59.520 EAL: Heap on socket 0 was expanded by 10MB
00:03:59.520 EAL: Calling mem event callback 'spdk:(nil)'
00:03:59.520 EAL: request: mp_malloc_sync
00:03:59.520 EAL: No shared files mode enabled, IPC is disabled
00:03:59.520 EAL: Heap on socket 0 was shrunk by 10MB
00:03:59.520 EAL: Trying to obtain current memory policy.
00:03:59.520 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:59.520 EAL: Restoring previous memory policy: 4
00:03:59.520 EAL: Calling mem event callback 'spdk:(nil)'
00:03:59.520 EAL: request: mp_malloc_sync
00:03:59.520 EAL: No shared files mode enabled, IPC is disabled
00:03:59.520 EAL: Heap on socket 0 was expanded by 18MB
00:03:59.520 EAL: Calling mem event callback 'spdk:(nil)'
00:03:59.520 EAL: request: mp_malloc_sync
00:03:59.520 EAL: No shared files mode enabled, IPC is disabled
00:03:59.520 EAL: Heap on socket 0 was shrunk by 18MB
00:03:59.520 EAL: Trying to obtain current memory policy.
00:03:59.520 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:59.520 EAL: Restoring previous memory policy: 4
00:03:59.520 EAL: Calling mem event callback 'spdk:(nil)'
00:03:59.520 EAL: request: mp_malloc_sync
00:03:59.520 EAL: No shared files mode enabled, IPC is disabled
00:03:59.520 EAL: Heap on socket 0 was expanded by 34MB
00:03:59.520 EAL: Calling mem event callback 'spdk:(nil)'
00:03:59.520 EAL: request: mp_malloc_sync
00:03:59.520 EAL: No shared files mode enabled, IPC is disabled
00:03:59.520 EAL: Heap on socket 0 was shrunk by 34MB
00:03:59.520 EAL: Trying to obtain current memory policy.
00:03:59.520 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:59.520 EAL: Restoring previous memory policy: 4
00:03:59.520 EAL: Calling mem event callback 'spdk:(nil)'
00:03:59.520 EAL: request: mp_malloc_sync
00:03:59.520 EAL: No shared files mode enabled, IPC is disabled
00:03:59.520 EAL: Heap on socket 0 was expanded by 66MB
00:03:59.520 EAL: Calling mem event callback 'spdk:(nil)'
00:03:59.520 EAL: request: mp_malloc_sync
00:03:59.520 EAL: No shared files mode enabled, IPC is disabled
00:03:59.520 EAL: Heap on socket 0 was shrunk by 66MB
00:03:59.520 EAL: Trying to obtain current memory policy.
00:03:59.520 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:59.520 EAL: Restoring previous memory policy: 4
00:03:59.520 EAL: Calling mem event callback 'spdk:(nil)'
00:03:59.520 EAL: request: mp_malloc_sync
00:03:59.520 EAL: No shared files mode enabled, IPC is disabled
00:03:59.520 EAL: Heap on socket 0 was expanded by 130MB
00:03:59.520 EAL: Calling mem event callback 'spdk:(nil)'
00:03:59.520 EAL: request: mp_malloc_sync
00:03:59.520 EAL: No shared files mode enabled, IPC is disabled
00:03:59.520 EAL: Heap on socket 0 was shrunk by 130MB
00:03:59.520 EAL: Trying to obtain current memory policy.
00:03:59.520 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:59.779 EAL: Restoring previous memory policy: 4
00:03:59.779 EAL: Calling mem event callback 'spdk:(nil)'
00:03:59.779 EAL: request: mp_malloc_sync
00:03:59.779 EAL: No shared files mode enabled, IPC is disabled
00:03:59.779 EAL: Heap on socket 0 was expanded by 258MB
00:03:59.779 EAL: Calling mem event callback 'spdk:(nil)'
00:03:59.779 EAL: request: mp_malloc_sync
00:03:59.779 EAL: No shared files mode enabled, IPC is disabled
00:03:59.779 EAL: Heap on socket 0 was shrunk by 258MB
00:03:59.779 EAL: Trying to obtain current memory policy.
00:03:59.779 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:00.039 EAL: Restoring previous memory policy: 4
00:04:00.039 EAL: Calling mem event callback 'spdk:(nil)'
00:04:00.039 EAL: request: mp_malloc_sync
00:04:00.039 EAL: No shared files mode enabled, IPC is disabled
00:04:00.039 EAL: Heap on socket 0 was expanded by 514MB
00:04:00.039 EAL: Calling mem event callback 'spdk:(nil)'
00:04:00.039 EAL: request: mp_malloc_sync
00:04:00.039 EAL: No shared files mode enabled, IPC is disabled
00:04:00.039 EAL: Heap on socket 0 was shrunk by 514MB
00:04:00.039 EAL: Trying to obtain current memory policy.
00:04:00.039 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:00.610 EAL: Restoring previous memory policy: 4
00:04:00.610 EAL: Calling mem event callback 'spdk:(nil)'
00:04:00.610 EAL: request: mp_malloc_sync
00:04:00.610 EAL: No shared files mode enabled, IPC is disabled
00:04:00.610 EAL: Heap on socket 0 was expanded by 1026MB
00:04:00.610 EAL: Calling mem event callback 'spdk:(nil)'
00:04:00.869 EAL: request: mp_malloc_sync
00:04:00.869 EAL: No shared files mode enabled, IPC is disabled
00:04:00.869 EAL: Heap on socket 0 was shrunk by 1026MB
00:04:00.869 passed
00:04:00.869
00:04:00.869 Run Summary: Type Total Ran Passed Failed Inactive
00:04:00.869 suites 1 1 n/a 0 0
00:04:00.869 tests 2 2 2 0 0
00:04:00.869 asserts 497 497 497 0 n/a
00:04:00.869
00:04:00.869 Elapsed time = 1.402 seconds
00:04:00.869 EAL: Calling mem event callback 'spdk:(nil)'
00:04:00.869 EAL: request: mp_malloc_sync
00:04:00.869 EAL: No shared files mode enabled, IPC is disabled
00:04:00.869 EAL: Heap on socket 0 was shrunk by 2MB
00:04:00.870 EAL: No shared files mode enabled, IPC is disabled
00:04:00.870 EAL: No shared files mode enabled, IPC is disabled
00:04:00.870 EAL: No shared files mode enabled, IPC is disabled
00:04:00.870
00:04:00.870 real 0m1.542s
00:04:00.870 user 0m0.893s
00:04:00.870 sys 0m0.605s
00:04:00.870 20:44:51 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:00.870 20:44:51 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:04:00.870 ************************************
00:04:00.870 END TEST env_vtophys
00:04:00.870 ************************************
00:04:00.870 20:44:51 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:04:00.870 20:44:51 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:00.870 20:44:51 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:00.870 20:44:51 env -- common/autotest_common.sh@10 -- # set +x
00:04:00.870 ************************************
00:04:00.870 START TEST env_pci
00:04:00.870 ************************************
00:04:00.870 20:44:51 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:04:00.870
00:04:00.870
00:04:00.870 CUnit - A unit testing framework for C - Version 2.1-3
00:04:00.870 http://cunit.sourceforge.net/
00:04:00.870
00:04:00.870
00:04:00.870 Suite: pci
00:04:00.870 Test: pci_hook ...[2024-11-26 20:44:51.762149] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3842294 has claimed it
00:04:00.870 EAL: Cannot find device (10000:00:01.0)
00:04:00.870 EAL: Failed to attach device on primary process
00:04:00.870 passed
00:04:00.870
00:04:00.870 Run Summary: Type Total Ran Passed Failed Inactive
00:04:00.870 suites 1 1 n/a 0 0
00:04:00.870 tests 1 1 1 0 0
00:04:00.870 asserts 25 25 25 0 n/a
00:04:00.870
00:04:00.870 Elapsed time = 0.022 seconds
00:04:00.870
00:04:00.870 real 0m0.035s
00:04:00.870 user 0m0.008s
00:04:00.870 sys 0m0.027s
00:04:00.870 20:44:51 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:00.870 20:44:51 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:04:00.870 ************************************
00:04:00.870 END TEST env_pci
00:04:00.870 ************************************
00:04:01.129 20:44:51 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:04:01.129 20:44:51 env -- env/env.sh@15 -- # uname
00:04:01.129 20:44:51 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:04:01.129 20:44:51 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:04:01.129 20:44:51 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:01.129 20:44:51 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:04:01.129 20:44:51 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:01.129 20:44:51 env -- common/autotest_common.sh@10 -- # set +x
00:04:01.129 ************************************
00:04:01.129 START TEST env_dpdk_post_init
00:04:01.129 ************************************
00:04:01.129 20:44:51 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:01.129 EAL: Detected CPU lcores: 48
00:04:01.129 EAL: Detected NUMA nodes: 2
00:04:01.129 EAL: Detected shared linkage of DPDK
00:04:01.129 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:01.129 EAL: Selected IOVA mode 'VA'
00:04:01.129 EAL: VFIO support initialized
00:04:01.129 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:01.129 EAL: Using IOMMU type 1 (Type 1)
00:04:01.129 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0)
00:04:01.129 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0)
00:04:01.129 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0)
00:04:01.129 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0)
00:04:01.129 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0)
00:04:01.129 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0)
00:04:01.129 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0)
00:04:01.129 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0)
00:04:01.388 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1)
00:04:01.389 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1)
00:04:01.389 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1)
00:04:01.389 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1)
00:04:01.389 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1)
00:04:01.389 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1)
00:04:01.389 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1)
00:04:01.389 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1)
00:04:02.327 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1)
00:04:05.620 EAL: Releasing PCI mapped resource for 0000:88:00.0
00:04:05.620 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000
00:04:05.620 Starting DPDK initialization...
00:04:05.620 Starting SPDK post initialization...
00:04:05.620 SPDK NVMe probe
00:04:05.620 Attaching to 0000:88:00.0
00:04:05.620 Attached to 0000:88:00.0
00:04:05.620 Cleaning up...
00:04:05.620
00:04:05.620 real 0m4.394s
00:04:05.620 user 0m3.000s
00:04:05.620 sys 0m0.449s
00:04:05.620 20:44:56 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:05.620 20:44:56 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:04:05.620 ************************************
00:04:05.620 END TEST env_dpdk_post_init
00:04:05.620 ************************************
00:04:05.621 20:44:56 env -- env/env.sh@26 -- # uname
00:04:05.621 20:44:56 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:04:05.621 20:44:56 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:04:05.621 20:44:56 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:05.621 20:44:56 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:05.621 20:44:56 env -- common/autotest_common.sh@10 -- # set +x
00:04:05.621 ************************************
00:04:05.621 START TEST env_mem_callbacks
00:04:05.621 ************************************
00:04:05.621 20:44:56 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:04:05.621 EAL: Detected CPU lcores: 48
00:04:05.621 EAL: Detected NUMA nodes: 2
00:04:05.621 EAL: Detected shared linkage of DPDK
00:04:05.621 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:05.621 EAL: Selected IOVA mode 'VA'
00:04:05.621 EAL: VFIO support initialized
00:04:05.621 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:05.621
00:04:05.621
00:04:05.621 CUnit - A unit testing framework for C - Version 2.1-3
00:04:05.621 http://cunit.sourceforge.net/
00:04:05.621
00:04:05.621
00:04:05.621 Suite: memory
00:04:05.621 Test: test ...
00:04:05.621 register 0x200000200000 2097152
00:04:05.621 malloc 3145728
00:04:05.621 register 0x200000400000 4194304
00:04:05.621 buf 0x200000500000 len 3145728 PASSED
00:04:05.621 malloc 64
00:04:05.621 buf 0x2000004fff40 len 64 PASSED
00:04:05.621 malloc 4194304
00:04:05.621 register 0x200000800000 6291456
00:04:05.621 buf 0x200000a00000 len 4194304 PASSED
00:04:05.621 free 0x200000500000 3145728
00:04:05.621 free 0x2000004fff40 64
00:04:05.621 unregister 0x200000400000 4194304 PASSED
00:04:05.621 free 0x200000a00000 4194304
00:04:05.621 unregister 0x200000800000 6291456 PASSED
00:04:05.621 malloc 8388608
00:04:05.621 register 0x200000400000 10485760
00:04:05.621 buf 0x200000600000 len 8388608 PASSED
00:04:05.621 free 0x200000600000 8388608
00:04:05.621 unregister 0x200000400000 10485760 PASSED
00:04:05.621 passed
00:04:05.621
00:04:05.621 Run Summary: Type Total Ran Passed Failed Inactive
00:04:05.621 suites 1 1 n/a 0 0
00:04:05.621 tests 1 1 1 0 0
00:04:05.621 asserts 15 15 15 0 n/a
00:04:05.621
00:04:05.621 Elapsed time = 0.005 seconds
00:04:05.621
00:04:05.621 real 0m0.051s
00:04:05.621 user 0m0.016s
00:04:05.621 sys 0m0.035s
00:04:05.621 20:44:56 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:05.621 20:44:56 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:04:05.621 ************************************
00:04:05.621 END TEST env_mem_callbacks
00:04:05.621 ************************************
00:04:05.621
00:04:05.621 real 0m6.577s
00:04:05.621 user 0m4.257s
00:04:05.621 sys 0m1.353s
00:04:05.621 20:44:56 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:05.621 20:44:56 env -- common/autotest_common.sh@10 -- # set +x
00:04:05.621 ************************************
00:04:05.621 END TEST env
00:04:05.621 ************************************
00:04:05.621 20:44:56 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:04:05.621 20:44:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:05.621 20:44:56 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:05.621 20:44:56 -- common/autotest_common.sh@10 -- # set +x
00:04:05.621 ************************************
00:04:05.621 START TEST rpc
00:04:05.621 ************************************
00:04:05.621 20:44:56 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:04:05.621 * Looking for test storage...
00:04:05.621 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:04:05.621 20:44:56 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:04:05.621 20:44:56 rpc -- common/autotest_common.sh@1693 -- # lcov --version
00:04:05.621 20:44:56 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:04:05.621 20:44:56 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:04:05.621 20:44:56 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:05.621 20:44:56 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:05.621 20:44:56 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:05.621 20:44:56 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:04:05.621 20:44:56 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:04:05.621 20:44:56 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:04:05.621 20:44:56 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:04:05.621 20:44:56 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:04:05.621 20:44:56 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:04:05.621 20:44:56 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:04:05.621 20:44:56 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:05.621 20:44:56 rpc -- scripts/common.sh@344 -- # case "$op" in
00:04:05.621 20:44:56 rpc -- scripts/common.sh@345 -- # : 1
00:04:05.621 20:44:56 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:05.621 20:44:56 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:05.621 20:44:56 rpc -- scripts/common.sh@365 -- # decimal 1
00:04:05.621 20:44:56 rpc -- scripts/common.sh@353 -- # local d=1
00:04:05.621 20:44:56 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:05.621 20:44:56 rpc -- scripts/common.sh@355 -- # echo 1
00:04:05.621 20:44:56 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:04:05.621 20:44:56 rpc -- scripts/common.sh@366 -- # decimal 2
00:04:05.621 20:44:56 rpc -- scripts/common.sh@353 -- # local d=2
00:04:05.621 20:44:56 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:05.621 20:44:56 rpc -- scripts/common.sh@355 -- # echo 2
00:04:05.621 20:44:56 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:04:05.621 20:44:56 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:05.621 20:44:56 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:05.621 20:44:56 rpc -- scripts/common.sh@368 -- # return 0
00:04:05.621 20:44:56 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:05.621 20:44:56 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:04:05.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:05.621 --rc genhtml_branch_coverage=1
00:04:05.621 --rc genhtml_function_coverage=1
00:04:05.621 --rc genhtml_legend=1
00:04:05.621 --rc geninfo_all_blocks=1
00:04:05.621 --rc geninfo_unexecuted_blocks=1
00:04:05.621
00:04:05.621 '
00:04:05.621 20:44:56 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:04:05.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:05.621 --rc genhtml_branch_coverage=1
00:04:05.621 --rc genhtml_function_coverage=1
00:04:05.621 --rc genhtml_legend=1
00:04:05.621 --rc geninfo_all_blocks=1
00:04:05.621 --rc geninfo_unexecuted_blocks=1
00:04:05.621
00:04:05.621 '
00:04:05.621 20:44:56 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:04:05.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:05.621 --rc genhtml_branch_coverage=1
00:04:05.621 --rc genhtml_function_coverage=1
00:04:05.621 --rc genhtml_legend=1
00:04:05.621 --rc geninfo_all_blocks=1
00:04:05.621 --rc geninfo_unexecuted_blocks=1
00:04:05.621
00:04:05.621 '
00:04:05.621 20:44:56 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:04:05.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:05.621 --rc genhtml_branch_coverage=1
00:04:05.621 --rc genhtml_function_coverage=1
00:04:05.621 --rc genhtml_legend=1
00:04:05.621 --rc geninfo_all_blocks=1
00:04:05.621 --rc geninfo_unexecuted_blocks=1
00:04:05.621
00:04:05.621 '
00:04:05.621 20:44:56 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3843041
00:04:05.621 20:44:56 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:04:05.621 20:44:56 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:04:05.621 20:44:56 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3843041
00:04:05.621 20:44:56 rpc -- common/autotest_common.sh@835 -- # '[' -z 3843041 ']'
00:04:05.621 20:44:56 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:05.621 20:44:56 rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:05.621 20:44:56 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:05.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:05.621 20:44:56 rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:05.621 20:44:56 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:05.880 [2024-11-26 20:44:56.601424] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization...
00:04:05.880 [2024-11-26 20:44:56.601520] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3843041 ]
00:04:05.880 [2024-11-26 20:44:56.666629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:05.880 [2024-11-26 20:44:56.725696] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:04:05.880 [2024-11-26 20:44:56.725765] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3843041' to capture a snapshot of events at runtime.
00:04:05.880 [2024-11-26 20:44:56.725791] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:04:05.880 [2024-11-26 20:44:56.725805] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:04:05.880 [2024-11-26 20:44:56.725817] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3843041 for offline analysis/debug.
00:04:05.880 [2024-11-26 20:44:56.726480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:06.138 20:44:57 rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:06.138 20:44:57 rpc -- common/autotest_common.sh@868 -- # return 0
00:04:06.138 20:44:57 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:04:06.138 20:44:57 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:04:06.138 20:44:57 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:04:06.138 20:44:57 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:04:06.138 20:44:57 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:06.138 20:44:57 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:06.138 20:44:57 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:06.138 ************************************
00:04:06.138 START TEST rpc_integrity
00:04:06.138 ************************************
00:04:06.138 20:44:57 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:04:06.138 20:44:57 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:04:06.138 20:44:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:06.138 20:44:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:06.138 20:44:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:06.138 20:44:57 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:04:06.138 20:44:57 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:04:06.397 20:44:57 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:04:06.397 20:44:57 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:04:06.397 20:44:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:06.397 20:44:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:06.397 20:44:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:06.397 20:44:57 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:04:06.397 20:44:57 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:04:06.397 20:44:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:06.397 20:44:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:06.397 20:44:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:06.397 20:44:57 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:04:06.397 {
00:04:06.397 "name": "Malloc0",
00:04:06.397 "aliases": [
00:04:06.397 "f7df8e88-cc52-4b0d-99e2-848583a8e1f9"
00:04:06.397 ],
00:04:06.397 "product_name": "Malloc disk",
00:04:06.397 "block_size": 512,
00:04:06.397 "num_blocks": 16384,
00:04:06.397 "uuid": "f7df8e88-cc52-4b0d-99e2-848583a8e1f9",
00:04:06.397 "assigned_rate_limits": {
00:04:06.397 "rw_ios_per_sec": 0,
00:04:06.397 "rw_mbytes_per_sec": 0,
00:04:06.397 "r_mbytes_per_sec": 0,
00:04:06.397 "w_mbytes_per_sec": 0
00:04:06.397 },
00:04:06.397 "claimed": false,
00:04:06.397 "zoned": false,
00:04:06.397 "supported_io_types": {
00:04:06.397 "read": true,
00:04:06.397 "write": true,
00:04:06.397 "unmap": true,
00:04:06.397 "flush": true,
00:04:06.397 "reset": true,
00:04:06.397 "nvme_admin": false,
00:04:06.397 "nvme_io": false,
00:04:06.397 "nvme_io_md": false,
00:04:06.397 "write_zeroes": true,
00:04:06.397 "zcopy": true,
00:04:06.397 "get_zone_info": false,
00:04:06.397 "zone_management": false,
00:04:06.397 "zone_append": false,
00:04:06.397 "compare": false,
00:04:06.397 "compare_and_write": false,
00:04:06.397 "abort": true,
00:04:06.397 "seek_hole": false,
00:04:06.397 "seek_data": false,
00:04:06.397 "copy": true,
00:04:06.397 "nvme_iov_md": false
00:04:06.397 },
00:04:06.397 "memory_domains": [
00:04:06.397 {
00:04:06.397 "dma_device_id": "system",
00:04:06.397 "dma_device_type": 1
00:04:06.397 },
00:04:06.397 {
00:04:06.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:06.397 "dma_device_type": 2
00:04:06.397 }
00:04:06.397 ],
00:04:06.397 "driver_specific": {}
00:04:06.397 }
00:04:06.397 ]'
00:04:06.397 20:44:57 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:04:06.397 20:44:57 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:04:06.397 20:44:57 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:04:06.397 20:44:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:06.397 20:44:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:06.397 [2024-11-26 20:44:57.153041] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:04:06.397 [2024-11-26 20:44:57.153092] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:04:06.397 [2024-11-26 20:44:57.153117] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x16aad20
00:04:06.397 [2024-11-26 20:44:57.153133] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:04:06.397 [2024-11-26 20:44:57.154664] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:04:06.397 [2024-11-26 20:44:57.154703] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:04:06.398 Passthru0
00:04:06.398 20:44:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:06.398 20:44:57 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:04:06.398 20:44:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:06.398 20:44:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:06.398 20:44:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:06.398 20:44:57 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:04:06.398 {
00:04:06.398 "name": "Malloc0",
00:04:06.398 "aliases": [
00:04:06.398 "f7df8e88-cc52-4b0d-99e2-848583a8e1f9"
00:04:06.398 ],
00:04:06.398 "product_name": "Malloc disk",
00:04:06.398 "block_size": 512,
00:04:06.398 "num_blocks": 16384,
00:04:06.398 "uuid": "f7df8e88-cc52-4b0d-99e2-848583a8e1f9",
00:04:06.398 "assigned_rate_limits": {
00:04:06.398 "rw_ios_per_sec": 0,
00:04:06.398 "rw_mbytes_per_sec": 0,
00:04:06.398 "r_mbytes_per_sec": 0,
00:04:06.398 "w_mbytes_per_sec": 0
00:04:06.398 },
00:04:06.398 "claimed": true,
00:04:06.398 "claim_type": "exclusive_write",
00:04:06.398 "zoned": false,
00:04:06.398 "supported_io_types": {
00:04:06.398 "read": true,
00:04:06.398 "write": true,
00:04:06.398 "unmap": true,
00:04:06.398 "flush": true,
00:04:06.398 "reset": true,
00:04:06.398 "nvme_admin": false,
00:04:06.398 "nvme_io": false,
00:04:06.398 "nvme_io_md": false,
00:04:06.398 "write_zeroes": true,
00:04:06.398 "zcopy": true,
00:04:06.398 "get_zone_info": false,
00:04:06.398 "zone_management": false,
00:04:06.398 "zone_append": false,
00:04:06.398 "compare": false,
00:04:06.398 "compare_and_write": false,
00:04:06.398 "abort": true,
00:04:06.398 "seek_hole": false,
00:04:06.398 "seek_data": false,
00:04:06.398 "copy": true,
00:04:06.398 "nvme_iov_md": false
00:04:06.398 },
00:04:06.398 "memory_domains": [
00:04:06.398 {
00:04:06.398 "dma_device_id": "system",
00:04:06.398 "dma_device_type": 1
00:04:06.398 },
00:04:06.398 {
00:04:06.398 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:06.398 "dma_device_type": 2
00:04:06.398 }
00:04:06.398 ],
00:04:06.398 "driver_specific": {}
00:04:06.398 },
00:04:06.398 {
00:04:06.398 "name": "Passthru0", 00:04:06.398 "aliases": [ 00:04:06.398 "1ed47426-4faa-5dda-a028-9d2cd36fc2e2" 00:04:06.398 ], 00:04:06.398 "product_name": "passthru", 00:04:06.398 "block_size": 512, 00:04:06.398 "num_blocks": 16384, 00:04:06.398 "uuid": "1ed47426-4faa-5dda-a028-9d2cd36fc2e2", 00:04:06.398 "assigned_rate_limits": { 00:04:06.398 "rw_ios_per_sec": 0, 00:04:06.398 "rw_mbytes_per_sec": 0, 00:04:06.398 "r_mbytes_per_sec": 0, 00:04:06.398 "w_mbytes_per_sec": 0 00:04:06.398 }, 00:04:06.398 "claimed": false, 00:04:06.398 "zoned": false, 00:04:06.398 "supported_io_types": { 00:04:06.398 "read": true, 00:04:06.398 "write": true, 00:04:06.398 "unmap": true, 00:04:06.398 "flush": true, 00:04:06.398 "reset": true, 00:04:06.398 "nvme_admin": false, 00:04:06.398 "nvme_io": false, 00:04:06.398 "nvme_io_md": false, 00:04:06.398 "write_zeroes": true, 00:04:06.398 "zcopy": true, 00:04:06.398 "get_zone_info": false, 00:04:06.398 "zone_management": false, 00:04:06.398 "zone_append": false, 00:04:06.398 "compare": false, 00:04:06.398 "compare_and_write": false, 00:04:06.398 "abort": true, 00:04:06.398 "seek_hole": false, 00:04:06.398 "seek_data": false, 00:04:06.398 "copy": true, 00:04:06.398 "nvme_iov_md": false 00:04:06.398 }, 00:04:06.398 "memory_domains": [ 00:04:06.398 { 00:04:06.398 "dma_device_id": "system", 00:04:06.398 "dma_device_type": 1 00:04:06.398 }, 00:04:06.398 { 00:04:06.398 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:06.398 "dma_device_type": 2 00:04:06.398 } 00:04:06.398 ], 00:04:06.398 "driver_specific": { 00:04:06.398 "passthru": { 00:04:06.398 "name": "Passthru0", 00:04:06.398 "base_bdev_name": "Malloc0" 00:04:06.398 } 00:04:06.398 } 00:04:06.398 } 00:04:06.398 ]' 00:04:06.398 20:44:57 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:06.398 20:44:57 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:06.398 20:44:57 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:06.398 20:44:57 
rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.398 20:44:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.398 20:44:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.398 20:44:57 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:06.398 20:44:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.398 20:44:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.398 20:44:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.398 20:44:57 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:06.398 20:44:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.398 20:44:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.398 20:44:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.398 20:44:57 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:06.398 20:44:57 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:06.398 20:44:57 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:06.398 00:04:06.398 real 0m0.236s 00:04:06.398 user 0m0.152s 00:04:06.398 sys 0m0.024s 00:04:06.398 20:44:57 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:06.398 20:44:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.398 ************************************ 00:04:06.398 END TEST rpc_integrity 00:04:06.398 ************************************ 00:04:06.398 20:44:57 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:06.398 20:44:57 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:06.398 20:44:57 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:06.398 20:44:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:06.398 ************************************ 00:04:06.398 START TEST rpc_plugins 
00:04:06.398 ************************************ 00:04:06.398 20:44:57 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:06.398 20:44:57 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:06.398 20:44:57 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.398 20:44:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:06.658 20:44:57 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.658 20:44:57 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:06.658 20:44:57 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:06.658 20:44:57 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.658 20:44:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:06.658 20:44:57 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.658 20:44:57 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:06.658 { 00:04:06.658 "name": "Malloc1", 00:04:06.658 "aliases": [ 00:04:06.658 "e60dd251-6e7c-401a-9c49-319f5bb75fc7" 00:04:06.658 ], 00:04:06.658 "product_name": "Malloc disk", 00:04:06.658 "block_size": 4096, 00:04:06.658 "num_blocks": 256, 00:04:06.658 "uuid": "e60dd251-6e7c-401a-9c49-319f5bb75fc7", 00:04:06.658 "assigned_rate_limits": { 00:04:06.658 "rw_ios_per_sec": 0, 00:04:06.658 "rw_mbytes_per_sec": 0, 00:04:06.658 "r_mbytes_per_sec": 0, 00:04:06.658 "w_mbytes_per_sec": 0 00:04:06.658 }, 00:04:06.658 "claimed": false, 00:04:06.658 "zoned": false, 00:04:06.658 "supported_io_types": { 00:04:06.658 "read": true, 00:04:06.658 "write": true, 00:04:06.658 "unmap": true, 00:04:06.658 "flush": true, 00:04:06.658 "reset": true, 00:04:06.658 "nvme_admin": false, 00:04:06.658 "nvme_io": false, 00:04:06.658 "nvme_io_md": false, 00:04:06.658 "write_zeroes": true, 00:04:06.658 "zcopy": true, 00:04:06.658 "get_zone_info": false, 00:04:06.658 "zone_management": false, 00:04:06.658 
"zone_append": false, 00:04:06.658 "compare": false, 00:04:06.658 "compare_and_write": false, 00:04:06.658 "abort": true, 00:04:06.658 "seek_hole": false, 00:04:06.658 "seek_data": false, 00:04:06.658 "copy": true, 00:04:06.658 "nvme_iov_md": false 00:04:06.658 }, 00:04:06.658 "memory_domains": [ 00:04:06.658 { 00:04:06.658 "dma_device_id": "system", 00:04:06.658 "dma_device_type": 1 00:04:06.658 }, 00:04:06.658 { 00:04:06.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:06.658 "dma_device_type": 2 00:04:06.658 } 00:04:06.658 ], 00:04:06.658 "driver_specific": {} 00:04:06.659 } 00:04:06.659 ]' 00:04:06.659 20:44:57 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:06.659 20:44:57 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:06.659 20:44:57 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:06.659 20:44:57 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.659 20:44:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:06.659 20:44:57 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.659 20:44:57 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:06.659 20:44:57 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.659 20:44:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:06.659 20:44:57 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.659 20:44:57 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:06.659 20:44:57 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:06.659 20:44:57 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:06.659 00:04:06.659 real 0m0.123s 00:04:06.659 user 0m0.072s 00:04:06.659 sys 0m0.015s 00:04:06.659 20:44:57 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:06.659 20:44:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:06.659 ************************************ 
00:04:06.659 END TEST rpc_plugins 00:04:06.659 ************************************ 00:04:06.659 20:44:57 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:06.659 20:44:57 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:06.659 20:44:57 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:06.659 20:44:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:06.659 ************************************ 00:04:06.659 START TEST rpc_trace_cmd_test 00:04:06.659 ************************************ 00:04:06.659 20:44:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:06.659 20:44:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:06.659 20:44:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:06.659 20:44:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.659 20:44:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:06.659 20:44:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.659 20:44:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:06.659 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3843041", 00:04:06.659 "tpoint_group_mask": "0x8", 00:04:06.659 "iscsi_conn": { 00:04:06.659 "mask": "0x2", 00:04:06.659 "tpoint_mask": "0x0" 00:04:06.659 }, 00:04:06.659 "scsi": { 00:04:06.659 "mask": "0x4", 00:04:06.659 "tpoint_mask": "0x0" 00:04:06.659 }, 00:04:06.659 "bdev": { 00:04:06.659 "mask": "0x8", 00:04:06.659 "tpoint_mask": "0xffffffffffffffff" 00:04:06.659 }, 00:04:06.659 "nvmf_rdma": { 00:04:06.659 "mask": "0x10", 00:04:06.659 "tpoint_mask": "0x0" 00:04:06.659 }, 00:04:06.659 "nvmf_tcp": { 00:04:06.659 "mask": "0x20", 00:04:06.659 "tpoint_mask": "0x0" 00:04:06.659 }, 00:04:06.659 "ftl": { 00:04:06.659 "mask": "0x40", 00:04:06.659 "tpoint_mask": "0x0" 00:04:06.659 }, 00:04:06.659 "blobfs": { 00:04:06.659 "mask": "0x80", 00:04:06.659 
"tpoint_mask": "0x0" 00:04:06.659 }, 00:04:06.659 "dsa": { 00:04:06.659 "mask": "0x200", 00:04:06.659 "tpoint_mask": "0x0" 00:04:06.659 }, 00:04:06.659 "thread": { 00:04:06.659 "mask": "0x400", 00:04:06.659 "tpoint_mask": "0x0" 00:04:06.659 }, 00:04:06.659 "nvme_pcie": { 00:04:06.659 "mask": "0x800", 00:04:06.659 "tpoint_mask": "0x0" 00:04:06.659 }, 00:04:06.659 "iaa": { 00:04:06.659 "mask": "0x1000", 00:04:06.659 "tpoint_mask": "0x0" 00:04:06.659 }, 00:04:06.659 "nvme_tcp": { 00:04:06.659 "mask": "0x2000", 00:04:06.659 "tpoint_mask": "0x0" 00:04:06.659 }, 00:04:06.659 "bdev_nvme": { 00:04:06.659 "mask": "0x4000", 00:04:06.659 "tpoint_mask": "0x0" 00:04:06.659 }, 00:04:06.659 "sock": { 00:04:06.659 "mask": "0x8000", 00:04:06.659 "tpoint_mask": "0x0" 00:04:06.659 }, 00:04:06.659 "blob": { 00:04:06.659 "mask": "0x10000", 00:04:06.659 "tpoint_mask": "0x0" 00:04:06.659 }, 00:04:06.659 "bdev_raid": { 00:04:06.659 "mask": "0x20000", 00:04:06.659 "tpoint_mask": "0x0" 00:04:06.659 }, 00:04:06.659 "scheduler": { 00:04:06.659 "mask": "0x40000", 00:04:06.659 "tpoint_mask": "0x0" 00:04:06.659 } 00:04:06.659 }' 00:04:06.659 20:44:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:06.659 20:44:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:06.659 20:44:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:06.659 20:44:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:06.659 20:44:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:06.919 20:44:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:06.919 20:44:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:06.919 20:44:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:06.919 20:44:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:06.919 20:44:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:04:06.919 00:04:06.919 real 0m0.195s 00:04:06.919 user 0m0.174s 00:04:06.919 sys 0m0.013s 00:04:06.919 20:44:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:06.919 20:44:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:06.919 ************************************ 00:04:06.919 END TEST rpc_trace_cmd_test 00:04:06.919 ************************************ 00:04:06.919 20:44:57 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:06.919 20:44:57 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:06.919 20:44:57 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:06.919 20:44:57 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:06.919 20:44:57 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:06.919 20:44:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:06.919 ************************************ 00:04:06.919 START TEST rpc_daemon_integrity 00:04:06.919 ************************************ 00:04:06.919 20:44:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:06.919 20:44:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:06.919 20:44:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.919 20:44:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.919 20:44:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.919 20:44:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:06.919 20:44:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:06.919 20:44:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:06.919 20:44:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:06.919 20:44:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.919 20:44:57 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:04:06.919 20:44:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.919 20:44:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:06.919 20:44:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:06.919 20:44:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.919 20:44:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.919 20:44:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.919 20:44:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:06.919 { 00:04:06.919 "name": "Malloc2", 00:04:06.919 "aliases": [ 00:04:06.919 "ddb05128-9877-46eb-a1ef-a59e52e95af8" 00:04:06.919 ], 00:04:06.919 "product_name": "Malloc disk", 00:04:06.919 "block_size": 512, 00:04:06.919 "num_blocks": 16384, 00:04:06.919 "uuid": "ddb05128-9877-46eb-a1ef-a59e52e95af8", 00:04:06.919 "assigned_rate_limits": { 00:04:06.919 "rw_ios_per_sec": 0, 00:04:06.919 "rw_mbytes_per_sec": 0, 00:04:06.919 "r_mbytes_per_sec": 0, 00:04:06.919 "w_mbytes_per_sec": 0 00:04:06.919 }, 00:04:06.919 "claimed": false, 00:04:06.919 "zoned": false, 00:04:06.920 "supported_io_types": { 00:04:06.920 "read": true, 00:04:06.920 "write": true, 00:04:06.920 "unmap": true, 00:04:06.920 "flush": true, 00:04:06.920 "reset": true, 00:04:06.920 "nvme_admin": false, 00:04:06.920 "nvme_io": false, 00:04:06.920 "nvme_io_md": false, 00:04:06.920 "write_zeroes": true, 00:04:06.920 "zcopy": true, 00:04:06.920 "get_zone_info": false, 00:04:06.920 "zone_management": false, 00:04:06.920 "zone_append": false, 00:04:06.920 "compare": false, 00:04:06.920 "compare_and_write": false, 00:04:06.920 "abort": true, 00:04:06.920 "seek_hole": false, 00:04:06.920 "seek_data": false, 00:04:06.920 "copy": true, 00:04:06.920 "nvme_iov_md": false 00:04:06.920 }, 00:04:06.920 "memory_domains": [ 00:04:06.920 { 
00:04:06.920 "dma_device_id": "system", 00:04:06.920 "dma_device_type": 1 00:04:06.920 }, 00:04:06.920 { 00:04:06.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:06.920 "dma_device_type": 2 00:04:06.920 } 00:04:06.920 ], 00:04:06.920 "driver_specific": {} 00:04:06.920 } 00:04:06.920 ]' 00:04:06.920 20:44:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:06.920 20:44:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:06.920 20:44:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:06.920 20:44:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.920 20:44:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.920 [2024-11-26 20:44:57.843451] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:06.920 [2024-11-26 20:44:57.843502] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:06.920 [2024-11-26 20:44:57.843531] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1566fc0 00:04:06.920 [2024-11-26 20:44:57.843546] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:06.920 [2024-11-26 20:44:57.844907] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:06.920 [2024-11-26 20:44:57.844935] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:06.920 Passthru0 00:04:06.920 20:44:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.920 20:44:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:06.920 20:44:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.920 20:44:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.179 20:44:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:04:07.179 20:44:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:07.179 { 00:04:07.179 "name": "Malloc2", 00:04:07.179 "aliases": [ 00:04:07.179 "ddb05128-9877-46eb-a1ef-a59e52e95af8" 00:04:07.179 ], 00:04:07.179 "product_name": "Malloc disk", 00:04:07.179 "block_size": 512, 00:04:07.179 "num_blocks": 16384, 00:04:07.179 "uuid": "ddb05128-9877-46eb-a1ef-a59e52e95af8", 00:04:07.179 "assigned_rate_limits": { 00:04:07.179 "rw_ios_per_sec": 0, 00:04:07.179 "rw_mbytes_per_sec": 0, 00:04:07.179 "r_mbytes_per_sec": 0, 00:04:07.179 "w_mbytes_per_sec": 0 00:04:07.179 }, 00:04:07.179 "claimed": true, 00:04:07.179 "claim_type": "exclusive_write", 00:04:07.179 "zoned": false, 00:04:07.179 "supported_io_types": { 00:04:07.179 "read": true, 00:04:07.179 "write": true, 00:04:07.179 "unmap": true, 00:04:07.179 "flush": true, 00:04:07.179 "reset": true, 00:04:07.179 "nvme_admin": false, 00:04:07.179 "nvme_io": false, 00:04:07.179 "nvme_io_md": false, 00:04:07.179 "write_zeroes": true, 00:04:07.179 "zcopy": true, 00:04:07.179 "get_zone_info": false, 00:04:07.179 "zone_management": false, 00:04:07.179 "zone_append": false, 00:04:07.179 "compare": false, 00:04:07.179 "compare_and_write": false, 00:04:07.179 "abort": true, 00:04:07.179 "seek_hole": false, 00:04:07.179 "seek_data": false, 00:04:07.179 "copy": true, 00:04:07.179 "nvme_iov_md": false 00:04:07.179 }, 00:04:07.179 "memory_domains": [ 00:04:07.179 { 00:04:07.179 "dma_device_id": "system", 00:04:07.179 "dma_device_type": 1 00:04:07.179 }, 00:04:07.179 { 00:04:07.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:07.179 "dma_device_type": 2 00:04:07.179 } 00:04:07.179 ], 00:04:07.179 "driver_specific": {} 00:04:07.179 }, 00:04:07.179 { 00:04:07.179 "name": "Passthru0", 00:04:07.179 "aliases": [ 00:04:07.179 "ecc9ef35-2b34-5b2b-89df-c85da9442eb6" 00:04:07.179 ], 00:04:07.179 "product_name": "passthru", 00:04:07.179 "block_size": 512, 00:04:07.179 "num_blocks": 16384, 00:04:07.179 "uuid": 
"ecc9ef35-2b34-5b2b-89df-c85da9442eb6", 00:04:07.179 "assigned_rate_limits": { 00:04:07.179 "rw_ios_per_sec": 0, 00:04:07.179 "rw_mbytes_per_sec": 0, 00:04:07.179 "r_mbytes_per_sec": 0, 00:04:07.179 "w_mbytes_per_sec": 0 00:04:07.179 }, 00:04:07.179 "claimed": false, 00:04:07.179 "zoned": false, 00:04:07.179 "supported_io_types": { 00:04:07.179 "read": true, 00:04:07.179 "write": true, 00:04:07.179 "unmap": true, 00:04:07.179 "flush": true, 00:04:07.179 "reset": true, 00:04:07.179 "nvme_admin": false, 00:04:07.179 "nvme_io": false, 00:04:07.179 "nvme_io_md": false, 00:04:07.179 "write_zeroes": true, 00:04:07.179 "zcopy": true, 00:04:07.179 "get_zone_info": false, 00:04:07.179 "zone_management": false, 00:04:07.179 "zone_append": false, 00:04:07.179 "compare": false, 00:04:07.179 "compare_and_write": false, 00:04:07.179 "abort": true, 00:04:07.179 "seek_hole": false, 00:04:07.179 "seek_data": false, 00:04:07.179 "copy": true, 00:04:07.179 "nvme_iov_md": false 00:04:07.179 }, 00:04:07.179 "memory_domains": [ 00:04:07.179 { 00:04:07.179 "dma_device_id": "system", 00:04:07.179 "dma_device_type": 1 00:04:07.179 }, 00:04:07.179 { 00:04:07.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:07.179 "dma_device_type": 2 00:04:07.179 } 00:04:07.179 ], 00:04:07.179 "driver_specific": { 00:04:07.179 "passthru": { 00:04:07.179 "name": "Passthru0", 00:04:07.179 "base_bdev_name": "Malloc2" 00:04:07.179 } 00:04:07.179 } 00:04:07.179 } 00:04:07.179 ]' 00:04:07.179 20:44:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:07.179 20:44:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:07.179 20:44:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:07.179 20:44:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:07.179 20:44:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.179 20:44:57 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:07.179 20:44:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:07.179 20:44:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:07.179 20:44:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.179 20:44:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:07.179 20:44:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:07.179 20:44:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:07.179 20:44:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.179 20:44:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:07.179 20:44:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:07.179 20:44:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:07.179 20:44:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:07.179 00:04:07.179 real 0m0.230s 00:04:07.179 user 0m0.162s 00:04:07.179 sys 0m0.013s 00:04:07.179 20:44:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:07.179 20:44:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.179 ************************************ 00:04:07.179 END TEST rpc_daemon_integrity 00:04:07.179 ************************************ 00:04:07.179 20:44:57 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:07.179 20:44:57 rpc -- rpc/rpc.sh@84 -- # killprocess 3843041 00:04:07.179 20:44:57 rpc -- common/autotest_common.sh@954 -- # '[' -z 3843041 ']' 00:04:07.179 20:44:57 rpc -- common/autotest_common.sh@958 -- # kill -0 3843041 00:04:07.179 20:44:57 rpc -- common/autotest_common.sh@959 -- # uname 00:04:07.179 20:44:57 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:07.179 20:44:57 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3843041 00:04:07.179 20:44:58 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:07.179 20:44:58 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:07.179 20:44:58 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3843041' 00:04:07.179 killing process with pid 3843041 00:04:07.179 20:44:58 rpc -- common/autotest_common.sh@973 -- # kill 3843041 00:04:07.179 20:44:58 rpc -- common/autotest_common.sh@978 -- # wait 3843041 00:04:07.745 00:04:07.745 real 0m2.062s 00:04:07.745 user 0m2.559s 00:04:07.745 sys 0m0.631s 00:04:07.745 20:44:58 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:07.745 20:44:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:07.745 ************************************ 00:04:07.745 END TEST rpc 00:04:07.745 ************************************ 00:04:07.745 20:44:58 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:07.745 20:44:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:07.745 20:44:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:07.745 20:44:58 -- common/autotest_common.sh@10 -- # set +x 00:04:07.745 ************************************ 00:04:07.745 START TEST skip_rpc 00:04:07.745 ************************************ 00:04:07.745 20:44:58 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:07.745 * Looking for test storage... 
00:04:07.745 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:07.745 20:44:58 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:07.745 20:44:58 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:07.745 20:44:58 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:07.745 20:44:58 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:07.745 20:44:58 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:07.745 20:44:58 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:07.745 20:44:58 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:07.745 20:44:58 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:07.745 20:44:58 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:07.745 20:44:58 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:07.745 20:44:58 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:07.745 20:44:58 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:07.745 20:44:58 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:07.745 20:44:58 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:07.745 20:44:58 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:07.745 20:44:58 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:07.745 20:44:58 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:07.745 20:44:58 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:07.745 20:44:58 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:07.745 20:44:58 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:07.745 20:44:58 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:07.745 20:44:58 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:07.745 20:44:58 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:07.745 20:44:58 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:07.745 20:44:58 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:07.745 20:44:58 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:07.745 20:44:58 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:07.745 20:44:58 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:07.745 20:44:58 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:07.745 20:44:58 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:07.745 20:44:58 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:07.745 20:44:58 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:07.745 20:44:58 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:07.745 20:44:58 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:07.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.745 --rc genhtml_branch_coverage=1 00:04:07.745 --rc genhtml_function_coverage=1 00:04:07.745 --rc genhtml_legend=1 00:04:07.745 --rc geninfo_all_blocks=1 00:04:07.745 --rc geninfo_unexecuted_blocks=1 00:04:07.745 00:04:07.745 ' 00:04:07.745 20:44:58 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:07.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.745 --rc genhtml_branch_coverage=1 00:04:07.745 --rc genhtml_function_coverage=1 00:04:07.745 --rc genhtml_legend=1 00:04:07.745 --rc geninfo_all_blocks=1 00:04:07.745 --rc geninfo_unexecuted_blocks=1 00:04:07.745 00:04:07.745 ' 00:04:07.745 20:44:58 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:04:07.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.745 --rc genhtml_branch_coverage=1 00:04:07.745 --rc genhtml_function_coverage=1 00:04:07.745 --rc genhtml_legend=1 00:04:07.745 --rc geninfo_all_blocks=1 00:04:07.745 --rc geninfo_unexecuted_blocks=1 00:04:07.745 00:04:07.745 ' 00:04:07.745 20:44:58 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:07.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.745 --rc genhtml_branch_coverage=1 00:04:07.745 --rc genhtml_function_coverage=1 00:04:07.745 --rc genhtml_legend=1 00:04:07.745 --rc geninfo_all_blocks=1 00:04:07.745 --rc geninfo_unexecuted_blocks=1 00:04:07.745 00:04:07.745 ' 00:04:07.745 20:44:58 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:07.745 20:44:58 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:07.745 20:44:58 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:07.745 20:44:58 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:07.745 20:44:58 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:07.745 20:44:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:07.745 ************************************ 00:04:07.745 START TEST skip_rpc 00:04:07.745 ************************************ 00:04:07.745 20:44:58 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:07.745 20:44:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3843486 00:04:07.745 20:44:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:07.745 20:44:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:07.745 20:44:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:04:08.005 [2024-11-26 20:44:58.724594] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:04:08.005 [2024-11-26 20:44:58.724684] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3843486 ] 00:04:08.005 [2024-11-26 20:44:58.795824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:08.005 [2024-11-26 20:44:58.859171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:13.277 20:45:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:13.277 20:45:03 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:13.277 20:45:03 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:13.277 20:45:03 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:13.277 20:45:03 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:13.277 20:45:03 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:13.277 20:45:03 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:13.277 20:45:03 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:13.277 20:45:03 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:13.277 20:45:03 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.277 20:45:03 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:13.277 20:45:03 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:13.277 20:45:03 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:13.277 20:45:03 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:13.277 20:45:03 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:13.277 20:45:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:13.277 20:45:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3843486 00:04:13.277 20:45:03 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 3843486 ']' 00:04:13.277 20:45:03 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 3843486 00:04:13.277 20:45:03 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:13.277 20:45:03 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:13.277 20:45:03 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3843486 00:04:13.277 20:45:03 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:13.277 20:45:03 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:13.277 20:45:03 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3843486' 00:04:13.277 killing process with pid 3843486 00:04:13.277 20:45:03 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 3843486 00:04:13.277 20:45:03 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 3843486 00:04:13.277 00:04:13.277 real 0m5.495s 00:04:13.277 user 0m5.172s 00:04:13.277 sys 0m0.336s 00:04:13.277 20:45:04 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:13.277 20:45:04 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.277 ************************************ 00:04:13.277 END TEST skip_rpc 00:04:13.277 ************************************ 00:04:13.277 20:45:04 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:13.277 20:45:04 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:13.277 20:45:04 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:13.277 20:45:04 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.537 ************************************ 00:04:13.537 START TEST skip_rpc_with_json 00:04:13.537 ************************************ 00:04:13.537 20:45:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:13.537 20:45:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:13.537 20:45:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3844235 00:04:13.537 20:45:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:13.537 20:45:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:13.537 20:45:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3844235 00:04:13.537 20:45:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 3844235 ']' 00:04:13.537 20:45:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:13.537 20:45:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:13.537 20:45:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:13.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:13.537 20:45:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:13.537 20:45:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:13.537 [2024-11-26 20:45:04.272901] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:04:13.537 [2024-11-26 20:45:04.272988] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3844235 ] 00:04:13.537 [2024-11-26 20:45:04.338903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:13.537 [2024-11-26 20:45:04.398514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:13.797 20:45:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:13.797 20:45:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:13.797 20:45:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:13.797 20:45:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:13.797 20:45:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:13.798 [2024-11-26 20:45:04.675702] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:13.798 request: 00:04:13.798 { 00:04:13.798 "trtype": "tcp", 00:04:13.798 "method": "nvmf_get_transports", 00:04:13.798 "req_id": 1 00:04:13.798 } 00:04:13.798 Got JSON-RPC error response 00:04:13.798 response: 00:04:13.798 { 00:04:13.798 "code": -19, 00:04:13.798 "message": "No such device" 00:04:13.798 } 00:04:13.798 20:45:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:13.798 20:45:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:13.798 20:45:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:13.798 20:45:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:13.798 [2024-11-26 20:45:04.683835] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:13.798 20:45:04 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:13.798 20:45:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:13.798 20:45:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:13.798 20:45:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:14.057 20:45:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.057 20:45:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:14.057 { 00:04:14.057 "subsystems": [ 00:04:14.057 { 00:04:14.057 "subsystem": "fsdev", 00:04:14.057 "config": [ 00:04:14.057 { 00:04:14.057 "method": "fsdev_set_opts", 00:04:14.057 "params": { 00:04:14.057 "fsdev_io_pool_size": 65535, 00:04:14.057 "fsdev_io_cache_size": 256 00:04:14.057 } 00:04:14.057 } 00:04:14.057 ] 00:04:14.057 }, 00:04:14.057 { 00:04:14.057 "subsystem": "vfio_user_target", 00:04:14.057 "config": null 00:04:14.057 }, 00:04:14.057 { 00:04:14.057 "subsystem": "keyring", 00:04:14.057 "config": [] 00:04:14.057 }, 00:04:14.057 { 00:04:14.057 "subsystem": "iobuf", 00:04:14.057 "config": [ 00:04:14.057 { 00:04:14.057 "method": "iobuf_set_options", 00:04:14.057 "params": { 00:04:14.057 "small_pool_count": 8192, 00:04:14.057 "large_pool_count": 1024, 00:04:14.057 "small_bufsize": 8192, 00:04:14.057 "large_bufsize": 135168, 00:04:14.057 "enable_numa": false 00:04:14.057 } 00:04:14.057 } 00:04:14.057 ] 00:04:14.057 }, 00:04:14.057 { 00:04:14.057 "subsystem": "sock", 00:04:14.057 "config": [ 00:04:14.057 { 00:04:14.057 "method": "sock_set_default_impl", 00:04:14.057 "params": { 00:04:14.057 "impl_name": "posix" 00:04:14.057 } 00:04:14.057 }, 00:04:14.057 { 00:04:14.057 "method": "sock_impl_set_options", 00:04:14.057 "params": { 00:04:14.057 "impl_name": "ssl", 00:04:14.057 "recv_buf_size": 4096, 00:04:14.057 "send_buf_size": 4096, 
00:04:14.057 "enable_recv_pipe": true, 00:04:14.057 "enable_quickack": false, 00:04:14.057 "enable_placement_id": 0, 00:04:14.057 "enable_zerocopy_send_server": true, 00:04:14.057 "enable_zerocopy_send_client": false, 00:04:14.057 "zerocopy_threshold": 0, 00:04:14.057 "tls_version": 0, 00:04:14.057 "enable_ktls": false 00:04:14.057 } 00:04:14.057 }, 00:04:14.057 { 00:04:14.057 "method": "sock_impl_set_options", 00:04:14.057 "params": { 00:04:14.057 "impl_name": "posix", 00:04:14.057 "recv_buf_size": 2097152, 00:04:14.057 "send_buf_size": 2097152, 00:04:14.057 "enable_recv_pipe": true, 00:04:14.057 "enable_quickack": false, 00:04:14.057 "enable_placement_id": 0, 00:04:14.057 "enable_zerocopy_send_server": true, 00:04:14.057 "enable_zerocopy_send_client": false, 00:04:14.057 "zerocopy_threshold": 0, 00:04:14.057 "tls_version": 0, 00:04:14.057 "enable_ktls": false 00:04:14.057 } 00:04:14.057 } 00:04:14.057 ] 00:04:14.057 }, 00:04:14.057 { 00:04:14.057 "subsystem": "vmd", 00:04:14.057 "config": [] 00:04:14.057 }, 00:04:14.057 { 00:04:14.057 "subsystem": "accel", 00:04:14.057 "config": [ 00:04:14.057 { 00:04:14.057 "method": "accel_set_options", 00:04:14.057 "params": { 00:04:14.057 "small_cache_size": 128, 00:04:14.057 "large_cache_size": 16, 00:04:14.057 "task_count": 2048, 00:04:14.057 "sequence_count": 2048, 00:04:14.057 "buf_count": 2048 00:04:14.057 } 00:04:14.057 } 00:04:14.057 ] 00:04:14.057 }, 00:04:14.057 { 00:04:14.057 "subsystem": "bdev", 00:04:14.057 "config": [ 00:04:14.057 { 00:04:14.057 "method": "bdev_set_options", 00:04:14.057 "params": { 00:04:14.057 "bdev_io_pool_size": 65535, 00:04:14.057 "bdev_io_cache_size": 256, 00:04:14.057 "bdev_auto_examine": true, 00:04:14.057 "iobuf_small_cache_size": 128, 00:04:14.057 "iobuf_large_cache_size": 16 00:04:14.057 } 00:04:14.057 }, 00:04:14.057 { 00:04:14.057 "method": "bdev_raid_set_options", 00:04:14.057 "params": { 00:04:14.057 "process_window_size_kb": 1024, 00:04:14.057 "process_max_bandwidth_mb_sec": 0 
00:04:14.057 } 00:04:14.057 }, 00:04:14.057 { 00:04:14.057 "method": "bdev_iscsi_set_options", 00:04:14.057 "params": { 00:04:14.057 "timeout_sec": 30 00:04:14.057 } 00:04:14.057 }, 00:04:14.057 { 00:04:14.057 "method": "bdev_nvme_set_options", 00:04:14.057 "params": { 00:04:14.057 "action_on_timeout": "none", 00:04:14.057 "timeout_us": 0, 00:04:14.057 "timeout_admin_us": 0, 00:04:14.057 "keep_alive_timeout_ms": 10000, 00:04:14.057 "arbitration_burst": 0, 00:04:14.057 "low_priority_weight": 0, 00:04:14.057 "medium_priority_weight": 0, 00:04:14.057 "high_priority_weight": 0, 00:04:14.057 "nvme_adminq_poll_period_us": 10000, 00:04:14.057 "nvme_ioq_poll_period_us": 0, 00:04:14.057 "io_queue_requests": 0, 00:04:14.057 "delay_cmd_submit": true, 00:04:14.057 "transport_retry_count": 4, 00:04:14.057 "bdev_retry_count": 3, 00:04:14.057 "transport_ack_timeout": 0, 00:04:14.057 "ctrlr_loss_timeout_sec": 0, 00:04:14.057 "reconnect_delay_sec": 0, 00:04:14.057 "fast_io_fail_timeout_sec": 0, 00:04:14.057 "disable_auto_failback": false, 00:04:14.057 "generate_uuids": false, 00:04:14.057 "transport_tos": 0, 00:04:14.057 "nvme_error_stat": false, 00:04:14.057 "rdma_srq_size": 0, 00:04:14.057 "io_path_stat": false, 00:04:14.057 "allow_accel_sequence": false, 00:04:14.057 "rdma_max_cq_size": 0, 00:04:14.057 "rdma_cm_event_timeout_ms": 0, 00:04:14.057 "dhchap_digests": [ 00:04:14.057 "sha256", 00:04:14.057 "sha384", 00:04:14.057 "sha512" 00:04:14.057 ], 00:04:14.057 "dhchap_dhgroups": [ 00:04:14.057 "null", 00:04:14.057 "ffdhe2048", 00:04:14.057 "ffdhe3072", 00:04:14.057 "ffdhe4096", 00:04:14.057 "ffdhe6144", 00:04:14.057 "ffdhe8192" 00:04:14.057 ] 00:04:14.057 } 00:04:14.057 }, 00:04:14.057 { 00:04:14.057 "method": "bdev_nvme_set_hotplug", 00:04:14.057 "params": { 00:04:14.057 "period_us": 100000, 00:04:14.057 "enable": false 00:04:14.057 } 00:04:14.057 }, 00:04:14.057 { 00:04:14.057 "method": "bdev_wait_for_examine" 00:04:14.057 } 00:04:14.057 ] 00:04:14.057 }, 00:04:14.057 { 
00:04:14.057 "subsystem": "scsi", 00:04:14.057 "config": null 00:04:14.057 }, 00:04:14.057 { 00:04:14.057 "subsystem": "scheduler", 00:04:14.057 "config": [ 00:04:14.057 { 00:04:14.057 "method": "framework_set_scheduler", 00:04:14.057 "params": { 00:04:14.057 "name": "static" 00:04:14.057 } 00:04:14.057 } 00:04:14.057 ] 00:04:14.057 }, 00:04:14.057 { 00:04:14.057 "subsystem": "vhost_scsi", 00:04:14.057 "config": [] 00:04:14.057 }, 00:04:14.057 { 00:04:14.057 "subsystem": "vhost_blk", 00:04:14.057 "config": [] 00:04:14.057 }, 00:04:14.057 { 00:04:14.057 "subsystem": "ublk", 00:04:14.057 "config": [] 00:04:14.057 }, 00:04:14.057 { 00:04:14.057 "subsystem": "nbd", 00:04:14.057 "config": [] 00:04:14.057 }, 00:04:14.057 { 00:04:14.057 "subsystem": "nvmf", 00:04:14.057 "config": [ 00:04:14.057 { 00:04:14.057 "method": "nvmf_set_config", 00:04:14.057 "params": { 00:04:14.057 "discovery_filter": "match_any", 00:04:14.057 "admin_cmd_passthru": { 00:04:14.057 "identify_ctrlr": false 00:04:14.057 }, 00:04:14.057 "dhchap_digests": [ 00:04:14.057 "sha256", 00:04:14.057 "sha384", 00:04:14.057 "sha512" 00:04:14.057 ], 00:04:14.057 "dhchap_dhgroups": [ 00:04:14.057 "null", 00:04:14.057 "ffdhe2048", 00:04:14.057 "ffdhe3072", 00:04:14.057 "ffdhe4096", 00:04:14.057 "ffdhe6144", 00:04:14.057 "ffdhe8192" 00:04:14.057 ] 00:04:14.057 } 00:04:14.057 }, 00:04:14.057 { 00:04:14.057 "method": "nvmf_set_max_subsystems", 00:04:14.057 "params": { 00:04:14.057 "max_subsystems": 1024 00:04:14.057 } 00:04:14.057 }, 00:04:14.058 { 00:04:14.058 "method": "nvmf_set_crdt", 00:04:14.058 "params": { 00:04:14.058 "crdt1": 0, 00:04:14.058 "crdt2": 0, 00:04:14.058 "crdt3": 0 00:04:14.058 } 00:04:14.058 }, 00:04:14.058 { 00:04:14.058 "method": "nvmf_create_transport", 00:04:14.058 "params": { 00:04:14.058 "trtype": "TCP", 00:04:14.058 "max_queue_depth": 128, 00:04:14.058 "max_io_qpairs_per_ctrlr": 127, 00:04:14.058 "in_capsule_data_size": 4096, 00:04:14.058 "max_io_size": 131072, 00:04:14.058 
"io_unit_size": 131072, 00:04:14.058 "max_aq_depth": 128, 00:04:14.058 "num_shared_buffers": 511, 00:04:14.058 "buf_cache_size": 4294967295, 00:04:14.058 "dif_insert_or_strip": false, 00:04:14.058 "zcopy": false, 00:04:14.058 "c2h_success": true, 00:04:14.058 "sock_priority": 0, 00:04:14.058 "abort_timeout_sec": 1, 00:04:14.058 "ack_timeout": 0, 00:04:14.058 "data_wr_pool_size": 0 00:04:14.058 } 00:04:14.058 } 00:04:14.058 ] 00:04:14.058 }, 00:04:14.058 { 00:04:14.058 "subsystem": "iscsi", 00:04:14.058 "config": [ 00:04:14.058 { 00:04:14.058 "method": "iscsi_set_options", 00:04:14.058 "params": { 00:04:14.058 "node_base": "iqn.2016-06.io.spdk", 00:04:14.058 "max_sessions": 128, 00:04:14.058 "max_connections_per_session": 2, 00:04:14.058 "max_queue_depth": 64, 00:04:14.058 "default_time2wait": 2, 00:04:14.058 "default_time2retain": 20, 00:04:14.058 "first_burst_length": 8192, 00:04:14.058 "immediate_data": true, 00:04:14.058 "allow_duplicated_isid": false, 00:04:14.058 "error_recovery_level": 0, 00:04:14.058 "nop_timeout": 60, 00:04:14.058 "nop_in_interval": 30, 00:04:14.058 "disable_chap": false, 00:04:14.058 "require_chap": false, 00:04:14.058 "mutual_chap": false, 00:04:14.058 "chap_group": 0, 00:04:14.058 "max_large_datain_per_connection": 64, 00:04:14.058 "max_r2t_per_connection": 4, 00:04:14.058 "pdu_pool_size": 36864, 00:04:14.058 "immediate_data_pool_size": 16384, 00:04:14.058 "data_out_pool_size": 2048 00:04:14.058 } 00:04:14.058 } 00:04:14.058 ] 00:04:14.058 } 00:04:14.058 ] 00:04:14.058 } 00:04:14.058 20:45:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:14.058 20:45:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3844235 00:04:14.058 20:45:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 3844235 ']' 00:04:14.058 20:45:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 3844235 00:04:14.058 20:45:04 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # uname 00:04:14.058 20:45:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:14.058 20:45:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3844235 00:04:14.058 20:45:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:14.058 20:45:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:14.058 20:45:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3844235' 00:04:14.058 killing process with pid 3844235 00:04:14.058 20:45:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 3844235 00:04:14.058 20:45:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 3844235 00:04:14.627 20:45:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3844337 00:04:14.627 20:45:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:14.627 20:45:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:19.890 20:45:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3844337 00:04:19.890 20:45:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 3844337 ']' 00:04:19.890 20:45:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 3844337 00:04:19.890 20:45:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:19.890 20:45:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:19.890 20:45:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3844337 00:04:19.890 20:45:10 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:19.890 20:45:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:19.890 20:45:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3844337' 00:04:19.890 killing process with pid 3844337 00:04:19.890 20:45:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 3844337 00:04:19.890 20:45:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 3844337 00:04:19.890 20:45:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:19.890 20:45:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:19.890 00:04:19.890 real 0m6.584s 00:04:19.890 user 0m6.200s 00:04:19.890 sys 0m0.721s 00:04:19.890 20:45:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:19.890 20:45:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:19.890 ************************************ 00:04:19.890 END TEST skip_rpc_with_json 00:04:19.890 ************************************ 00:04:19.890 20:45:10 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:19.890 20:45:10 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:19.890 20:45:10 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:19.890 20:45:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.149 ************************************ 00:04:20.149 START TEST skip_rpc_with_delay 00:04:20.149 ************************************ 00:04:20.149 20:45:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:20.149 20:45:10 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:20.149 20:45:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:20.149 20:45:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:20.149 20:45:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:20.149 20:45:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:20.149 20:45:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:20.149 20:45:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:20.149 20:45:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:20.149 20:45:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:20.149 20:45:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:20.149 20:45:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:20.149 20:45:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:20.149 [2024-11-26 20:45:10.910534] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:20.149 20:45:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:20.149 20:45:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:20.149 20:45:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:20.149 20:45:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:20.149 00:04:20.149 real 0m0.075s 00:04:20.149 user 0m0.049s 00:04:20.149 sys 0m0.025s 00:04:20.149 20:45:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:20.149 20:45:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:20.149 ************************************ 00:04:20.149 END TEST skip_rpc_with_delay 00:04:20.149 ************************************ 00:04:20.149 20:45:10 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:20.149 20:45:10 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:20.149 20:45:10 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:20.149 20:45:10 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:20.149 20:45:10 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:20.149 20:45:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.149 ************************************ 00:04:20.149 START TEST exit_on_failed_rpc_init 00:04:20.149 ************************************ 00:04:20.149 20:45:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:20.149 20:45:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3845547 00:04:20.149 20:45:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:20.149 20:45:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3845547 
00:04:20.149 20:45:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 3845547 ']' 00:04:20.149 20:45:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:20.149 20:45:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:20.149 20:45:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:20.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:20.149 20:45:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:20.149 20:45:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:20.149 [2024-11-26 20:45:11.031144] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:04:20.149 [2024-11-26 20:45:11.031218] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3845547 ] 00:04:20.408 [2024-11-26 20:45:11.103934] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:20.408 [2024-11-26 20:45:11.166940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:20.666 20:45:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:20.666 20:45:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:20.666 20:45:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:20.666 20:45:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:20.666 
20:45:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:20.666 20:45:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:20.666 20:45:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:20.666 20:45:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:20.666 20:45:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:20.666 20:45:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:20.666 20:45:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:20.666 20:45:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:20.666 20:45:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:20.667 20:45:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:20.667 20:45:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:20.667 [2024-11-26 20:45:11.511401] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:04:20.667 [2024-11-26 20:45:11.511477] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3845667 ] 00:04:20.667 [2024-11-26 20:45:11.581878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:20.925 [2024-11-26 20:45:11.647160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:20.925 [2024-11-26 20:45:11.647284] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:20.925 [2024-11-26 20:45:11.647307] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:20.925 [2024-11-26 20:45:11.647320] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:20.925 20:45:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:20.925 20:45:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:20.925 20:45:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:20.925 20:45:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:20.925 20:45:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:20.925 20:45:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:20.925 20:45:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:20.925 20:45:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3845547 00:04:20.925 20:45:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 3845547 ']' 00:04:20.925 20:45:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 3845547 00:04:20.925 20:45:11 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:20.925 20:45:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:20.925 20:45:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3845547 00:04:20.925 20:45:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:20.925 20:45:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:20.925 20:45:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3845547' 00:04:20.925 killing process with pid 3845547 00:04:20.925 20:45:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 3845547 00:04:20.925 20:45:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 3845547 00:04:21.491 00:04:21.491 real 0m1.244s 00:04:21.491 user 0m1.344s 00:04:21.491 sys 0m0.464s 00:04:21.491 20:45:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:21.491 20:45:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:21.491 ************************************ 00:04:21.491 END TEST exit_on_failed_rpc_init 00:04:21.491 ************************************ 00:04:21.491 20:45:12 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:21.492 00:04:21.492 real 0m13.729s 00:04:21.492 user 0m12.914s 00:04:21.492 sys 0m1.748s 00:04:21.492 20:45:12 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:21.492 20:45:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.492 ************************************ 00:04:21.492 END TEST skip_rpc 00:04:21.492 ************************************ 00:04:21.492 20:45:12 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:21.492 20:45:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:21.492 20:45:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:21.492 20:45:12 -- common/autotest_common.sh@10 -- # set +x 00:04:21.492 ************************************ 00:04:21.492 START TEST rpc_client 00:04:21.492 ************************************ 00:04:21.492 20:45:12 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:21.492 * Looking for test storage... 00:04:21.492 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:21.492 20:45:12 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:21.492 20:45:12 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:04:21.492 20:45:12 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:21.492 20:45:12 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:21.492 20:45:12 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:21.492 20:45:12 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:21.492 20:45:12 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:21.492 20:45:12 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:21.492 20:45:12 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:21.492 20:45:12 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:21.492 20:45:12 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:21.492 20:45:12 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:21.492 20:45:12 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:21.492 20:45:12 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:21.492 20:45:12 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:21.492 20:45:12 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:04:21.492 20:45:12 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:21.492 20:45:12 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:21.492 20:45:12 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:21.492 20:45:12 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:21.492 20:45:12 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:21.492 20:45:12 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:21.492 20:45:12 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:21.492 20:45:12 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:21.492 20:45:12 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:21.492 20:45:12 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:21.492 20:45:12 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:21.492 20:45:12 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:21.492 20:45:12 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:21.492 20:45:12 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:21.492 20:45:12 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:21.492 20:45:12 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:21.492 20:45:12 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:21.492 20:45:12 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:21.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.492 --rc genhtml_branch_coverage=1 00:04:21.492 --rc genhtml_function_coverage=1 00:04:21.492 --rc genhtml_legend=1 00:04:21.492 --rc geninfo_all_blocks=1 00:04:21.492 --rc geninfo_unexecuted_blocks=1 00:04:21.492 00:04:21.492 ' 00:04:21.492 20:45:12 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:21.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.492 --rc genhtml_branch_coverage=1 
00:04:21.492 --rc genhtml_function_coverage=1 00:04:21.492 --rc genhtml_legend=1 00:04:21.492 --rc geninfo_all_blocks=1 00:04:21.492 --rc geninfo_unexecuted_blocks=1 00:04:21.492 00:04:21.492 ' 00:04:21.492 20:45:12 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:21.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.492 --rc genhtml_branch_coverage=1 00:04:21.492 --rc genhtml_function_coverage=1 00:04:21.492 --rc genhtml_legend=1 00:04:21.492 --rc geninfo_all_blocks=1 00:04:21.492 --rc geninfo_unexecuted_blocks=1 00:04:21.492 00:04:21.492 ' 00:04:21.492 20:45:12 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:21.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.492 --rc genhtml_branch_coverage=1 00:04:21.492 --rc genhtml_function_coverage=1 00:04:21.492 --rc genhtml_legend=1 00:04:21.492 --rc geninfo_all_blocks=1 00:04:21.492 --rc geninfo_unexecuted_blocks=1 00:04:21.492 00:04:21.492 ' 00:04:21.751 20:45:12 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:21.751 OK 00:04:21.751 20:45:12 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:21.751 00:04:21.751 real 0m0.153s 00:04:21.751 user 0m0.100s 00:04:21.751 sys 0m0.060s 00:04:21.751 20:45:12 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:21.751 20:45:12 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:21.751 ************************************ 00:04:21.751 END TEST rpc_client 00:04:21.751 ************************************ 00:04:21.751 20:45:12 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:21.751 20:45:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:21.751 20:45:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:21.751 20:45:12 -- common/autotest_common.sh@10 
-- # set +x 00:04:21.751 ************************************ 00:04:21.751 START TEST json_config 00:04:21.751 ************************************ 00:04:21.751 20:45:12 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:21.751 20:45:12 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:21.751 20:45:12 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:04:21.751 20:45:12 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:21.751 20:45:12 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:21.751 20:45:12 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:21.751 20:45:12 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:21.751 20:45:12 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:21.751 20:45:12 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:21.751 20:45:12 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:21.751 20:45:12 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:21.751 20:45:12 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:21.751 20:45:12 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:21.751 20:45:12 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:21.751 20:45:12 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:21.751 20:45:12 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:21.751 20:45:12 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:21.751 20:45:12 json_config -- scripts/common.sh@345 -- # : 1 00:04:21.751 20:45:12 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:21.751 20:45:12 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:21.751 20:45:12 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:21.751 20:45:12 json_config -- scripts/common.sh@353 -- # local d=1 00:04:21.751 20:45:12 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:21.751 20:45:12 json_config -- scripts/common.sh@355 -- # echo 1 00:04:21.751 20:45:12 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:21.751 20:45:12 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:21.751 20:45:12 json_config -- scripts/common.sh@353 -- # local d=2 00:04:21.751 20:45:12 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:21.751 20:45:12 json_config -- scripts/common.sh@355 -- # echo 2 00:04:21.751 20:45:12 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:21.751 20:45:12 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:21.751 20:45:12 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:21.751 20:45:12 json_config -- scripts/common.sh@368 -- # return 0 00:04:21.751 20:45:12 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:21.751 20:45:12 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:21.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.751 --rc genhtml_branch_coverage=1 00:04:21.751 --rc genhtml_function_coverage=1 00:04:21.751 --rc genhtml_legend=1 00:04:21.751 --rc geninfo_all_blocks=1 00:04:21.751 --rc geninfo_unexecuted_blocks=1 00:04:21.751 00:04:21.751 ' 00:04:21.751 20:45:12 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:21.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.751 --rc genhtml_branch_coverage=1 00:04:21.751 --rc genhtml_function_coverage=1 00:04:21.751 --rc genhtml_legend=1 00:04:21.751 --rc geninfo_all_blocks=1 00:04:21.751 --rc geninfo_unexecuted_blocks=1 00:04:21.751 00:04:21.751 ' 00:04:21.751 20:45:12 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:21.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.751 --rc genhtml_branch_coverage=1 00:04:21.751 --rc genhtml_function_coverage=1 00:04:21.751 --rc genhtml_legend=1 00:04:21.751 --rc geninfo_all_blocks=1 00:04:21.751 --rc geninfo_unexecuted_blocks=1 00:04:21.751 00:04:21.751 ' 00:04:21.751 20:45:12 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:21.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.751 --rc genhtml_branch_coverage=1 00:04:21.751 --rc genhtml_function_coverage=1 00:04:21.751 --rc genhtml_legend=1 00:04:21.751 --rc geninfo_all_blocks=1 00:04:21.751 --rc geninfo_unexecuted_blocks=1 00:04:21.751 00:04:21.751 ' 00:04:21.751 20:45:12 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:21.751 20:45:12 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:21.751 20:45:12 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:21.751 20:45:12 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:21.751 20:45:12 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:21.751 20:45:12 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:21.751 20:45:12 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:21.751 20:45:12 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:21.751 20:45:12 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:21.751 20:45:12 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:21.751 20:45:12 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:21.751 20:45:12 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:21.751 20:45:12 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:21.751 20:45:12 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:21.751 20:45:12 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:21.751 20:45:12 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:21.751 20:45:12 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:21.751 20:45:12 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:21.751 20:45:12 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:21.751 20:45:12 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:21.751 20:45:12 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:21.751 20:45:12 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:21.751 20:45:12 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:21.752 20:45:12 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:21.752 20:45:12 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:21.752 20:45:12 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:21.752 20:45:12 json_config -- paths/export.sh@5 -- # export PATH 00:04:21.752 20:45:12 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:21.752 20:45:12 json_config -- nvmf/common.sh@51 -- # : 0 00:04:21.752 20:45:12 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:21.752 20:45:12 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:21.752 20:45:12 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:21.752 20:45:12 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:21.752 20:45:12 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:21.752 20:45:12 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:21.752 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:21.752 20:45:12 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:21.752 20:45:12 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:21.752 20:45:12 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:21.752 20:45:12 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:21.752 20:45:12 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:21.752 20:45:12 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:21.752 20:45:12 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:21.752 20:45:12 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:21.752 20:45:12 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:21.752 20:45:12 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:21.752 20:45:12 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:21.752 20:45:12 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:21.752 20:45:12 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:21.752 20:45:12 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:21.752 20:45:12 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:21.752 20:45:12 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:21.752 20:45:12 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:21.752 20:45:12 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:21.752 20:45:12 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:21.752 INFO: JSON configuration test init 00:04:21.752 20:45:12 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:21.752 20:45:12 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:21.752 20:45:12 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:21.752 20:45:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:21.752 20:45:12 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:21.752 20:45:12 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:21.752 20:45:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:21.752 20:45:12 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:21.752 20:45:12 json_config -- json_config/common.sh@9 -- # local app=target 00:04:21.752 20:45:12 json_config -- json_config/common.sh@10 -- # shift 00:04:21.752 20:45:12 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:21.752 20:45:12 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:21.752 20:45:12 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:21.752 20:45:12 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:21.752 20:45:12 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:21.752 20:45:12 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3845926 00:04:21.752 20:45:12 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:21.752 20:45:12 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:21.752 Waiting for target to run... 
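The trace above shows json_config_test_start_app launching spdk_tgt in the background and then waitforlisten polling its RPC socket. A minimal stand-alone sketch of that start-and-poll pattern; since no SPDK binary is assumed present here, a delayed file creation simulates the target's listen socket, and the retry bound mirrors the max_retries=100 seen in the trace:

```shell
#!/bin/bash
# Stand-in for spdk_tgt: after a short delay, create the "listen socket"
# (simulated by a plain file) that the poll loop below waits for.
sock=$(mktemp -u /tmp/spdk_tgt.sock.XXXXXX)
( sleep 0.3; : > "$sock" ) &

# waitforlisten-style loop: bounded retries with a short sleep between tries.
ready=0
for i in $(seq 1 100); do
    if [ -e "$sock" ]; then
        ready=1
        break
    fi
    sleep 0.05
done
echo "ready=$ready"
rm -f "$sock"
```

On a real run the per-iteration probe would be an RPC round-trip (e.g. `rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods`) rather than a file-existence test.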
00:04:21.752 20:45:12 json_config -- json_config/common.sh@25 -- # waitforlisten 3845926 /var/tmp/spdk_tgt.sock 00:04:21.752 20:45:12 json_config -- common/autotest_common.sh@835 -- # '[' -z 3845926 ']' 00:04:21.752 20:45:12 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:21.752 20:45:12 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:21.752 20:45:12 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:21.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:21.752 20:45:12 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:21.752 20:45:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:22.009 [2024-11-26 20:45:12.709114] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:04:22.009 [2024-11-26 20:45:12.709209] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3845926 ] 00:04:22.574 [2024-11-26 20:45:13.241094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:22.574 [2024-11-26 20:45:13.300513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.830 20:45:13 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:22.830 20:45:13 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:22.830 20:45:13 json_config -- json_config/common.sh@26 -- # echo '' 00:04:22.830 00:04:22.830 20:45:13 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:22.830 20:45:13 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:22.830 20:45:13 json_config -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:04:22.830 20:45:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:22.830 20:45:13 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:22.830 20:45:13 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:22.830 20:45:13 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:22.830 20:45:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:22.830 20:45:13 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:22.830 20:45:13 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:22.830 20:45:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:26.133 20:45:16 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:26.133 20:45:16 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:26.133 20:45:16 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:26.133 20:45:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.133 20:45:16 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:26.133 20:45:16 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:26.133 20:45:16 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:26.133 20:45:16 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:26.133 20:45:16 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:26.133 20:45:16 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:26.133 20:45:16 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:26.133 20:45:16 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:26.391 20:45:17 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:26.391 20:45:17 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:26.391 20:45:17 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:26.391 20:45:17 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:26.391 20:45:17 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:26.391 20:45:17 json_config -- json_config/json_config.sh@54 -- # sort 00:04:26.391 20:45:17 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:26.391 20:45:17 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:26.391 20:45:17 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:26.391 20:45:17 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:26.391 20:45:17 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:26.391 20:45:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.391 20:45:17 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:26.391 20:45:17 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:26.391 20:45:17 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:26.391 20:45:17 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:26.391 20:45:17 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:26.391 20:45:17 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:26.391 20:45:17 json_config -- 
json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:26.391 20:45:17 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:26.391 20:45:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.391 20:45:17 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:26.391 20:45:17 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:26.391 20:45:17 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:26.391 20:45:17 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:26.391 20:45:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:26.649 MallocForNvmf0 00:04:26.649 20:45:17 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:26.649 20:45:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:26.906 MallocForNvmf1 00:04:26.906 20:45:17 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:26.906 20:45:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:27.164 [2024-11-26 20:45:18.011313] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:27.164 20:45:18 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:27.164 20:45:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:27.422 20:45:18 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:27.422 20:45:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:27.679 20:45:18 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:27.679 20:45:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:27.937 20:45:18 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:27.937 20:45:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:28.195 [2024-11-26 20:45:19.082777] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:28.195 20:45:19 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:28.195 20:45:19 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:28.195 20:45:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.195 20:45:19 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:28.195 20:45:19 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:28.195 20:45:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.451 20:45:19 json_config -- json_config/json_config.sh@302 -- # 
[[ 0 -eq 1 ]] 00:04:28.451 20:45:19 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:28.451 20:45:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:28.708 MallocBdevForConfigChangeCheck 00:04:28.708 20:45:19 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:28.708 20:45:19 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:28.708 20:45:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.708 20:45:19 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:28.708 20:45:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:28.965 20:45:19 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:04:28.965 INFO: shutting down applications... 
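Collected from the tgt_rpc calls traced above, the NVMe-oF target configuration boils down to the following rpc.py sequence. This is a dry run that only prints the commands, since no live target is assumed; against a running spdk_tgt each line would be executed as-is:

```shell
#!/bin/bash
# The rpc.py sequence the json_config test drives, as traced in the log:
# two malloc bdevs, a TCP transport, one subsystem with two namespaces,
# and a listener on 127.0.0.1:4420.
rpc="scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
cmds=(
    "bdev_malloc_create 8 512 --name MallocForNvmf0"
    "bdev_malloc_create 4 1024 --name MallocForNvmf1"
    "nvmf_create_transport -t tcp -u 8192 -c 0"
    "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001"
    "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0"
    "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1"
    "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420"
)
for c in "${cmds[@]}"; do
    echo "$rpc $c"      # dry run: print instead of execute
done
```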
00:04:28.965 20:45:19 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]]
00:04:28.965 20:45:19 json_config -- json_config/json_config.sh@375 -- # json_config_clear target
00:04:28.966 20:45:19 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]]
00:04:28.966 20:45:19 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
00:04:30.930 Calling clear_iscsi_subsystem
00:04:30.930 Calling clear_nvmf_subsystem
00:04:30.930 Calling clear_nbd_subsystem
00:04:30.930 Calling clear_ublk_subsystem
00:04:30.930 Calling clear_vhost_blk_subsystem
00:04:30.930 Calling clear_vhost_scsi_subsystem
00:04:30.930 Calling clear_bdev_subsystem
00:04:30.930 20:45:21 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py
00:04:30.930 20:45:21 json_config -- json_config/json_config.sh@350 -- # count=100
00:04:30.930 20:45:21 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']'
00:04:30.930 20:45:21 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:04:30.930 20:45:21 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters
00:04:30.930 20:45:21 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty
00:04:31.188 20:45:21 json_config -- json_config/json_config.sh@352 -- # break
00:04:31.188 20:45:21 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']'
00:04:31.188 20:45:21 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target
00:04:31.188 20:45:21 json_config -- json_config/common.sh@31 -- # local app=target
00:04:31.188 20:45:21 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:04:31.188 20:45:21 json_config -- json_config/common.sh@35 -- # [[ -n 3845926 ]]
00:04:31.188 20:45:21 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3845926
00:04:31.188 20:45:21 json_config -- json_config/common.sh@40 -- # (( i = 0 ))
00:04:31.188 20:45:21 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:04:31.188 20:45:21 json_config -- json_config/common.sh@41 -- # kill -0 3845926
00:04:31.188 20:45:21 json_config -- json_config/common.sh@45 -- # sleep 0.5
00:04:31.755 20:45:22 json_config -- json_config/common.sh@40 -- # (( i++ ))
00:04:31.755 20:45:22 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:04:31.755 20:45:22 json_config -- json_config/common.sh@41 -- # kill -0 3845926
00:04:31.755 20:45:22 json_config -- json_config/common.sh@42 -- # app_pid["$app"]=
00:04:31.755 20:45:22 json_config -- json_config/common.sh@43 -- # break
00:04:31.755 20:45:22 json_config -- json_config/common.sh@48 -- # [[ -n '' ]]
00:04:31.755 20:45:22 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:04:31.755 SPDK target shutdown done
00:04:31.755 20:45:22 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...'
00:04:31.755 INFO: relaunching applications...
00:04:31.755 20:45:22 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:31.755 20:45:22 json_config -- json_config/common.sh@9 -- # local app=target 00:04:31.755 20:45:22 json_config -- json_config/common.sh@10 -- # shift 00:04:31.755 20:45:22 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:31.755 20:45:22 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:31.755 20:45:22 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:31.755 20:45:22 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:31.755 20:45:22 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:31.755 20:45:22 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3847241 00:04:31.755 20:45:22 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:31.755 20:45:22 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:31.755 Waiting for target to run... 00:04:31.755 20:45:22 json_config -- json_config/common.sh@25 -- # waitforlisten 3847241 /var/tmp/spdk_tgt.sock 00:04:31.755 20:45:22 json_config -- common/autotest_common.sh@835 -- # '[' -z 3847241 ']' 00:04:31.755 20:45:22 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:31.755 20:45:22 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:31.755 20:45:22 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:31.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:04:31.755 20:45:22 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:31.755 20:45:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.755 [2024-11-26 20:45:22.503905] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:04:31.755 [2024-11-26 20:45:22.504020] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3847241 ] 00:04:32.013 [2024-11-26 20:45:22.871643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.013 [2024-11-26 20:45:22.919845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.293 [2024-11-26 20:45:25.984957] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:35.293 [2024-11-26 20:45:26.017487] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:35.293 20:45:26 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:35.293 20:45:26 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:35.293 20:45:26 json_config -- json_config/common.sh@26 -- # echo '' 00:04:35.293 00:04:35.293 20:45:26 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:35.293 20:45:26 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:35.293 INFO: Checking if target configuration is the same... 
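The check that follows dumps the live configuration with save_config, normalizes both JSON documents via config_filter.py -method sort, and diffs the results. The same idea in a self-contained sketch, with python3's `json.tool --sort-keys` as a stand-in normalizer (config_filter.py is not assumed to be available) and two hypothetical inline configs in place of the real spdk_tgt_config.json:

```shell
#!/bin/bash
# json_diff.sh-style comparison: normalize both JSON documents so that
# key order cannot cause a spurious mismatch, then diff the copies.
cfg_a='{"subsystems": [{"subsystem": "bdev"}], "b": 1}'
cfg_b='{"b": 1, "subsystems": [{"subsystem": "bdev"}]}'

tmp1=$(mktemp /tmp/cfg1.XXXXXX)
tmp2=$(mktemp /tmp/cfg2.XXXXXX)
printf '%s' "$cfg_a" | python3 -m json.tool --sort-keys > "$tmp1"
printf '%s' "$cfg_b" | python3 -m json.tool --sort-keys > "$tmp2"

if diff -u "$tmp1" "$tmp2" > /dev/null; then
    echo 'INFO: JSON config files are the same'
else
    echo 'INFO: JSON config files differ'
fi
rm -f "$tmp1" "$tmp2"
```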
00:04:35.293 20:45:26 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:35.293 20:45:26 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:35.293 20:45:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:35.293 + '[' 2 -ne 2 ']' 00:04:35.293 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:35.293 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:35.293 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:35.293 +++ basename /dev/fd/62 00:04:35.293 ++ mktemp /tmp/62.XXX 00:04:35.293 + tmp_file_1=/tmp/62.Bbh 00:04:35.293 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:35.293 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:35.293 + tmp_file_2=/tmp/spdk_tgt_config.json.knT 00:04:35.293 + ret=0 00:04:35.293 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:35.549 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:35.807 + diff -u /tmp/62.Bbh /tmp/spdk_tgt_config.json.knT 00:04:35.807 + echo 'INFO: JSON config files are the same' 00:04:35.807 INFO: JSON config files are the same 00:04:35.807 + rm /tmp/62.Bbh /tmp/spdk_tgt_config.json.knT 00:04:35.807 + exit 0 00:04:35.807 20:45:26 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:35.807 20:45:26 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:35.807 INFO: changing configuration and checking if this can be detected... 
00:04:35.807 20:45:26 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:35.807 20:45:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:36.066 20:45:26 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:36.066 20:45:26 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:36.066 20:45:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:36.066 + '[' 2 -ne 2 ']' 00:04:36.066 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:36.066 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:36.066 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:36.066 +++ basename /dev/fd/62 00:04:36.066 ++ mktemp /tmp/62.XXX 00:04:36.066 + tmp_file_1=/tmp/62.qkc 00:04:36.066 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:36.066 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:36.066 + tmp_file_2=/tmp/spdk_tgt_config.json.gff 00:04:36.066 + ret=0 00:04:36.066 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:36.325 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:36.325 + diff -u /tmp/62.qkc /tmp/spdk_tgt_config.json.gff 00:04:36.325 + ret=1 00:04:36.325 + echo '=== Start of file: /tmp/62.qkc ===' 00:04:36.325 + cat /tmp/62.qkc 00:04:36.325 + echo '=== End of file: /tmp/62.qkc ===' 00:04:36.325 + echo '' 00:04:36.325 + echo '=== Start of file: /tmp/spdk_tgt_config.json.gff ===' 00:04:36.325 + cat /tmp/spdk_tgt_config.json.gff 00:04:36.325 + echo '=== End of file: /tmp/spdk_tgt_config.json.gff ===' 00:04:36.325 + echo '' 00:04:36.325 + rm /tmp/62.qkc /tmp/spdk_tgt_config.json.gff 00:04:36.325 + exit 1 00:04:36.325 20:45:27 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:36.325 INFO: configuration change detected. 
00:04:36.583 20:45:27 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:36.583 20:45:27 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:36.583 20:45:27 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:36.583 20:45:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:36.583 20:45:27 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:36.583 20:45:27 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:36.583 20:45:27 json_config -- json_config/json_config.sh@324 -- # [[ -n 3847241 ]] 00:04:36.583 20:45:27 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:36.583 20:45:27 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:36.583 20:45:27 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:36.583 20:45:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:36.583 20:45:27 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:36.583 20:45:27 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:36.583 20:45:27 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:36.583 20:45:27 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:36.583 20:45:27 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:36.583 20:45:27 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:36.583 20:45:27 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:36.583 20:45:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:36.583 20:45:27 json_config -- json_config/json_config.sh@330 -- # killprocess 3847241 00:04:36.583 20:45:27 json_config -- common/autotest_common.sh@954 -- # '[' -z 3847241 ']' 00:04:36.583 20:45:27 json_config -- common/autotest_common.sh@958 -- # kill -0 
3847241 00:04:36.583 20:45:27 json_config -- common/autotest_common.sh@959 -- # uname 00:04:36.583 20:45:27 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:36.583 20:45:27 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3847241 00:04:36.583 20:45:27 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:36.583 20:45:27 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:36.583 20:45:27 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3847241' 00:04:36.583 killing process with pid 3847241 00:04:36.583 20:45:27 json_config -- common/autotest_common.sh@973 -- # kill 3847241 00:04:36.583 20:45:27 json_config -- common/autotest_common.sh@978 -- # wait 3847241 00:04:38.481 20:45:28 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:38.481 20:45:28 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:38.481 20:45:28 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:38.481 20:45:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:38.481 20:45:28 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:38.481 20:45:28 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:38.481 INFO: Success 00:04:38.481 00:04:38.481 real 0m16.485s 00:04:38.481 user 0m18.098s 00:04:38.481 sys 0m2.688s 00:04:38.481 20:45:28 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:38.481 20:45:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:38.481 ************************************ 00:04:38.481 END TEST json_config 00:04:38.481 ************************************ 00:04:38.481 20:45:28 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:38.481 20:45:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:38.481 20:45:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.481 20:45:28 -- common/autotest_common.sh@10 -- # set +x 00:04:38.481 ************************************ 00:04:38.481 START TEST json_config_extra_key 00:04:38.481 ************************************ 00:04:38.481 20:45:29 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:38.481 20:45:29 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:38.481 20:45:29 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:04:38.481 20:45:29 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:38.481 20:45:29 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:38.481 20:45:29 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:38.481 20:45:29 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:38.481 20:45:29 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:38.481 20:45:29 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:38.481 20:45:29 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:38.481 20:45:29 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:38.481 20:45:29 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:38.481 20:45:29 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:38.481 20:45:29 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:38.481 20:45:29 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:38.481 20:45:29 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:04:38.481 20:45:29 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:38.481 20:45:29 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:38.481 20:45:29 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:38.481 20:45:29 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:38.481 20:45:29 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:38.481 20:45:29 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:38.481 20:45:29 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:38.481 20:45:29 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:38.481 20:45:29 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:38.481 20:45:29 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:38.481 20:45:29 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:38.481 20:45:29 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:38.481 20:45:29 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:38.481 20:45:29 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:38.481 20:45:29 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:38.481 20:45:29 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:38.481 20:45:29 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:38.481 20:45:29 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:38.481 20:45:29 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:38.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.481 --rc genhtml_branch_coverage=1 00:04:38.481 --rc genhtml_function_coverage=1 00:04:38.481 --rc genhtml_legend=1 00:04:38.481 --rc geninfo_all_blocks=1 
00:04:38.481 --rc geninfo_unexecuted_blocks=1 00:04:38.481 00:04:38.481 ' 00:04:38.481 20:45:29 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:38.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.481 --rc genhtml_branch_coverage=1 00:04:38.481 --rc genhtml_function_coverage=1 00:04:38.481 --rc genhtml_legend=1 00:04:38.481 --rc geninfo_all_blocks=1 00:04:38.481 --rc geninfo_unexecuted_blocks=1 00:04:38.481 00:04:38.481 ' 00:04:38.481 20:45:29 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:38.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.481 --rc genhtml_branch_coverage=1 00:04:38.481 --rc genhtml_function_coverage=1 00:04:38.481 --rc genhtml_legend=1 00:04:38.481 --rc geninfo_all_blocks=1 00:04:38.481 --rc geninfo_unexecuted_blocks=1 00:04:38.481 00:04:38.481 ' 00:04:38.481 20:45:29 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:38.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.481 --rc genhtml_branch_coverage=1 00:04:38.481 --rc genhtml_function_coverage=1 00:04:38.481 --rc genhtml_legend=1 00:04:38.481 --rc geninfo_all_blocks=1 00:04:38.481 --rc geninfo_unexecuted_blocks=1 00:04:38.481 00:04:38.481 ' 00:04:38.481 20:45:29 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:38.481 20:45:29 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:38.481 20:45:29 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:38.481 20:45:29 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:38.481 20:45:29 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:38.481 20:45:29 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:38.481 20:45:29 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:04:38.481 20:45:29 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:38.481 20:45:29 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:38.481 20:45:29 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:38.481 20:45:29 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:38.481 20:45:29 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:38.481 20:45:29 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:38.481 20:45:29 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:38.481 20:45:29 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:38.481 20:45:29 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:38.481 20:45:29 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:38.481 20:45:29 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:38.481 20:45:29 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:38.481 20:45:29 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:38.481 20:45:29 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:38.482 20:45:29 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:38.482 20:45:29 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:38.482 20:45:29 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:38.482 20:45:29 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:38.482 20:45:29 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:38.482 20:45:29 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:38.482 20:45:29 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:38.482 20:45:29 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:38.482 20:45:29 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:38.482 20:45:29 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:38.482 20:45:29 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:38.482 20:45:29 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:38.482 20:45:29 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:38.482 20:45:29 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:38.482 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:38.482 20:45:29 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:38.482 20:45:29 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:38.482 20:45:29 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:38.482 20:45:29 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:38.482 20:45:29 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:38.482 20:45:29 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:38.482 20:45:29 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:38.482 20:45:29 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:38.482 20:45:29 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:38.482 20:45:29 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:38.482 20:45:29 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:38.482 20:45:29 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:38.482 20:45:29 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:38.482 20:45:29 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:38.482 INFO: launching applications... 00:04:38.482 20:45:29 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:38.482 20:45:29 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:38.482 20:45:29 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:38.482 20:45:29 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:38.482 20:45:29 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:38.482 20:45:29 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:38.482 20:45:29 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:38.482 20:45:29 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:38.482 20:45:29 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3848164 00:04:38.482 20:45:29 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:38.482 20:45:29 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:38.482 Waiting for target to run... 
00:04:38.482 20:45:29 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3848164 /var/tmp/spdk_tgt.sock 00:04:38.482 20:45:29 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 3848164 ']' 00:04:38.482 20:45:29 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:38.482 20:45:29 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:38.482 20:45:29 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:38.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:38.482 20:45:29 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:38.482 20:45:29 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:38.482 [2024-11-26 20:45:29.223593] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:04:38.482 [2024-11-26 20:45:29.223723] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3848164 ] 00:04:38.739 [2024-11-26 20:45:29.584066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.739 [2024-11-26 20:45:29.632294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.305 20:45:30 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:39.305 20:45:30 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:39.305 20:45:30 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:39.305 00:04:39.305 20:45:30 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:04:39.305 INFO: shutting down applications... 00:04:39.305 20:45:30 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:39.305 20:45:30 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:39.305 20:45:30 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:39.305 20:45:30 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3848164 ]] 00:04:39.305 20:45:30 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3848164 00:04:39.305 20:45:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:39.305 20:45:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:39.305 20:45:30 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3848164 00:04:39.305 20:45:30 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:39.871 20:45:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:39.871 20:45:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:39.871 20:45:30 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3848164 00:04:39.871 20:45:30 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:39.871 20:45:30 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:39.871 20:45:30 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:39.871 20:45:30 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:39.871 SPDK target shutdown done 00:04:39.871 20:45:30 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:39.871 Success 00:04:39.871 00:04:39.871 real 0m1.716s 00:04:39.871 user 0m1.764s 00:04:39.871 sys 0m0.453s 00:04:39.871 20:45:30 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:39.871 20:45:30 json_config_extra_key -- common/autotest_common.sh@10 -- # set 
+x 00:04:39.871 ************************************ 00:04:39.871 END TEST json_config_extra_key 00:04:39.871 ************************************ 00:04:39.871 20:45:30 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:39.871 20:45:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:39.871 20:45:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.871 20:45:30 -- common/autotest_common.sh@10 -- # set +x 00:04:39.871 ************************************ 00:04:39.871 START TEST alias_rpc 00:04:39.871 ************************************ 00:04:39.871 20:45:30 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:40.130 * Looking for test storage... 00:04:40.130 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:40.130 20:45:30 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:40.130 20:45:30 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:40.130 20:45:30 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:40.130 20:45:30 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:40.130 20:45:30 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:40.130 20:45:30 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:40.130 20:45:30 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:40.130 20:45:30 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:40.130 20:45:30 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:40.130 20:45:30 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:40.130 20:45:30 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:40.130 20:45:30 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:40.130 20:45:30 alias_rpc -- scripts/common.sh@340 -- # 
ver1_l=2 00:04:40.130 20:45:30 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:40.130 20:45:30 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:40.130 20:45:30 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:40.130 20:45:30 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:40.130 20:45:30 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:40.130 20:45:30 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:40.130 20:45:30 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:40.130 20:45:30 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:40.130 20:45:30 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:40.130 20:45:30 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:40.130 20:45:30 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:40.130 20:45:30 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:40.130 20:45:30 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:40.130 20:45:30 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:40.130 20:45:30 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:40.130 20:45:30 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:40.130 20:45:30 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:40.130 20:45:30 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:40.130 20:45:30 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:40.130 20:45:30 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:40.130 20:45:30 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:40.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.130 --rc genhtml_branch_coverage=1 00:04:40.130 --rc genhtml_function_coverage=1 00:04:40.130 --rc genhtml_legend=1 00:04:40.130 --rc geninfo_all_blocks=1 00:04:40.130 --rc geninfo_unexecuted_blocks=1 00:04:40.130 
00:04:40.130 ' 00:04:40.130 20:45:30 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:40.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.130 --rc genhtml_branch_coverage=1 00:04:40.130 --rc genhtml_function_coverage=1 00:04:40.130 --rc genhtml_legend=1 00:04:40.130 --rc geninfo_all_blocks=1 00:04:40.130 --rc geninfo_unexecuted_blocks=1 00:04:40.130 00:04:40.130 ' 00:04:40.130 20:45:30 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:40.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.130 --rc genhtml_branch_coverage=1 00:04:40.130 --rc genhtml_function_coverage=1 00:04:40.130 --rc genhtml_legend=1 00:04:40.130 --rc geninfo_all_blocks=1 00:04:40.130 --rc geninfo_unexecuted_blocks=1 00:04:40.130 00:04:40.130 ' 00:04:40.130 20:45:30 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:40.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.130 --rc genhtml_branch_coverage=1 00:04:40.130 --rc genhtml_function_coverage=1 00:04:40.130 --rc genhtml_legend=1 00:04:40.130 --rc geninfo_all_blocks=1 00:04:40.130 --rc geninfo_unexecuted_blocks=1 00:04:40.130 00:04:40.130 ' 00:04:40.130 20:45:30 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:40.130 20:45:30 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3848365 00:04:40.130 20:45:30 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:40.130 20:45:30 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3848365 00:04:40.130 20:45:30 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 3848365 ']' 00:04:40.130 20:45:30 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:40.130 20:45:30 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:40.130 20:45:30 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:40.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:40.130 20:45:30 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:40.130 20:45:30 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.130 [2024-11-26 20:45:30.984296] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:04:40.130 [2024-11-26 20:45:30.984389] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3848365 ] 00:04:40.130 [2024-11-26 20:45:31.049500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.389 [2024-11-26 20:45:31.112734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.648 20:45:31 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:40.648 20:45:31 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:40.648 20:45:31 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:40.906 20:45:31 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3848365 00:04:40.906 20:45:31 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 3848365 ']' 00:04:40.906 20:45:31 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 3848365 00:04:40.906 20:45:31 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:40.906 20:45:31 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:40.906 20:45:31 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3848365 00:04:40.906 20:45:31 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:40.906 20:45:31 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:40.906 
20:45:31 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3848365' 00:04:40.906 killing process with pid 3848365 00:04:40.906 20:45:31 alias_rpc -- common/autotest_common.sh@973 -- # kill 3848365 00:04:40.906 20:45:31 alias_rpc -- common/autotest_common.sh@978 -- # wait 3848365 00:04:41.473 00:04:41.473 real 0m1.381s 00:04:41.473 user 0m1.482s 00:04:41.473 sys 0m0.450s 00:04:41.473 20:45:32 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.473 20:45:32 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.473 ************************************ 00:04:41.473 END TEST alias_rpc 00:04:41.473 ************************************ 00:04:41.473 20:45:32 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:41.473 20:45:32 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:41.473 20:45:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.473 20:45:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.473 20:45:32 -- common/autotest_common.sh@10 -- # set +x 00:04:41.473 ************************************ 00:04:41.473 START TEST spdkcli_tcp 00:04:41.473 ************************************ 00:04:41.473 20:45:32 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:41.473 * Looking for test storage... 
00:04:41.473 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:41.473 20:45:32 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:41.473 20:45:32 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:41.473 20:45:32 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:41.473 20:45:32 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:41.473 20:45:32 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:41.473 20:45:32 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:41.473 20:45:32 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:41.473 20:45:32 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:41.473 20:45:32 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:41.473 20:45:32 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:41.473 20:45:32 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:41.473 20:45:32 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:41.473 20:45:32 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:41.473 20:45:32 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:41.473 20:45:32 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:41.473 20:45:32 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:41.473 20:45:32 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:41.473 20:45:32 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:41.473 20:45:32 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:41.473 20:45:32 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:41.473 20:45:32 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:41.473 20:45:32 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:41.473 20:45:32 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:41.473 20:45:32 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:41.473 20:45:32 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:41.473 20:45:32 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:41.473 20:45:32 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:41.473 20:45:32 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:41.473 20:45:32 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:41.473 20:45:32 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:41.473 20:45:32 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:41.473 20:45:32 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:41.473 20:45:32 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:41.473 20:45:32 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:41.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.473 --rc genhtml_branch_coverage=1 00:04:41.473 --rc genhtml_function_coverage=1 00:04:41.473 --rc genhtml_legend=1 00:04:41.473 --rc geninfo_all_blocks=1 00:04:41.473 --rc geninfo_unexecuted_blocks=1 00:04:41.473 00:04:41.473 ' 00:04:41.473 20:45:32 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:41.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.473 --rc genhtml_branch_coverage=1 00:04:41.473 --rc genhtml_function_coverage=1 00:04:41.473 --rc genhtml_legend=1 00:04:41.473 --rc geninfo_all_blocks=1 00:04:41.473 --rc geninfo_unexecuted_blocks=1 00:04:41.473 00:04:41.473 ' 00:04:41.473 20:45:32 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:41.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.473 --rc genhtml_branch_coverage=1 00:04:41.473 --rc genhtml_function_coverage=1 00:04:41.473 --rc genhtml_legend=1 00:04:41.473 --rc geninfo_all_blocks=1 00:04:41.473 --rc geninfo_unexecuted_blocks=1 00:04:41.473 00:04:41.473 ' 00:04:41.473 20:45:32 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:41.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.474 --rc genhtml_branch_coverage=1 00:04:41.474 --rc genhtml_function_coverage=1 00:04:41.474 --rc genhtml_legend=1 00:04:41.474 --rc geninfo_all_blocks=1 00:04:41.474 --rc geninfo_unexecuted_blocks=1 00:04:41.474 00:04:41.474 ' 00:04:41.474 20:45:32 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:41.474 20:45:32 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:41.474 20:45:32 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:41.474 20:45:32 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:41.474 20:45:32 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:41.474 20:45:32 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:41.474 20:45:32 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:41.474 20:45:32 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:41.474 20:45:32 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:41.474 20:45:32 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3848565 00:04:41.474 20:45:32 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:41.474 20:45:32 spdkcli_tcp -- 
spdkcli/tcp.sh@27 -- # waitforlisten 3848565 00:04:41.474 20:45:32 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 3848565 ']' 00:04:41.474 20:45:32 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:41.474 20:45:32 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:41.474 20:45:32 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:41.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:41.474 20:45:32 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:41.474 20:45:32 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:41.733 [2024-11-26 20:45:32.414280] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:04:41.733 [2024-11-26 20:45:32.414364] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3848565 ] 00:04:41.733 [2024-11-26 20:45:32.481355] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:41.733 [2024-11-26 20:45:32.544079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:41.733 [2024-11-26 20:45:32.544085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.991 20:45:32 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:41.992 20:45:32 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:41.992 20:45:32 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3848690 00:04:41.992 20:45:32 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:41.992 20:45:32 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat 
TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:42.250 [ 00:04:42.250 "bdev_malloc_delete", 00:04:42.250 "bdev_malloc_create", 00:04:42.250 "bdev_null_resize", 00:04:42.250 "bdev_null_delete", 00:04:42.251 "bdev_null_create", 00:04:42.251 "bdev_nvme_cuse_unregister", 00:04:42.251 "bdev_nvme_cuse_register", 00:04:42.251 "bdev_opal_new_user", 00:04:42.251 "bdev_opal_set_lock_state", 00:04:42.251 "bdev_opal_delete", 00:04:42.251 "bdev_opal_get_info", 00:04:42.251 "bdev_opal_create", 00:04:42.251 "bdev_nvme_opal_revert", 00:04:42.251 "bdev_nvme_opal_init", 00:04:42.251 "bdev_nvme_send_cmd", 00:04:42.251 "bdev_nvme_set_keys", 00:04:42.251 "bdev_nvme_get_path_iostat", 00:04:42.251 "bdev_nvme_get_mdns_discovery_info", 00:04:42.251 "bdev_nvme_stop_mdns_discovery", 00:04:42.251 "bdev_nvme_start_mdns_discovery", 00:04:42.251 "bdev_nvme_set_multipath_policy", 00:04:42.251 "bdev_nvme_set_preferred_path", 00:04:42.251 "bdev_nvme_get_io_paths", 00:04:42.251 "bdev_nvme_remove_error_injection", 00:04:42.251 "bdev_nvme_add_error_injection", 00:04:42.251 "bdev_nvme_get_discovery_info", 00:04:42.251 "bdev_nvme_stop_discovery", 00:04:42.251 "bdev_nvme_start_discovery", 00:04:42.251 "bdev_nvme_get_controller_health_info", 00:04:42.251 "bdev_nvme_disable_controller", 00:04:42.251 "bdev_nvme_enable_controller", 00:04:42.251 "bdev_nvme_reset_controller", 00:04:42.251 "bdev_nvme_get_transport_statistics", 00:04:42.251 "bdev_nvme_apply_firmware", 00:04:42.251 "bdev_nvme_detach_controller", 00:04:42.251 "bdev_nvme_get_controllers", 00:04:42.251 "bdev_nvme_attach_controller", 00:04:42.251 "bdev_nvme_set_hotplug", 00:04:42.251 "bdev_nvme_set_options", 00:04:42.251 "bdev_passthru_delete", 00:04:42.251 "bdev_passthru_create", 00:04:42.251 "bdev_lvol_set_parent_bdev", 00:04:42.251 "bdev_lvol_set_parent", 00:04:42.251 "bdev_lvol_check_shallow_copy", 00:04:42.251 "bdev_lvol_start_shallow_copy", 00:04:42.251 "bdev_lvol_grow_lvstore", 00:04:42.251 "bdev_lvol_get_lvols", 00:04:42.251 
"bdev_lvol_get_lvstores", 00:04:42.251 "bdev_lvol_delete", 00:04:42.251 "bdev_lvol_set_read_only", 00:04:42.251 "bdev_lvol_resize", 00:04:42.251 "bdev_lvol_decouple_parent", 00:04:42.251 "bdev_lvol_inflate", 00:04:42.251 "bdev_lvol_rename", 00:04:42.251 "bdev_lvol_clone_bdev", 00:04:42.251 "bdev_lvol_clone", 00:04:42.251 "bdev_lvol_snapshot", 00:04:42.251 "bdev_lvol_create", 00:04:42.251 "bdev_lvol_delete_lvstore", 00:04:42.251 "bdev_lvol_rename_lvstore", 00:04:42.251 "bdev_lvol_create_lvstore", 00:04:42.251 "bdev_raid_set_options", 00:04:42.251 "bdev_raid_remove_base_bdev", 00:04:42.251 "bdev_raid_add_base_bdev", 00:04:42.251 "bdev_raid_delete", 00:04:42.251 "bdev_raid_create", 00:04:42.251 "bdev_raid_get_bdevs", 00:04:42.251 "bdev_error_inject_error", 00:04:42.251 "bdev_error_delete", 00:04:42.251 "bdev_error_create", 00:04:42.251 "bdev_split_delete", 00:04:42.251 "bdev_split_create", 00:04:42.251 "bdev_delay_delete", 00:04:42.251 "bdev_delay_create", 00:04:42.251 "bdev_delay_update_latency", 00:04:42.251 "bdev_zone_block_delete", 00:04:42.251 "bdev_zone_block_create", 00:04:42.251 "blobfs_create", 00:04:42.251 "blobfs_detect", 00:04:42.251 "blobfs_set_cache_size", 00:04:42.251 "bdev_aio_delete", 00:04:42.251 "bdev_aio_rescan", 00:04:42.251 "bdev_aio_create", 00:04:42.251 "bdev_ftl_set_property", 00:04:42.251 "bdev_ftl_get_properties", 00:04:42.251 "bdev_ftl_get_stats", 00:04:42.251 "bdev_ftl_unmap", 00:04:42.251 "bdev_ftl_unload", 00:04:42.251 "bdev_ftl_delete", 00:04:42.251 "bdev_ftl_load", 00:04:42.251 "bdev_ftl_create", 00:04:42.251 "bdev_virtio_attach_controller", 00:04:42.251 "bdev_virtio_scsi_get_devices", 00:04:42.251 "bdev_virtio_detach_controller", 00:04:42.251 "bdev_virtio_blk_set_hotplug", 00:04:42.251 "bdev_iscsi_delete", 00:04:42.251 "bdev_iscsi_create", 00:04:42.251 "bdev_iscsi_set_options", 00:04:42.251 "accel_error_inject_error", 00:04:42.251 "ioat_scan_accel_module", 00:04:42.251 "dsa_scan_accel_module", 00:04:42.251 "iaa_scan_accel_module", 
00:04:42.251 "vfu_virtio_create_fs_endpoint", 00:04:42.251 "vfu_virtio_create_scsi_endpoint", 00:04:42.251 "vfu_virtio_scsi_remove_target", 00:04:42.251 "vfu_virtio_scsi_add_target", 00:04:42.251 "vfu_virtio_create_blk_endpoint", 00:04:42.251 "vfu_virtio_delete_endpoint", 00:04:42.251 "keyring_file_remove_key", 00:04:42.251 "keyring_file_add_key", 00:04:42.251 "keyring_linux_set_options", 00:04:42.251 "fsdev_aio_delete", 00:04:42.251 "fsdev_aio_create", 00:04:42.251 "iscsi_get_histogram", 00:04:42.251 "iscsi_enable_histogram", 00:04:42.251 "iscsi_set_options", 00:04:42.251 "iscsi_get_auth_groups", 00:04:42.251 "iscsi_auth_group_remove_secret", 00:04:42.251 "iscsi_auth_group_add_secret", 00:04:42.251 "iscsi_delete_auth_group", 00:04:42.251 "iscsi_create_auth_group", 00:04:42.251 "iscsi_set_discovery_auth", 00:04:42.251 "iscsi_get_options", 00:04:42.251 "iscsi_target_node_request_logout", 00:04:42.251 "iscsi_target_node_set_redirect", 00:04:42.251 "iscsi_target_node_set_auth", 00:04:42.251 "iscsi_target_node_add_lun", 00:04:42.251 "iscsi_get_stats", 00:04:42.251 "iscsi_get_connections", 00:04:42.251 "iscsi_portal_group_set_auth", 00:04:42.251 "iscsi_start_portal_group", 00:04:42.251 "iscsi_delete_portal_group", 00:04:42.251 "iscsi_create_portal_group", 00:04:42.251 "iscsi_get_portal_groups", 00:04:42.251 "iscsi_delete_target_node", 00:04:42.251 "iscsi_target_node_remove_pg_ig_maps", 00:04:42.251 "iscsi_target_node_add_pg_ig_maps", 00:04:42.251 "iscsi_create_target_node", 00:04:42.251 "iscsi_get_target_nodes", 00:04:42.251 "iscsi_delete_initiator_group", 00:04:42.251 "iscsi_initiator_group_remove_initiators", 00:04:42.251 "iscsi_initiator_group_add_initiators", 00:04:42.251 "iscsi_create_initiator_group", 00:04:42.251 "iscsi_get_initiator_groups", 00:04:42.251 "nvmf_set_crdt", 00:04:42.251 "nvmf_set_config", 00:04:42.251 "nvmf_set_max_subsystems", 00:04:42.251 "nvmf_stop_mdns_prr", 00:04:42.251 "nvmf_publish_mdns_prr", 00:04:42.251 "nvmf_subsystem_get_listeners", 
00:04:42.251 "nvmf_subsystem_get_qpairs", 00:04:42.251 "nvmf_subsystem_get_controllers", 00:04:42.251 "nvmf_get_stats", 00:04:42.251 "nvmf_get_transports", 00:04:42.251 "nvmf_create_transport", 00:04:42.251 "nvmf_get_targets", 00:04:42.251 "nvmf_delete_target", 00:04:42.251 "nvmf_create_target", 00:04:42.251 "nvmf_subsystem_allow_any_host", 00:04:42.251 "nvmf_subsystem_set_keys", 00:04:42.251 "nvmf_subsystem_remove_host", 00:04:42.251 "nvmf_subsystem_add_host", 00:04:42.251 "nvmf_ns_remove_host", 00:04:42.251 "nvmf_ns_add_host", 00:04:42.251 "nvmf_subsystem_remove_ns", 00:04:42.251 "nvmf_subsystem_set_ns_ana_group", 00:04:42.251 "nvmf_subsystem_add_ns", 00:04:42.251 "nvmf_subsystem_listener_set_ana_state", 00:04:42.251 "nvmf_discovery_get_referrals", 00:04:42.251 "nvmf_discovery_remove_referral", 00:04:42.251 "nvmf_discovery_add_referral", 00:04:42.251 "nvmf_subsystem_remove_listener", 00:04:42.251 "nvmf_subsystem_add_listener", 00:04:42.251 "nvmf_delete_subsystem", 00:04:42.251 "nvmf_create_subsystem", 00:04:42.251 "nvmf_get_subsystems", 00:04:42.251 "env_dpdk_get_mem_stats", 00:04:42.251 "nbd_get_disks", 00:04:42.251 "nbd_stop_disk", 00:04:42.251 "nbd_start_disk", 00:04:42.251 "ublk_recover_disk", 00:04:42.251 "ublk_get_disks", 00:04:42.251 "ublk_stop_disk", 00:04:42.251 "ublk_start_disk", 00:04:42.251 "ublk_destroy_target", 00:04:42.251 "ublk_create_target", 00:04:42.251 "virtio_blk_create_transport", 00:04:42.251 "virtio_blk_get_transports", 00:04:42.251 "vhost_controller_set_coalescing", 00:04:42.251 "vhost_get_controllers", 00:04:42.251 "vhost_delete_controller", 00:04:42.251 "vhost_create_blk_controller", 00:04:42.251 "vhost_scsi_controller_remove_target", 00:04:42.251 "vhost_scsi_controller_add_target", 00:04:42.251 "vhost_start_scsi_controller", 00:04:42.251 "vhost_create_scsi_controller", 00:04:42.251 "thread_set_cpumask", 00:04:42.251 "scheduler_set_options", 00:04:42.251 "framework_get_governor", 00:04:42.251 "framework_get_scheduler", 00:04:42.251 
"framework_set_scheduler", 00:04:42.251 "framework_get_reactors", 00:04:42.251 "thread_get_io_channels", 00:04:42.251 "thread_get_pollers", 00:04:42.251 "thread_get_stats", 00:04:42.251 "framework_monitor_context_switch", 00:04:42.251 "spdk_kill_instance", 00:04:42.251 "log_enable_timestamps", 00:04:42.251 "log_get_flags", 00:04:42.251 "log_clear_flag", 00:04:42.251 "log_set_flag", 00:04:42.251 "log_get_level", 00:04:42.251 "log_set_level", 00:04:42.251 "log_get_print_level", 00:04:42.251 "log_set_print_level", 00:04:42.251 "framework_enable_cpumask_locks", 00:04:42.251 "framework_disable_cpumask_locks", 00:04:42.251 "framework_wait_init", 00:04:42.251 "framework_start_init", 00:04:42.251 "scsi_get_devices", 00:04:42.251 "bdev_get_histogram", 00:04:42.251 "bdev_enable_histogram", 00:04:42.251 "bdev_set_qos_limit", 00:04:42.251 "bdev_set_qd_sampling_period", 00:04:42.251 "bdev_get_bdevs", 00:04:42.251 "bdev_reset_iostat", 00:04:42.251 "bdev_get_iostat", 00:04:42.251 "bdev_examine", 00:04:42.251 "bdev_wait_for_examine", 00:04:42.251 "bdev_set_options", 00:04:42.251 "accel_get_stats", 00:04:42.251 "accel_set_options", 00:04:42.251 "accel_set_driver", 00:04:42.251 "accel_crypto_key_destroy", 00:04:42.251 "accel_crypto_keys_get", 00:04:42.251 "accel_crypto_key_create", 00:04:42.251 "accel_assign_opc", 00:04:42.251 "accel_get_module_info", 00:04:42.251 "accel_get_opc_assignments", 00:04:42.251 "vmd_rescan", 00:04:42.251 "vmd_remove_device", 00:04:42.252 "vmd_enable", 00:04:42.252 "sock_get_default_impl", 00:04:42.252 "sock_set_default_impl", 00:04:42.252 "sock_impl_set_options", 00:04:42.252 "sock_impl_get_options", 00:04:42.252 "iobuf_get_stats", 00:04:42.252 "iobuf_set_options", 00:04:42.252 "keyring_get_keys", 00:04:42.252 "vfu_tgt_set_base_path", 00:04:42.252 "framework_get_pci_devices", 00:04:42.252 "framework_get_config", 00:04:42.252 "framework_get_subsystems", 00:04:42.252 "fsdev_set_opts", 00:04:42.252 "fsdev_get_opts", 00:04:42.252 "trace_get_info", 
00:04:42.252 "trace_get_tpoint_group_mask", 00:04:42.252 "trace_disable_tpoint_group", 00:04:42.252 "trace_enable_tpoint_group", 00:04:42.252 "trace_clear_tpoint_mask", 00:04:42.252 "trace_set_tpoint_mask", 00:04:42.252 "notify_get_notifications", 00:04:42.252 "notify_get_types", 00:04:42.252 "spdk_get_version", 00:04:42.252 "rpc_get_methods" 00:04:42.252 ] 00:04:42.252 20:45:33 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:42.252 20:45:33 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:42.252 20:45:33 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:42.252 20:45:33 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:42.252 20:45:33 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3848565 00:04:42.252 20:45:33 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 3848565 ']' 00:04:42.252 20:45:33 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 3848565 00:04:42.252 20:45:33 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:42.252 20:45:33 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:42.252 20:45:33 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3848565 00:04:42.252 20:45:33 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:42.252 20:45:33 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:42.252 20:45:33 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3848565' 00:04:42.252 killing process with pid 3848565 00:04:42.252 20:45:33 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 3848565 00:04:42.252 20:45:33 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 3848565 00:04:42.818 00:04:42.818 real 0m1.379s 00:04:42.818 user 0m2.462s 00:04:42.818 sys 0m0.494s 00:04:42.818 20:45:33 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:42.818 20:45:33 spdkcli_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:04:42.818 ************************************ 00:04:42.818 END TEST spdkcli_tcp 00:04:42.818 ************************************ 00:04:42.818 20:45:33 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:42.818 20:45:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:42.818 20:45:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.818 20:45:33 -- common/autotest_common.sh@10 -- # set +x 00:04:42.818 ************************************ 00:04:42.818 START TEST dpdk_mem_utility 00:04:42.818 ************************************ 00:04:42.818 20:45:33 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:42.818 * Looking for test storage... 00:04:42.818 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:42.818 20:45:33 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:42.818 20:45:33 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:04:42.818 20:45:33 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:43.078 20:45:33 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:43.078 20:45:33 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:43.078 20:45:33 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:43.078 20:45:33 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:43.078 20:45:33 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:43.078 20:45:33 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:43.078 20:45:33 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:43.078 20:45:33 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 
00:04:43.078 20:45:33 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:43.078 20:45:33 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:43.078 20:45:33 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:43.078 20:45:33 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:43.078 20:45:33 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:43.078 20:45:33 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:43.078 20:45:33 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:43.078 20:45:33 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:43.078 20:45:33 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:43.078 20:45:33 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:43.078 20:45:33 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:43.078 20:45:33 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:43.078 20:45:33 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:43.078 20:45:33 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:43.078 20:45:33 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:43.078 20:45:33 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:43.078 20:45:33 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:43.078 20:45:33 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:43.078 20:45:33 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:43.078 20:45:33 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:43.078 20:45:33 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:43.078 20:45:33 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:43.078 20:45:33 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 
00:04:43.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.078 --rc genhtml_branch_coverage=1 00:04:43.078 --rc genhtml_function_coverage=1 00:04:43.078 --rc genhtml_legend=1 00:04:43.078 --rc geninfo_all_blocks=1 00:04:43.078 --rc geninfo_unexecuted_blocks=1 00:04:43.078 00:04:43.078 ' 00:04:43.078 20:45:33 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:43.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.078 --rc genhtml_branch_coverage=1 00:04:43.078 --rc genhtml_function_coverage=1 00:04:43.078 --rc genhtml_legend=1 00:04:43.078 --rc geninfo_all_blocks=1 00:04:43.078 --rc geninfo_unexecuted_blocks=1 00:04:43.078 00:04:43.078 ' 00:04:43.078 20:45:33 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:43.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.078 --rc genhtml_branch_coverage=1 00:04:43.078 --rc genhtml_function_coverage=1 00:04:43.078 --rc genhtml_legend=1 00:04:43.078 --rc geninfo_all_blocks=1 00:04:43.078 --rc geninfo_unexecuted_blocks=1 00:04:43.078 00:04:43.078 ' 00:04:43.078 20:45:33 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:43.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.078 --rc genhtml_branch_coverage=1 00:04:43.078 --rc genhtml_function_coverage=1 00:04:43.078 --rc genhtml_legend=1 00:04:43.078 --rc geninfo_all_blocks=1 00:04:43.078 --rc geninfo_unexecuted_blocks=1 00:04:43.078 00:04:43.078 ' 00:04:43.078 20:45:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:43.078 20:45:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3848894 00:04:43.078 20:45:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:43.078 20:45:33 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3848894 00:04:43.078 20:45:33 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 3848894 ']' 00:04:43.078 20:45:33 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:43.078 20:45:33 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:43.078 20:45:33 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:43.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:43.078 20:45:33 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:43.078 20:45:33 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:43.078 [2024-11-26 20:45:33.843813] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:04:43.079 [2024-11-26 20:45:33.843903] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3848894 ] 00:04:43.079 [2024-11-26 20:45:33.913436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.079 [2024-11-26 20:45:33.977119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.337 20:45:34 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:43.337 20:45:34 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:43.337 20:45:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:43.337 20:45:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:43.337 20:45:34 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.337 
20:45:34 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:43.337 { 00:04:43.337 "filename": "/tmp/spdk_mem_dump.txt" 00:04:43.337 } 00:04:43.337 20:45:34 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.337 20:45:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:43.596 DPDK memory size 818.000000 MiB in 1 heap(s) 00:04:43.596 1 heaps totaling size 818.000000 MiB 00:04:43.596 size: 818.000000 MiB heap id: 0 00:04:43.596 end heaps---------- 00:04:43.596 9 mempools totaling size 603.782043 MiB 00:04:43.596 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:43.596 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:43.596 size: 100.555481 MiB name: bdev_io_3848894 00:04:43.596 size: 50.003479 MiB name: msgpool_3848894 00:04:43.596 size: 36.509338 MiB name: fsdev_io_3848894 00:04:43.596 size: 21.763794 MiB name: PDU_Pool 00:04:43.596 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:43.596 size: 4.133484 MiB name: evtpool_3848894 00:04:43.596 size: 0.026123 MiB name: Session_Pool 00:04:43.596 end mempools------- 00:04:43.596 6 memzones totaling size 4.142822 MiB 00:04:43.596 size: 1.000366 MiB name: RG_ring_0_3848894 00:04:43.596 size: 1.000366 MiB name: RG_ring_1_3848894 00:04:43.596 size: 1.000366 MiB name: RG_ring_4_3848894 00:04:43.596 size: 1.000366 MiB name: RG_ring_5_3848894 00:04:43.596 size: 0.125366 MiB name: RG_ring_2_3848894 00:04:43.596 size: 0.015991 MiB name: RG_ring_3_3848894 00:04:43.596 end memzones------- 00:04:43.596 20:45:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:43.596 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:04:43.596 list of free elements. 
size: 10.852478 MiB 00:04:43.596 element at address: 0x200019200000 with size: 0.999878 MiB 00:04:43.596 element at address: 0x200019400000 with size: 0.999878 MiB 00:04:43.596 element at address: 0x200000400000 with size: 0.998535 MiB 00:04:43.596 element at address: 0x200032000000 with size: 0.994446 MiB 00:04:43.596 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:43.596 element at address: 0x200012c00000 with size: 0.944275 MiB 00:04:43.596 element at address: 0x200019600000 with size: 0.936584 MiB 00:04:43.596 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:43.596 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:04:43.596 element at address: 0x200000c00000 with size: 0.495422 MiB 00:04:43.596 element at address: 0x20000a600000 with size: 0.490723 MiB 00:04:43.596 element at address: 0x200019800000 with size: 0.485657 MiB 00:04:43.596 element at address: 0x200003e00000 with size: 0.481934 MiB 00:04:43.596 element at address: 0x200028200000 with size: 0.410034 MiB 00:04:43.596 element at address: 0x200000800000 with size: 0.355042 MiB 00:04:43.596 list of standard malloc elements. 
size: 199.218628 MiB 00:04:43.596 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:43.596 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:43.596 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:43.596 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:04:43.596 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:04:43.596 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:43.596 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:04:43.596 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:43.596 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:04:43.596 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:43.596 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:43.596 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:43.596 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:43.597 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:04:43.597 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:43.597 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:43.597 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:04:43.597 element at address: 0x20000085b040 with size: 0.000183 MiB 00:04:43.597 element at address: 0x20000085f300 with size: 0.000183 MiB 00:04:43.597 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:43.597 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:43.597 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:43.597 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:43.597 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:43.597 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:43.597 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:43.597 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:43.597 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:43.597 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:43.597 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:43.597 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:43.597 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:43.597 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:04:43.597 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:04:43.597 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:04:43.597 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:04:43.597 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:04:43.597 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:04:43.597 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:04:43.597 element at address: 0x200028268f80 with size: 0.000183 MiB 00:04:43.597 element at address: 0x200028269040 with size: 0.000183 MiB 00:04:43.597 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:04:43.597 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:04:43.597 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:04:43.597 list of memzone associated elements. 
size: 607.928894 MiB 00:04:43.597 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:04:43.597 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:43.597 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:04:43.597 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:43.597 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:04:43.597 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_3848894_0 00:04:43.597 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:43.597 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3848894_0 00:04:43.597 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:43.597 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_3848894_0 00:04:43.597 element at address: 0x2000199be940 with size: 20.255554 MiB 00:04:43.597 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:43.597 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:04:43.597 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:43.597 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:43.597 associated memzone info: size: 3.000122 MiB name: MP_evtpool_3848894_0 00:04:43.597 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:43.597 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3848894 00:04:43.597 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:43.597 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3848894 00:04:43.597 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:43.597 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:43.597 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:04:43.597 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:43.597 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:43.597 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:43.597 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:43.597 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:43.597 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:43.597 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3848894 00:04:43.597 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:43.597 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3848894 00:04:43.597 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:04:43.597 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3848894 00:04:43.597 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:04:43.597 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3848894 00:04:43.597 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:43.597 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_3848894 00:04:43.597 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:43.597 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3848894 00:04:43.597 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:43.597 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:43.597 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:43.597 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:43.597 element at address: 0x20001987c540 with size: 0.250488 MiB 00:04:43.597 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:43.597 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:43.597 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_3848894 00:04:43.597 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:04:43.597 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3848894 00:04:43.597 element at address: 0x2000064f5b80 with size: 0.031738 
MiB 00:04:43.597 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:43.597 element at address: 0x200028269100 with size: 0.023743 MiB 00:04:43.597 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:43.597 element at address: 0x20000085b100 with size: 0.016113 MiB 00:04:43.597 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3848894 00:04:43.597 element at address: 0x20002826f240 with size: 0.002441 MiB 00:04:43.597 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:43.597 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:04:43.597 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3848894 00:04:43.597 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:43.597 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_3848894 00:04:43.597 element at address: 0x20000085af00 with size: 0.000305 MiB 00:04:43.597 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3848894 00:04:43.597 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:04:43.597 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:43.597 20:45:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:43.597 20:45:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3848894 00:04:43.597 20:45:34 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 3848894 ']' 00:04:43.597 20:45:34 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 3848894 00:04:43.597 20:45:34 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:43.597 20:45:34 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:43.597 20:45:34 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3848894 00:04:43.597 20:45:34 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:43.597 20:45:34 
dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:43.597 20:45:34 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3848894' 00:04:43.597 killing process with pid 3848894 00:04:43.597 20:45:34 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 3848894 00:04:43.597 20:45:34 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 3848894 00:04:44.163 00:04:44.163 real 0m1.201s 00:04:44.163 user 0m1.182s 00:04:44.163 sys 0m0.458s 00:04:44.163 20:45:34 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.163 20:45:34 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:44.163 ************************************ 00:04:44.163 END TEST dpdk_mem_utility 00:04:44.163 ************************************ 00:04:44.163 20:45:34 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:44.163 20:45:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:44.163 20:45:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.163 20:45:34 -- common/autotest_common.sh@10 -- # set +x 00:04:44.163 ************************************ 00:04:44.163 START TEST event 00:04:44.163 ************************************ 00:04:44.163 20:45:34 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:44.163 * Looking for test storage... 
00:04:44.163 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:44.164 20:45:34 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:44.164 20:45:34 event -- common/autotest_common.sh@1693 -- # lcov --version 00:04:44.164 20:45:34 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:44.164 20:45:35 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:44.164 20:45:35 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:44.164 20:45:35 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:44.164 20:45:35 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:44.164 20:45:35 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:44.164 20:45:35 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:44.164 20:45:35 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:44.164 20:45:35 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:44.164 20:45:35 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:44.164 20:45:35 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:44.164 20:45:35 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:44.164 20:45:35 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:44.164 20:45:35 event -- scripts/common.sh@344 -- # case "$op" in 00:04:44.164 20:45:35 event -- scripts/common.sh@345 -- # : 1 00:04:44.164 20:45:35 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:44.164 20:45:35 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:44.164 20:45:35 event -- scripts/common.sh@365 -- # decimal 1 00:04:44.164 20:45:35 event -- scripts/common.sh@353 -- # local d=1 00:04:44.164 20:45:35 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:44.164 20:45:35 event -- scripts/common.sh@355 -- # echo 1 00:04:44.164 20:45:35 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:44.164 20:45:35 event -- scripts/common.sh@366 -- # decimal 2 00:04:44.164 20:45:35 event -- scripts/common.sh@353 -- # local d=2 00:04:44.164 20:45:35 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:44.164 20:45:35 event -- scripts/common.sh@355 -- # echo 2 00:04:44.164 20:45:35 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:44.164 20:45:35 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:44.164 20:45:35 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:44.164 20:45:35 event -- scripts/common.sh@368 -- # return 0 00:04:44.164 20:45:35 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:44.164 20:45:35 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:44.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.164 --rc genhtml_branch_coverage=1 00:04:44.164 --rc genhtml_function_coverage=1 00:04:44.164 --rc genhtml_legend=1 00:04:44.164 --rc geninfo_all_blocks=1 00:04:44.164 --rc geninfo_unexecuted_blocks=1 00:04:44.164 00:04:44.164 ' 00:04:44.164 20:45:35 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:44.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.164 --rc genhtml_branch_coverage=1 00:04:44.164 --rc genhtml_function_coverage=1 00:04:44.164 --rc genhtml_legend=1 00:04:44.164 --rc geninfo_all_blocks=1 00:04:44.164 --rc geninfo_unexecuted_blocks=1 00:04:44.164 00:04:44.164 ' 00:04:44.164 20:45:35 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:44.164 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:04:44.164 --rc genhtml_branch_coverage=1 00:04:44.164 --rc genhtml_function_coverage=1 00:04:44.164 --rc genhtml_legend=1 00:04:44.164 --rc geninfo_all_blocks=1 00:04:44.164 --rc geninfo_unexecuted_blocks=1 00:04:44.164 00:04:44.164 ' 00:04:44.164 20:45:35 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:44.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.164 --rc genhtml_branch_coverage=1 00:04:44.164 --rc genhtml_function_coverage=1 00:04:44.164 --rc genhtml_legend=1 00:04:44.164 --rc geninfo_all_blocks=1 00:04:44.164 --rc geninfo_unexecuted_blocks=1 00:04:44.164 00:04:44.164 ' 00:04:44.164 20:45:35 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:44.164 20:45:35 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:44.164 20:45:35 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:44.164 20:45:35 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:44.164 20:45:35 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.164 20:45:35 event -- common/autotest_common.sh@10 -- # set +x 00:04:44.164 ************************************ 00:04:44.164 START TEST event_perf 00:04:44.164 ************************************ 00:04:44.164 20:45:35 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:44.164 Running I/O for 1 seconds...[2024-11-26 20:45:35.090271] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:04:44.164 [2024-11-26 20:45:35.090336] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3849092 ]
00:04:44.423 [2024-11-26 20:45:35.161450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:04:44.423 [2024-11-26 20:45:35.228830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:04:44.423 [2024-11-26 20:45:35.228887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:04:44.423 [2024-11-26 20:45:35.229005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:04:44.423 [2024-11-26 20:45:35.229008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:45.357 Running I/O for 1 seconds...
00:04:45.357 lcore 0: 230686
00:04:45.357 lcore 1: 230685
00:04:45.357 lcore 2: 230685
00:04:45.357 lcore 3: 230684
00:04:45.616 done.
00:04:45.616 00:04:45.616 real 0m1.224s 00:04:45.616 user 0m4.149s 00:04:45.616 sys 0m0.070s 00:04:45.616 20:45:36 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:45.616 20:45:36 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:45.616 ************************************ 00:04:45.616 END TEST event_perf 00:04:45.616 ************************************ 00:04:45.616 20:45:36 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:45.616 20:45:36 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:45.616 20:45:36 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.616 20:45:36 event -- common/autotest_common.sh@10 -- # set +x 00:04:45.616 ************************************ 00:04:45.616 START TEST event_reactor 00:04:45.616 ************************************ 00:04:45.616 20:45:36 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:45.616 [2024-11-26 20:45:36.360560] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:04:45.616 [2024-11-26 20:45:36.360625] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3849251 ] 00:04:45.616 [2024-11-26 20:45:36.432394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.616 [2024-11-26 20:45:36.496553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.990 test_start 00:04:46.990 oneshot 00:04:46.990 tick 100 00:04:46.990 tick 100 00:04:46.990 tick 250 00:04:46.990 tick 100 00:04:46.990 tick 100 00:04:46.990 tick 100 00:04:46.990 tick 250 00:04:46.990 tick 500 00:04:46.990 tick 100 00:04:46.990 tick 100 00:04:46.990 tick 250 00:04:46.990 tick 100 00:04:46.990 tick 100 00:04:46.990 test_end 00:04:46.990 00:04:46.990 real 0m1.220s 00:04:46.990 user 0m1.148s 00:04:46.990 sys 0m0.068s 00:04:46.990 20:45:37 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:46.990 20:45:37 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:46.990 ************************************ 00:04:46.990 END TEST event_reactor 00:04:46.990 ************************************ 00:04:46.990 20:45:37 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:46.990 20:45:37 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:46.991 20:45:37 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.991 20:45:37 event -- common/autotest_common.sh@10 -- # set +x 00:04:46.991 ************************************ 00:04:46.991 START TEST event_reactor_perf 00:04:46.991 ************************************ 00:04:46.991 20:45:37 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:04:46.991 [2024-11-26 20:45:37.628780] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:04:46.991 [2024-11-26 20:45:37.628844] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3849404 ] 00:04:46.991 [2024-11-26 20:45:37.702285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.991 [2024-11-26 20:45:37.762218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.926 test_start 00:04:47.926 test_end 00:04:47.926 Performance: 357715 events per second 00:04:47.926 00:04:47.926 real 0m1.216s 00:04:47.926 user 0m1.142s 00:04:47.926 sys 0m0.069s 00:04:47.926 20:45:38 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:47.926 20:45:38 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:47.926 ************************************ 00:04:47.926 END TEST event_reactor_perf 00:04:47.926 ************************************ 00:04:47.926 20:45:38 event -- event/event.sh@49 -- # uname -s 00:04:47.926 20:45:38 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:47.926 20:45:38 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:47.926 20:45:38 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:47.926 20:45:38 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:47.926 20:45:38 event -- common/autotest_common.sh@10 -- # set +x 00:04:48.185 ************************************ 00:04:48.185 START TEST event_scheduler 00:04:48.185 ************************************ 00:04:48.185 20:45:38 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:48.185 * Looking for test storage... 00:04:48.185 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:48.185 20:45:38 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:48.185 20:45:38 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:04:48.185 20:45:38 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:48.185 20:45:39 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:48.185 20:45:39 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:48.185 20:45:39 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:48.185 20:45:39 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:48.185 20:45:39 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:48.185 20:45:39 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:48.185 20:45:39 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:48.185 20:45:39 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:48.185 20:45:39 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:48.185 20:45:39 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:48.185 20:45:39 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:48.185 20:45:39 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:48.185 20:45:39 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:48.185 20:45:39 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:48.185 20:45:39 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:48.185 20:45:39 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:48.185 20:45:39 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:48.185 20:45:39 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:48.185 20:45:39 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:48.185 20:45:39 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:48.185 20:45:39 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:48.185 20:45:39 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:48.185 20:45:39 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:48.185 20:45:39 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:48.185 20:45:39 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:48.185 20:45:39 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:48.185 20:45:39 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:48.185 20:45:39 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:48.185 20:45:39 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:48.185 20:45:39 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:48.185 20:45:39 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:48.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.185 --rc genhtml_branch_coverage=1 00:04:48.185 --rc genhtml_function_coverage=1 00:04:48.185 --rc genhtml_legend=1 00:04:48.185 --rc geninfo_all_blocks=1 00:04:48.185 --rc geninfo_unexecuted_blocks=1 00:04:48.185 00:04:48.185 ' 00:04:48.185 20:45:39 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:48.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.185 --rc genhtml_branch_coverage=1 00:04:48.185 --rc genhtml_function_coverage=1 00:04:48.185 --rc 
genhtml_legend=1 00:04:48.185 --rc geninfo_all_blocks=1 00:04:48.185 --rc geninfo_unexecuted_blocks=1 00:04:48.185 00:04:48.185 ' 00:04:48.185 20:45:39 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:48.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.185 --rc genhtml_branch_coverage=1 00:04:48.185 --rc genhtml_function_coverage=1 00:04:48.185 --rc genhtml_legend=1 00:04:48.185 --rc geninfo_all_blocks=1 00:04:48.185 --rc geninfo_unexecuted_blocks=1 00:04:48.185 00:04:48.185 ' 00:04:48.185 20:45:39 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:48.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.185 --rc genhtml_branch_coverage=1 00:04:48.185 --rc genhtml_function_coverage=1 00:04:48.185 --rc genhtml_legend=1 00:04:48.185 --rc geninfo_all_blocks=1 00:04:48.185 --rc geninfo_unexecuted_blocks=1 00:04:48.185 00:04:48.185 ' 00:04:48.185 20:45:39 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:48.185 20:45:39 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3849603 00:04:48.185 20:45:39 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:48.185 20:45:39 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:48.185 20:45:39 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3849603 00:04:48.185 20:45:39 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 3849603 ']' 00:04:48.185 20:45:39 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.185 20:45:39 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:48.185 20:45:39 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:48.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:48.185 20:45:39 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:48.185 20:45:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:48.185 [2024-11-26 20:45:39.070457] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:04:48.185 [2024-11-26 20:45:39.070538] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3849603 ] 00:04:48.443 [2024-11-26 20:45:39.136670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:48.443 [2024-11-26 20:45:39.196705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.443 [2024-11-26 20:45:39.196761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:48.443 [2024-11-26 20:45:39.196827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:48.443 [2024-11-26 20:45:39.196830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:48.443 20:45:39 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:48.443 20:45:39 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:48.443 20:45:39 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:48.443 20:45:39 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.443 20:45:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:48.443 [2024-11-26 20:45:39.301755] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:48.443 [2024-11-26 20:45:39.301783] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:48.443 [2024-11-26 20:45:39.301802] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:48.443 [2024-11-26 20:45:39.301813] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:48.443 [2024-11-26 20:45:39.301823] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:48.443 20:45:39 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.443 20:45:39 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:48.443 20:45:39 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.443 20:45:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:48.701 [2024-11-26 20:45:39.402513] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:48.701 20:45:39 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.701 20:45:39 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:48.701 20:45:39 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:48.701 20:45:39 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.701 20:45:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:48.701 ************************************ 00:04:48.701 START TEST scheduler_create_thread 00:04:48.701 ************************************ 00:04:48.701 20:45:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:48.701 20:45:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:48.702 20:45:39 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.702 20:45:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.702 2 00:04:48.702 20:45:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.702 20:45:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:48.702 20:45:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.702 20:45:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.702 3 00:04:48.702 20:45:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.702 20:45:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:48.702 20:45:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.702 20:45:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.702 4 00:04:48.702 20:45:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.702 20:45:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:48.702 20:45:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.702 20:45:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.702 5 00:04:48.702 20:45:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.702 20:45:39 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:48.702 20:45:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.702 20:45:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.702 6 00:04:48.702 20:45:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.702 20:45:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:48.702 20:45:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.702 20:45:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.702 7 00:04:48.702 20:45:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.702 20:45:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:48.702 20:45:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.702 20:45:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.702 8 00:04:48.702 20:45:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.702 20:45:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:48.702 20:45:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.702 20:45:39 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.702 9 00:04:48.702 20:45:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.702 20:45:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:48.702 20:45:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.702 20:45:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.702 10 00:04:48.702 20:45:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.702 20:45:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:48.702 20:45:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.702 20:45:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.702 20:45:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.702 20:45:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:48.702 20:45:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:48.702 20:45:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.702 20:45:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.702 20:45:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.702 20:45:39 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:48.702 20:45:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.702 20:45:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.268 20:45:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.268 20:45:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:49.268 20:45:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:49.268 20:45:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.268 20:45:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:50.641 20:45:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:50.641 00:04:50.641 real 0m1.755s 00:04:50.641 user 0m0.010s 00:04:50.641 sys 0m0.006s 00:04:50.641 20:45:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:50.641 20:45:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:50.641 ************************************ 00:04:50.641 END TEST scheduler_create_thread 00:04:50.641 ************************************ 00:04:50.641 20:45:41 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:50.641 20:45:41 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3849603 00:04:50.641 20:45:41 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 3849603 ']' 00:04:50.641 20:45:41 event.event_scheduler -- common/autotest_common.sh@958 -- # 
kill -0 3849603 00:04:50.641 20:45:41 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:50.641 20:45:41 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:50.641 20:45:41 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3849603 00:04:50.641 20:45:41 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:50.641 20:45:41 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:50.641 20:45:41 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3849603' 00:04:50.641 killing process with pid 3849603 00:04:50.641 20:45:41 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 3849603 00:04:50.641 20:45:41 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 3849603 00:04:50.899 [2024-11-26 20:45:41.666539] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:04:51.158 00:04:51.158 real 0m3.004s 00:04:51.158 user 0m4.022s 00:04:51.158 sys 0m0.345s 00:04:51.158 20:45:41 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.158 20:45:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:51.158 ************************************ 00:04:51.158 END TEST event_scheduler 00:04:51.158 ************************************ 00:04:51.158 20:45:41 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:51.158 20:45:41 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:51.158 20:45:41 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:51.158 20:45:41 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:51.158 20:45:41 event -- common/autotest_common.sh@10 -- # set +x 00:04:51.158 ************************************ 00:04:51.158 START TEST app_repeat 00:04:51.158 ************************************ 00:04:51.158 20:45:41 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:51.158 20:45:41 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.158 20:45:41 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.158 20:45:41 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:51.158 20:45:41 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:51.158 20:45:41 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:51.158 20:45:41 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:51.158 20:45:41 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:51.158 20:45:41 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3850040 00:04:51.158 20:45:41 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:51.158 20:45:41 event.app_repeat -- event/event.sh@20 
-- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:51.158 20:45:41 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3850040' 00:04:51.158 Process app_repeat pid: 3850040 00:04:51.158 20:45:41 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:51.158 20:45:41 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:51.158 spdk_app_start Round 0 00:04:51.158 20:45:41 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3850040 /var/tmp/spdk-nbd.sock 00:04:51.158 20:45:41 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3850040 ']' 00:04:51.158 20:45:41 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:51.158 20:45:41 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:51.158 20:45:41 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:51.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:51.158 20:45:41 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:51.158 20:45:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:51.158 [2024-11-26 20:45:41.972455] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:04:51.158 [2024-11-26 20:45:41.972519] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3850040 ] 00:04:51.158 [2024-11-26 20:45:42.044503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:51.416 [2024-11-26 20:45:42.110824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:51.416 [2024-11-26 20:45:42.110830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.416 20:45:42 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:51.416 20:45:42 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:51.416 20:45:42 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:51.674 Malloc0 00:04:51.674 20:45:42 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:51.933 Malloc1 00:04:51.933 20:45:42 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:51.933 20:45:42 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.933 20:45:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:51.933 20:45:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:51.933 20:45:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.933 20:45:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:51.933 20:45:42 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:51.933 
20:45:42 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.933 20:45:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:51.933 20:45:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:51.933 20:45:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.933 20:45:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:51.933 20:45:42 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:51.933 20:45:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:51.933 20:45:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:51.933 20:45:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:52.191 /dev/nbd0 00:04:52.449 20:45:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:52.449 20:45:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:52.449 20:45:43 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:52.449 20:45:43 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:52.449 20:45:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:52.449 20:45:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:52.449 20:45:43 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:52.449 20:45:43 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:52.449 20:45:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:52.449 20:45:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:52.449 20:45:43 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:52.449 1+0 records in 00:04:52.449 1+0 records out 00:04:52.449 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000229215 s, 17.9 MB/s 00:04:52.449 20:45:43 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:52.449 20:45:43 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:52.449 20:45:43 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:52.449 20:45:43 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:52.449 20:45:43 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:52.449 20:45:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:52.449 20:45:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:52.449 20:45:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:52.708 /dev/nbd1 00:04:52.708 20:45:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:52.708 20:45:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:52.708 20:45:43 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:52.708 20:45:43 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:52.708 20:45:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:52.708 20:45:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:52.708 20:45:43 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:52.708 20:45:43 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:52.708 20:45:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:52.708 20:45:43 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:52.708 20:45:43 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:52.708 1+0 records in 00:04:52.708 1+0 records out 00:04:52.708 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000199382 s, 20.5 MB/s 00:04:52.708 20:45:43 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:52.708 20:45:43 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:52.708 20:45:43 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:52.708 20:45:43 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:52.708 20:45:43 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:52.708 20:45:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:52.708 20:45:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:52.708 20:45:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:52.708 20:45:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.708 20:45:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:52.966 20:45:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:52.966 { 00:04:52.966 "nbd_device": "/dev/nbd0", 00:04:52.966 "bdev_name": "Malloc0" 00:04:52.966 }, 00:04:52.966 { 00:04:52.966 "nbd_device": "/dev/nbd1", 00:04:52.966 "bdev_name": "Malloc1" 00:04:52.966 } 00:04:52.966 ]' 00:04:52.966 20:45:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:52.966 { 00:04:52.966 "nbd_device": "/dev/nbd0", 00:04:52.966 "bdev_name": "Malloc0" 00:04:52.966 
}, 00:04:52.966 { 00:04:52.966 "nbd_device": "/dev/nbd1", 00:04:52.966 "bdev_name": "Malloc1" 00:04:52.966 } 00:04:52.966 ]' 00:04:52.966 20:45:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:52.966 20:45:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:52.967 /dev/nbd1' 00:04:52.967 20:45:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:52.967 /dev/nbd1' 00:04:52.967 20:45:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:52.967 20:45:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:52.967 20:45:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:52.967 20:45:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:52.967 20:45:43 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:52.967 20:45:43 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:52.967 20:45:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.967 20:45:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:52.967 20:45:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:52.967 20:45:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:52.967 20:45:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:52.967 20:45:43 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:52.967 256+0 records in 00:04:52.967 256+0 records out 00:04:52.967 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00487833 s, 215 MB/s 00:04:52.967 20:45:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:52.967 20:45:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:52.967 256+0 records in 00:04:52.967 256+0 records out 00:04:52.967 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0225469 s, 46.5 MB/s 00:04:52.967 20:45:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:52.967 20:45:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:52.967 256+0 records in 00:04:52.967 256+0 records out 00:04:52.967 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0238098 s, 44.0 MB/s 00:04:52.967 20:45:43 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:52.967 20:45:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.967 20:45:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:52.967 20:45:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:52.967 20:45:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:52.967 20:45:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:52.967 20:45:43 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:52.967 20:45:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:52.967 20:45:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:52.967 20:45:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:52.967 20:45:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:52.967 20:45:43 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:52.967 20:45:43 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:52.967 20:45:43 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.967 20:45:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.967 20:45:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:52.967 20:45:43 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:52.967 20:45:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:52.967 20:45:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:53.534 20:45:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:53.534 20:45:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:53.534 20:45:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:53.534 20:45:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:53.534 20:45:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:53.534 20:45:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:53.534 20:45:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:53.534 20:45:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:53.534 20:45:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:53.534 20:45:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:53.792 20:45:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:53.792 20:45:44 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:53.792 20:45:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:53.792 20:45:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:53.792 20:45:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:53.792 20:45:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:53.792 20:45:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:53.792 20:45:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:53.792 20:45:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:53.792 20:45:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.792 20:45:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:54.050 20:45:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:54.050 20:45:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:54.050 20:45:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:54.050 20:45:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:54.050 20:45:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:54.050 20:45:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:54.050 20:45:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:54.050 20:45:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:54.050 20:45:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:54.050 20:45:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:54.050 20:45:44 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:54.050 20:45:44 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:54.050 20:45:44 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:54.309 20:45:45 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:54.567 [2024-11-26 20:45:45.327461] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:54.567 [2024-11-26 20:45:45.388528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.567 [2024-11-26 20:45:45.388528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:54.567 [2024-11-26 20:45:45.445431] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:54.567 [2024-11-26 20:45:45.445500] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:57.891 20:45:48 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:57.892 20:45:48 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:57.892 spdk_app_start Round 1 00:04:57.892 20:45:48 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3850040 /var/tmp/spdk-nbd.sock 00:04:57.892 20:45:48 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3850040 ']' 00:04:57.892 20:45:48 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:57.892 20:45:48 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:57.892 20:45:48 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:57.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:57.892 20:45:48 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:57.892 20:45:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:57.892 20:45:48 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:57.892 20:45:48 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:57.892 20:45:48 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:57.892 Malloc0 00:04:57.892 20:45:48 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:58.172 Malloc1 00:04:58.172 20:45:48 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:58.172 20:45:48 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.172 20:45:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:58.172 20:45:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:58.172 20:45:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.172 20:45:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:58.172 20:45:48 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:58.172 20:45:48 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.172 20:45:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:58.172 20:45:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:58.172 20:45:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.172 20:45:48 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:04:58.172 20:45:48 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:58.172 20:45:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:58.172 20:45:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:58.172 20:45:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:58.430 /dev/nbd0 00:04:58.430 20:45:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:58.430 20:45:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:58.430 20:45:49 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:58.430 20:45:49 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:58.430 20:45:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:58.430 20:45:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:58.430 20:45:49 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:58.430 20:45:49 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:58.430 20:45:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:58.430 20:45:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:58.430 20:45:49 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:58.430 1+0 records in 00:04:58.430 1+0 records out 00:04:58.430 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00024018 s, 17.1 MB/s 00:04:58.430 20:45:49 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:58.430 20:45:49 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:58.430 20:45:49 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:58.430 20:45:49 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:58.430 20:45:49 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:58.430 20:45:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:58.430 20:45:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:58.430 20:45:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:58.688 /dev/nbd1 00:04:58.688 20:45:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:58.688 20:45:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:58.688 20:45:49 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:58.688 20:45:49 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:58.688 20:45:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:58.688 20:45:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:58.688 20:45:49 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:58.688 20:45:49 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:58.688 20:45:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:58.688 20:45:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:58.688 20:45:49 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:58.688 1+0 records in 00:04:58.688 1+0 records out 00:04:58.688 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000158059 s, 25.9 MB/s 00:04:58.688 20:45:49 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:58.688 20:45:49 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:58.688 20:45:49 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:58.688 20:45:49 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:58.688 20:45:49 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:58.688 20:45:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:58.688 20:45:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:58.689 20:45:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:58.689 20:45:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.947 20:45:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:59.204 20:45:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:59.204 { 00:04:59.204 "nbd_device": "/dev/nbd0", 00:04:59.204 "bdev_name": "Malloc0" 00:04:59.204 }, 00:04:59.204 { 00:04:59.204 "nbd_device": "/dev/nbd1", 00:04:59.204 "bdev_name": "Malloc1" 00:04:59.204 } 00:04:59.204 ]' 00:04:59.204 20:45:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:59.204 { 00:04:59.204 "nbd_device": "/dev/nbd0", 00:04:59.204 "bdev_name": "Malloc0" 00:04:59.204 }, 00:04:59.204 { 00:04:59.204 "nbd_device": "/dev/nbd1", 00:04:59.204 "bdev_name": "Malloc1" 00:04:59.204 } 00:04:59.204 ]' 00:04:59.204 20:45:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:59.204 20:45:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:59.204 /dev/nbd1' 00:04:59.204 20:45:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:59.204 /dev/nbd1' 00:04:59.205 
20:45:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:59.205 20:45:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:59.205 20:45:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:59.205 20:45:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:59.205 20:45:49 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:59.205 20:45:49 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:59.205 20:45:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.205 20:45:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:59.205 20:45:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:59.205 20:45:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:59.205 20:45:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:59.205 20:45:49 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:59.205 256+0 records in 00:04:59.205 256+0 records out 00:04:59.205 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00506003 s, 207 MB/s 00:04:59.205 20:45:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:59.205 20:45:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:59.205 256+0 records in 00:04:59.205 256+0 records out 00:04:59.205 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0228196 s, 46.0 MB/s 00:04:59.205 20:45:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:59.205 20:45:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:59.205 256+0 records in 00:04:59.205 256+0 records out 00:04:59.205 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0220148 s, 47.6 MB/s 00:04:59.205 20:45:49 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:59.205 20:45:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.205 20:45:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:59.205 20:45:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:59.205 20:45:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:59.205 20:45:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:59.205 20:45:49 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:59.205 20:45:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:59.205 20:45:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:59.205 20:45:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:59.205 20:45:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:59.205 20:45:49 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:59.205 20:45:50 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:59.205 20:45:50 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.205 20:45:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:04:59.205 20:45:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:59.205 20:45:50 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:59.205 20:45:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:59.205 20:45:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:59.463 20:45:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:59.463 20:45:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:59.463 20:45:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:59.463 20:45:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:59.463 20:45:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:59.463 20:45:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:59.463 20:45:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:59.463 20:45:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:59.463 20:45:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:59.463 20:45:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:59.720 20:45:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:59.720 20:45:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:59.720 20:45:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:59.720 20:45:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:59.720 20:45:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:59.720 20:45:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:59.720 20:45:50 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:04:59.720 20:45:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:59.720 20:45:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:59.720 20:45:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.720 20:45:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:59.978 20:45:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:59.978 20:45:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:59.978 20:45:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:00.236 20:45:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:00.236 20:45:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:00.236 20:45:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:00.236 20:45:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:00.236 20:45:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:00.236 20:45:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:00.236 20:45:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:00.236 20:45:50 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:00.236 20:45:50 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:00.236 20:45:50 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:00.492 20:45:51 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:00.492 [2024-11-26 20:45:51.426527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:00.750 [2024-11-26 20:45:51.488362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.750 [2024-11-26 20:45:51.488362] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:00.750 [2024-11-26 20:45:51.551598] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:00.750 [2024-11-26 20:45:51.551698] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:03.318 20:45:54 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:03.318 20:45:54 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:03.318 spdk_app_start Round 2 00:05:03.318 20:45:54 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3850040 /var/tmp/spdk-nbd.sock 00:05:03.318 20:45:54 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3850040 ']' 00:05:03.318 20:45:54 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:03.318 20:45:54 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:03.318 20:45:54 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:03.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
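Stripped of the xtrace prefixes, each round above runs the same data-integrity cycle from `bdev/nbd_common.sh`: fill a scratch file with random data via `dd`, copy it onto each nbd device, then byte-compare with `cmp -b -n 1M`. The sketch below reproduces that cycle standalone; ordinary temp files stand in for `/dev/nbd0` and `/dev/nbd1` (an assumption for portability — no SPDK target or nbd kernel module is needed, and `iflag=direct`/`oflag=direct` are dropped because they do not apply to regular files):

```shell
#!/usr/bin/env bash
# Sketch of the nbd_dd_data_verify write/verify cycle seen in the log.
# ASSUMPTION: plain temp files play the role of the real /dev/nbd* devices.
set -euo pipefail

rand_file=$(mktemp)   # stands in for .../test/event/nbdrandtest
nbd0=$(mktemp)        # stands in for /dev/nbd0
nbd1=$(mktemp)        # stands in for /dev/nbd1

# Write phase: 256 x 4 KiB random blocks (1 MiB), copied to every "device".
dd if=/dev/urandom of="$rand_file" bs=4096 count=256 status=none
for dev in "$nbd0" "$nbd1"; do
    dd if="$rand_file" of="$dev" bs=4096 count=256 status=none
done

# Verify phase: byte-compare the first 1 MiB of each device against the
# reference file; cmp exits non-zero on the first mismatching byte.
result=ok
for dev in "$nbd0" "$nbd1"; do
    cmp -b -n 1M "$rand_file" "$dev" || result=corrupt
done
echo "$result"   # prints: ok

rm -f "$rand_file" "$nbd0" "$nbd1"
```

The real test additionally deletes the reference file between rounds, which is why a fresh `nbdrandtest` is generated each time the cycle appears in the log.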
00:05:03.318 20:45:54 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:03.318 20:45:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:03.575 20:45:54 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:03.575 20:45:54 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:03.575 20:45:54 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:03.834 Malloc0 00:05:03.834 20:45:54 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:04.093 Malloc1 00:05:04.093 20:45:55 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:04.093 20:45:55 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.093 20:45:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:04.093 20:45:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:04.093 20:45:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.093 20:45:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:04.350 20:45:55 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:04.350 20:45:55 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.350 20:45:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:04.350 20:45:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:04.350 20:45:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.350 20:45:55 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:05:04.350 20:45:55 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:04.350 20:45:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:04.350 20:45:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:04.350 20:45:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:04.609 /dev/nbd0 00:05:04.609 20:45:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:04.609 20:45:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:04.609 20:45:55 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:04.609 20:45:55 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:04.609 20:45:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:04.609 20:45:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:04.609 20:45:55 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:04.609 20:45:55 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:04.609 20:45:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:04.609 20:45:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:04.609 20:45:55 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:04.609 1+0 records in 00:05:04.609 1+0 records out 00:05:04.609 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000207492 s, 19.7 MB/s 00:05:04.609 20:45:55 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:04.609 20:45:55 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:04.609 20:45:55 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:04.609 20:45:55 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:04.609 20:45:55 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:04.609 20:45:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:04.609 20:45:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:04.609 20:45:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:04.867 /dev/nbd1 00:05:04.867 20:45:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:04.867 20:45:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:04.867 20:45:55 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:04.867 20:45:55 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:04.867 20:45:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:04.867 20:45:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:04.867 20:45:55 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:04.867 20:45:55 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:04.867 20:45:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:04.867 20:45:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:04.867 20:45:55 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:04.867 1+0 records in 00:05:04.867 1+0 records out 00:05:04.867 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000216308 s, 18.9 MB/s 00:05:04.867 20:45:55 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:04.867 20:45:55 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:04.867 20:45:55 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:04.867 20:45:55 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:04.867 20:45:55 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:04.867 20:45:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:04.867 20:45:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:04.867 20:45:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:04.867 20:45:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.867 20:45:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:05.125 20:45:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:05.125 { 00:05:05.125 "nbd_device": "/dev/nbd0", 00:05:05.125 "bdev_name": "Malloc0" 00:05:05.125 }, 00:05:05.125 { 00:05:05.125 "nbd_device": "/dev/nbd1", 00:05:05.125 "bdev_name": "Malloc1" 00:05:05.125 } 00:05:05.125 ]' 00:05:05.125 20:45:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:05.125 { 00:05:05.125 "nbd_device": "/dev/nbd0", 00:05:05.125 "bdev_name": "Malloc0" 00:05:05.125 }, 00:05:05.125 { 00:05:05.125 "nbd_device": "/dev/nbd1", 00:05:05.125 "bdev_name": "Malloc1" 00:05:05.125 } 00:05:05.125 ]' 00:05:05.125 20:45:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:05.125 20:45:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:05.125 /dev/nbd1' 00:05:05.125 20:45:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:05.125 /dev/nbd1' 00:05:05.125 
20:45:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:05.125 20:45:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:05.125 20:45:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:05.125 20:45:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:05.125 20:45:56 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:05.125 20:45:56 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:05.125 20:45:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.125 20:45:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:05.125 20:45:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:05.125 20:45:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:05.125 20:45:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:05.126 20:45:56 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:05.126 256+0 records in 00:05:05.126 256+0 records out 00:05:05.126 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00514081 s, 204 MB/s 00:05:05.126 20:45:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:05.126 20:45:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:05.126 256+0 records in 00:05:05.126 256+0 records out 00:05:05.126 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.019854 s, 52.8 MB/s 00:05:05.126 20:45:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:05.126 20:45:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:05.383 256+0 records in 00:05:05.383 256+0 records out 00:05:05.383 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0240343 s, 43.6 MB/s 00:05:05.383 20:45:56 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:05.383 20:45:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.383 20:45:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:05.383 20:45:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:05.383 20:45:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:05.383 20:45:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:05.383 20:45:56 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:05.383 20:45:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:05.383 20:45:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:05.383 20:45:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:05.383 20:45:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:05.383 20:45:56 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:05.383 20:45:56 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:05.383 20:45:56 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.383 20:45:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:05:05.383 20:45:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:05.383 20:45:56 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:05.383 20:45:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:05.383 20:45:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:05.640 20:45:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:05.640 20:45:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:05.640 20:45:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:05.640 20:45:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:05.640 20:45:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:05.640 20:45:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:05.640 20:45:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:05.640 20:45:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:05.640 20:45:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:05.640 20:45:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:05.897 20:45:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:05.897 20:45:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:05.897 20:45:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:05.897 20:45:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:05.898 20:45:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:05.898 20:45:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:05.898 20:45:56 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:05:05.898 20:45:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:05.898 20:45:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:05.898 20:45:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.898 20:45:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:06.155 20:45:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:06.155 20:45:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:06.155 20:45:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:06.155 20:45:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:06.155 20:45:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:06.155 20:45:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:06.155 20:45:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:06.155 20:45:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:06.155 20:45:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:06.155 20:45:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:06.155 20:45:56 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:06.155 20:45:56 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:06.155 20:45:56 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:06.413 20:45:57 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:06.672 [2024-11-26 20:45:57.504581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:06.672 [2024-11-26 20:45:57.564767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.672 [2024-11-26 20:45:57.564772] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.930 [2024-11-26 20:45:57.627281] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:06.930 [2024-11-26 20:45:57.627371] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:09.458 20:46:00 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3850040 /var/tmp/spdk-nbd.sock 00:05:09.458 20:46:00 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3850040 ']' 00:05:09.458 20:46:00 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:09.458 20:46:00 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:09.458 20:46:00 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:09.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
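The teardown that follows (`killprocess` from `common/autotest_common.sh`: a `kill -0` liveness probe, a `ps` lookup to confirm the target is a reactor rather than a sudo process, SIGTERM, then `wait`) can be exercised against any child process. A minimal sketch, assuming a background `sleep` as a stand-in for the SPDK reactor:

```shell
#!/usr/bin/env bash
# Sketch of the killprocess teardown pattern visible in this log.
# ASSUMPTION: `sleep 30` plays the role of the spdk_tgt/reactor process.
set -euo pipefail

sleep 30 &
pid=$!

kill -0 "$pid"                            # liveness probe: fails if pid is gone
name=$(ps --no-headers -o comm= "$pid")   # resolve the process name
[ "$name" != sudo ]                       # refuse to TERM a sudo process directly

echo "killing process with pid $pid"
kill "$pid"                               # SIGTERM, as spdk_kill_instance sends
wait "$pid" || true                       # reap the child; a signal exit status
                                          # (143) is the expected outcome here
```

The `|| true` on `wait` mirrors why the log shows `-- # wait 3850040` succeeding even though the process dies by signal: the helper tolerates the non-zero status of a SIGTERM'd child.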
00:05:09.458 20:46:00 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:09.458 20:46:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:09.716 20:46:00 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:09.716 20:46:00 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:09.716 20:46:00 event.app_repeat -- event/event.sh@39 -- # killprocess 3850040 00:05:09.716 20:46:00 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 3850040 ']' 00:05:09.716 20:46:00 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 3850040 00:05:09.716 20:46:00 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:09.716 20:46:00 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:09.716 20:46:00 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3850040 00:05:09.716 20:46:00 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:09.716 20:46:00 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:09.716 20:46:00 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3850040' 00:05:09.716 killing process with pid 3850040 00:05:09.716 20:46:00 event.app_repeat -- common/autotest_common.sh@973 -- # kill 3850040 00:05:09.716 20:46:00 event.app_repeat -- common/autotest_common.sh@978 -- # wait 3850040 00:05:09.974 spdk_app_start is called in Round 0. 00:05:09.974 Shutdown signal received, stop current app iteration 00:05:09.974 Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 reinitialization... 00:05:09.974 spdk_app_start is called in Round 1. 00:05:09.974 Shutdown signal received, stop current app iteration 00:05:09.974 Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 reinitialization... 00:05:09.974 spdk_app_start is called in Round 2. 
00:05:09.974 Shutdown signal received, stop current app iteration 00:05:09.974 Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 reinitialization... 00:05:09.974 spdk_app_start is called in Round 3. 00:05:09.974 Shutdown signal received, stop current app iteration 00:05:09.974 20:46:00 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:09.974 20:46:00 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:09.974 00:05:09.974 real 0m18.851s 00:05:09.974 user 0m41.614s 00:05:09.974 sys 0m3.249s 00:05:09.974 20:46:00 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:09.974 20:46:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:09.974 ************************************ 00:05:09.974 END TEST app_repeat 00:05:09.974 ************************************ 00:05:09.974 20:46:00 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:09.974 20:46:00 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:09.974 20:46:00 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:09.974 20:46:00 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.974 20:46:00 event -- common/autotest_common.sh@10 -- # set +x 00:05:09.974 ************************************ 00:05:09.974 START TEST cpu_locks 00:05:09.974 ************************************ 00:05:09.974 20:46:00 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:09.974 * Looking for test storage... 
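The app_repeat run above tears its target down with `killprocess`. A minimal sketch of that helper, reconstructed from the xtrace (the exact guards in autotest_common.sh may differ; this body is an assumption):

```shell
# Sketch of the killprocess pattern seen in the trace: verify the pid exists,
# refuse to signal a sudo wrapper, then kill the process and reap it.
killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1                # no pid given
    kill -0 "$pid" 2>/dev/null || return 1   # process must be alive
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")
    [ "$process_name" = sudo ] && return 1   # never kill sudo itself
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true          # reap; ignore the kill status
}
```

In the trace the target's name resolves to `reactor_0` because SPDK renames its reactor threads, which is why the sudo guard compares against that thread name.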
00:05:09.974 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:09.974 20:46:00 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:09.974 20:46:00 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:05:09.974 20:46:00 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:10.233 20:46:00 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:10.233 20:46:00 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:10.233 20:46:00 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:10.233 20:46:00 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:10.233 20:46:00 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:10.233 20:46:00 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:10.233 20:46:00 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:10.233 20:46:00 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:10.233 20:46:00 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:10.233 20:46:00 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:10.233 20:46:00 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:10.233 20:46:00 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:10.233 20:46:00 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:10.233 20:46:00 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:10.233 20:46:00 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:10.233 20:46:00 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:10.233 20:46:00 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:10.233 20:46:00 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:10.233 20:46:00 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:10.233 20:46:00 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:10.233 20:46:00 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:10.233 20:46:00 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:10.233 20:46:00 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:10.233 20:46:00 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:10.233 20:46:00 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:10.233 20:46:00 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:10.233 20:46:00 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:10.233 20:46:00 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:10.233 20:46:00 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:10.234 20:46:00 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:10.234 20:46:00 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:10.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.234 --rc genhtml_branch_coverage=1 00:05:10.234 --rc genhtml_function_coverage=1 00:05:10.234 --rc genhtml_legend=1 00:05:10.234 --rc geninfo_all_blocks=1 00:05:10.234 --rc geninfo_unexecuted_blocks=1 00:05:10.234 00:05:10.234 ' 00:05:10.234 20:46:00 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:10.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.234 --rc genhtml_branch_coverage=1 00:05:10.234 --rc genhtml_function_coverage=1 00:05:10.234 --rc genhtml_legend=1 00:05:10.234 --rc geninfo_all_blocks=1 00:05:10.234 --rc geninfo_unexecuted_blocks=1 
00:05:10.234 00:05:10.234 ' 00:05:10.234 20:46:00 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:10.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.234 --rc genhtml_branch_coverage=1 00:05:10.234 --rc genhtml_function_coverage=1 00:05:10.234 --rc genhtml_legend=1 00:05:10.234 --rc geninfo_all_blocks=1 00:05:10.234 --rc geninfo_unexecuted_blocks=1 00:05:10.234 00:05:10.234 ' 00:05:10.234 20:46:00 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:10.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.234 --rc genhtml_branch_coverage=1 00:05:10.234 --rc genhtml_function_coverage=1 00:05:10.234 --rc genhtml_legend=1 00:05:10.234 --rc geninfo_all_blocks=1 00:05:10.234 --rc geninfo_unexecuted_blocks=1 00:05:10.234 00:05:10.234 ' 00:05:10.234 20:46:00 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:10.234 20:46:00 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:10.234 20:46:00 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:10.234 20:46:00 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:10.234 20:46:00 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:10.234 20:46:00 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.234 20:46:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:10.234 ************************************ 00:05:10.234 START TEST default_locks 00:05:10.234 ************************************ 00:05:10.234 20:46:01 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:10.234 20:46:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3852532 00:05:10.234 20:46:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 
0x1 00:05:10.234 20:46:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3852532 00:05:10.234 20:46:01 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 3852532 ']' 00:05:10.234 20:46:01 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.234 20:46:01 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:10.234 20:46:01 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.234 20:46:01 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:10.234 20:46:01 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:10.234 [2024-11-26 20:46:01.074293] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:05:10.234 [2024-11-26 20:46:01.074388] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3852532 ] 00:05:10.234 [2024-11-26 20:46:01.141091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.493 [2024-11-26 20:46:01.204461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.751 20:46:01 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:10.751 20:46:01 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:10.751 20:46:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3852532 00:05:10.751 20:46:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3852532 00:05:10.751 20:46:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:11.009 lslocks: write error 00:05:11.009 20:46:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3852532 00:05:11.009 20:46:01 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 3852532 ']' 00:05:11.009 20:46:01 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 3852532 00:05:11.009 20:46:01 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:11.009 20:46:01 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:11.009 20:46:01 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3852532 00:05:11.009 20:46:01 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:11.009 20:46:01 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:11.009 20:46:01 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 3852532' 00:05:11.009 killing process with pid 3852532 00:05:11.009 20:46:01 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 3852532 00:05:11.009 20:46:01 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 3852532 00:05:11.575 20:46:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3852532 00:05:11.575 20:46:02 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:11.575 20:46:02 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3852532 00:05:11.575 20:46:02 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:11.575 20:46:02 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:11.575 20:46:02 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:11.575 20:46:02 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:11.575 20:46:02 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 3852532 00:05:11.575 20:46:02 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 3852532 ']' 00:05:11.575 20:46:02 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.575 20:46:02 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:11.575 20:46:02 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
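The default_locks test above decides whether the target still holds its CPU core lock by grepping `lslocks` output for `spdk_cpu_lock` (the `lslocks: write error` line is harmless stderr noise from `lslocks` itself). The check reduces to a one-liner; the helper name matches the trace, the body is a sketch:

```shell
# locks_exist: succeed when the given pid holds an advisory lock whose path
# contains spdk_cpu_lock (SPDK takes one such file lock per claimed core).
locks_exist() {
    lslocks -p "$1" 2>/dev/null | grep -q spdk_cpu_lock
}
```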
00:05:11.576 20:46:02 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:11.576 20:46:02 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:11.576 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3852532) - No such process 00:05:11.576 ERROR: process (pid: 3852532) is no longer running 00:05:11.576 20:46:02 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:11.576 20:46:02 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:11.576 20:46:02 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:11.576 20:46:02 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:11.576 20:46:02 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:11.576 20:46:02 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:11.576 20:46:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:11.576 20:46:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:11.576 20:46:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:11.576 20:46:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:11.576 00:05:11.576 real 0m1.218s 00:05:11.576 user 0m1.182s 00:05:11.576 sys 0m0.512s 00:05:11.576 20:46:02 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:11.576 20:46:02 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:11.576 ************************************ 00:05:11.576 END TEST default_locks 00:05:11.576 ************************************ 00:05:11.576 20:46:02 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:11.576 20:46:02 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:11.576 20:46:02 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.576 20:46:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:11.576 ************************************ 00:05:11.576 START TEST default_locks_via_rpc 00:05:11.576 ************************************ 00:05:11.576 20:46:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:11.576 20:46:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3852702 00:05:11.576 20:46:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:11.576 20:46:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3852702 00:05:11.576 20:46:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3852702 ']' 00:05:11.576 20:46:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.576 20:46:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:11.576 20:46:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.576 20:46:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:11.576 20:46:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.576 [2024-11-26 20:46:02.339027] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:05:11.576 [2024-11-26 20:46:02.339124] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3852702 ] 00:05:11.576 [2024-11-26 20:46:02.404363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.576 [2024-11-26 20:46:02.463357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.835 20:46:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:11.835 20:46:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:11.835 20:46:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:11.835 20:46:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.835 20:46:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.835 20:46:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.835 20:46:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:11.835 20:46:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:11.835 20:46:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:11.835 20:46:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:11.835 20:46:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:11.835 20:46:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.835 20:46:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.835 20:46:02 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.835 20:46:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3852702 00:05:11.835 20:46:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3852702 00:05:11.835 20:46:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:12.093 20:46:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3852702 00:05:12.093 20:46:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 3852702 ']' 00:05:12.093 20:46:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 3852702 00:05:12.093 20:46:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:12.093 20:46:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:12.093 20:46:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3852702 00:05:12.093 20:46:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:12.093 20:46:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:12.093 20:46:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3852702' 00:05:12.093 killing process with pid 3852702 00:05:12.093 20:46:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 3852702 00:05:12.094 20:46:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 3852702 00:05:12.660 00:05:12.660 real 0m1.174s 00:05:12.660 user 0m1.109s 00:05:12.660 sys 0m0.544s 00:05:12.660 20:46:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.660 20:46:03 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.660 ************************************ 00:05:12.660 END TEST default_locks_via_rpc 00:05:12.660 ************************************ 00:05:12.660 20:46:03 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:12.660 20:46:03 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:12.660 20:46:03 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.660 20:46:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:12.660 ************************************ 00:05:12.660 START TEST non_locking_app_on_locked_coremask 00:05:12.660 ************************************ 00:05:12.660 20:46:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:12.660 20:46:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3852862 00:05:12.660 20:46:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:12.660 20:46:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3852862 /var/tmp/spdk.sock 00:05:12.660 20:46:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3852862 ']' 00:05:12.660 20:46:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.660 20:46:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:12.660 20:46:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:05:12.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.660 20:46:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:12.660 20:46:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:12.660 [2024-11-26 20:46:03.560617] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:05:12.660 [2024-11-26 20:46:03.560735] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3852862 ] 00:05:12.919 [2024-11-26 20:46:03.627411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.919 [2024-11-26 20:46:03.686482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.177 20:46:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:13.177 20:46:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:13.177 20:46:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3852870 00:05:13.177 20:46:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:13.177 20:46:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3852870 /var/tmp/spdk2.sock 00:05:13.177 20:46:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3852870 ']' 00:05:13.177 20:46:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:05:13.177 20:46:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:13.177 20:46:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:13.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:13.178 20:46:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:13.178 20:46:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:13.178 [2024-11-26 20:46:04.027043] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:05:13.178 [2024-11-26 20:46:04.027125] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3852870 ] 00:05:13.436 [2024-11-26 20:46:04.142845] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
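Both targets in this test are launched and then polled with `waitforlisten` until their RPC sockets (`/var/tmp/spdk.sock`, `/var/tmp/spdk2.sock`) come up. A simplified sketch of that polling loop, with the retry limit exposed as a parameter for illustration (the real helper's liveness probe is richer; this body is an assumption):

```shell
# waitforlisten: poll until <pid> is alive and its UNIX-domain RPC socket
# exists on disk, giving up after max_retries attempts.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=${3:-100} i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1  # target died while we waited
        [ -S "$rpc_addr" ] && return 0          # socket node exists: listening
        sleep 0.1
    done
    return 1
}
```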
00:05:13.436 [2024-11-26 20:46:04.142877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.436 [2024-11-26 20:46:04.269905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.370 20:46:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:14.370 20:46:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:14.370 20:46:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3852862 00:05:14.370 20:46:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3852862 00:05:14.370 20:46:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:14.628 lslocks: write error 00:05:14.628 20:46:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3852862 00:05:14.628 20:46:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3852862 ']' 00:05:14.628 20:46:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3852862 00:05:14.628 20:46:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:14.628 20:46:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:14.628 20:46:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3852862 00:05:14.628 20:46:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:14.628 20:46:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:14.628 20:46:05 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 3852862' 00:05:14.628 killing process with pid 3852862 00:05:14.628 20:46:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3852862 00:05:14.628 20:46:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3852862 00:05:15.563 20:46:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3852870 00:05:15.563 20:46:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3852870 ']' 00:05:15.563 20:46:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3852870 00:05:15.563 20:46:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:15.563 20:46:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:15.563 20:46:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3852870 00:05:15.563 20:46:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:15.563 20:46:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:15.563 20:46:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3852870' 00:05:15.563 killing process with pid 3852870 00:05:15.563 20:46:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3852870 00:05:15.563 20:46:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3852870 00:05:16.129 00:05:16.129 real 0m3.317s 00:05:16.129 user 0m3.529s 00:05:16.129 sys 0m1.052s 00:05:16.129 20:46:06 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:16.129 20:46:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:16.129 ************************************ 00:05:16.129 END TEST non_locking_app_on_locked_coremask 00:05:16.129 ************************************ 00:05:16.129 20:46:06 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:16.129 20:46:06 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:16.129 20:46:06 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:16.129 20:46:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:16.129 ************************************ 00:05:16.129 START TEST locking_app_on_unlocked_coremask 00:05:16.129 ************************************ 00:05:16.129 20:46:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:16.129 20:46:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3853296 00:05:16.129 20:46:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:16.129 20:46:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3853296 /var/tmp/spdk.sock 00:05:16.129 20:46:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3853296 ']' 00:05:16.129 20:46:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.129 20:46:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:16.129 20:46:06 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:16.129 20:46:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:16.129 20:46:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:16.129 [2024-11-26 20:46:06.933854] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:05:16.129 [2024-11-26 20:46:06.933958] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3853296 ] 00:05:16.129 [2024-11-26 20:46:07.007783] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:16.129 [2024-11-26 20:46:07.007833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:16.387 [2024-11-26 20:46:07.068579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:16.646 20:46:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:16.646 20:46:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:16.646 20:46:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3853309
00:05:16.646 20:46:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:05:16.646 20:46:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3853309 /var/tmp/spdk2.sock
00:05:16.646 20:46:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3853309 ']'
00:05:16.646 20:46:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:16.646 20:46:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:16.646 20:46:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:16.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:16.646 20:46:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:16.646 20:46:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:16.646 [2024-11-26 20:46:07.406556] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization...
00:05:16.646 [2024-11-26 20:46:07.406639] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3853309 ]
00:05:16.646 [2024-11-26 20:46:07.512187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:16.904 [2024-11-26 20:46:07.626336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:17.470 20:46:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:17.470 20:46:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:17.470 20:46:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3853309
00:05:17.470 20:46:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3853309
00:05:17.470 20:46:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:18.036 lslocks: write error
00:05:18.036 20:46:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3853296
00:05:18.036 20:46:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3853296 ']'
00:05:18.036 20:46:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 3853296
00:05:18.036 20:46:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:05:18.036 20:46:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:18.036 20:46:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3853296
00:05:18.036 20:46:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:18.036 20:46:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:18.036 20:46:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3853296'
00:05:18.036 killing process with pid 3853296
00:05:18.036 20:46:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 3853296
00:05:18.036 20:46:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 3853296
00:05:18.970 20:46:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3853309
00:05:18.970 20:46:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3853309 ']'
00:05:18.970 20:46:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 3853309
00:05:18.970 20:46:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:05:18.970 20:46:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:18.970 20:46:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3853309
00:05:18.970 20:46:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:18.970 20:46:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:18.970 20:46:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3853309'
00:05:18.970 killing process with pid 3853309
00:05:18.970 20:46:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 3853309
00:05:18.970 20:46:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 3853309
00:05:19.537
00:05:19.537 real 0m3.357s
00:05:19.537 user 0m3.572s
00:05:19.537 sys 0m1.059s
00:05:19.537 20:46:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:19.537 20:46:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:19.537 ************************************
00:05:19.537 END TEST locking_app_on_unlocked_coremask
00:05:19.537 ************************************
00:05:19.537 20:46:10 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:05:19.537 20:46:10 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:19.537 20:46:10 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:19.537 20:46:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:19.537 ************************************
00:05:19.537 START TEST locking_app_on_locked_coremask
00:05:19.537 ************************************
00:05:19.537 20:46:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask
00:05:19.537 20:46:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3853736
00:05:19.537 20:46:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:19.537 20:46:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3853736 /var/tmp/spdk.sock
00:05:19.537 20:46:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3853736 ']'
00:05:19.537 20:46:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:19.537 20:46:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:19.537 20:46:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:19.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:19.537 20:46:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:19.537 20:46:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:19.537 [2024-11-26 20:46:10.338494] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization...
00:05:19.537 [2024-11-26 20:46:10.338591] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3853736 ]
00:05:19.537 [2024-11-26 20:46:10.403886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:19.537 [2024-11-26 20:46:10.462736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:20.103 20:46:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:20.103 20:46:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:20.103 20:46:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3853762
00:05:20.103 20:46:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:05:20.103 20:46:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3853762 /var/tmp/spdk2.sock
00:05:20.103 20:46:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0
00:05:20.103 20:46:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3853762 /var/tmp/spdk2.sock
00:05:20.103 20:46:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:05:20.103 20:46:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:20.103 20:46:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:05:20.104 20:46:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:20.104 20:46:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 3853762 /var/tmp/spdk2.sock
00:05:20.104 20:46:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3853762 ']'
00:05:20.104 20:46:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:20.104 20:46:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:20.104 20:46:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:20.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:20.104 20:46:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:20.104 20:46:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:20.104 [2024-11-26 20:46:10.809179] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization...
00:05:20.104 [2024-11-26 20:46:10.809272] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3853762 ]
00:05:20.104 [2024-11-26 20:46:10.927495] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3853736 has claimed it.
00:05:20.104 [2024-11-26 20:46:10.927560] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:05:20.667 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3853762) - No such process
00:05:20.667 ERROR: process (pid: 3853762) is no longer running
00:05:20.667 20:46:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:20.667 20:46:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1
00:05:20.667 20:46:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1
00:05:20.667 20:46:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:05:20.667 20:46:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:05:20.667 20:46:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:05:20.667 20:46:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3853736
00:05:20.667 20:46:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3853736
00:05:20.667 20:46:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:20.925 lslocks: write error
00:05:20.925 20:46:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3853736
00:05:20.925 20:46:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3853736 ']'
00:05:20.925 20:46:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3853736
00:05:20.925 20:46:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:05:20.925 20:46:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:21.183 20:46:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3853736
00:05:21.183 20:46:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:21.183 20:46:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:21.183 20:46:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3853736'
00:05:21.183 killing process with pid 3853736
00:05:21.183 20:46:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3853736
00:05:21.183 20:46:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3853736
00:05:21.441
00:05:21.441 real 0m2.055s
00:05:21.441 user 0m2.255s
00:05:21.441 sys 0m0.672s
00:05:21.441 20:46:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:21.441 20:46:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:21.441 ************************************
00:05:21.441 END TEST locking_app_on_locked_coremask
00:05:21.441 ************************************
00:05:21.441 20:46:12 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:05:21.441 20:46:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:21.441 20:46:12 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:21.441 20:46:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:21.699 ************************************
00:05:21.699 START TEST locking_overlapped_coremask
00:05:21.699 ************************************
00:05:21.699 20:46:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask
00:05:21.699 20:46:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3854034
00:05:21.699 20:46:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7
00:05:21.699 20:46:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3854034 /var/tmp/spdk.sock
00:05:21.699 20:46:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 3854034 ']'
00:05:21.699 20:46:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:21.699 20:46:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:21.699 20:46:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:21.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:21.700 20:46:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:21.700 20:46:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:21.700 [2024-11-26 20:46:12.444619] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization...
00:05:21.700 [2024-11-26 20:46:12.444727] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3854034 ]
00:05:21.700 [2024-11-26 20:46:12.516829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:21.700 [2024-11-26 20:46:12.580549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:21.700 [2024-11-26 20:46:12.580616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:05:21.700 [2024-11-26 20:46:12.580619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:21.958 20:46:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:21.958 20:46:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:21.958 20:46:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3854039
00:05:21.958 20:46:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:05:21.958 20:46:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3854039 /var/tmp/spdk2.sock
00:05:21.958 20:46:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0
00:05:21.958 20:46:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3854039 /var/tmp/spdk2.sock
00:05:21.958 20:46:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:05:21.958 20:46:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:21.958 20:46:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:05:21.958 20:46:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:21.958 20:46:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 3854039 /var/tmp/spdk2.sock
00:05:21.958 20:46:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 3854039 ']'
00:05:21.958 20:46:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:21.958 20:46:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:21.958 20:46:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:21.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:21.958 20:46:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:21.958 20:46:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:22.215 [2024-11-26 20:46:12.916680] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization...
00:05:22.215 [2024-11-26 20:46:12.916768] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3854039 ]
00:05:22.215 [2024-11-26 20:46:13.023274] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3854034 has claimed it.
00:05:22.215 [2024-11-26 20:46:13.023336] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:05:22.782 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3854039) - No such process
00:05:22.782 ERROR: process (pid: 3854039) is no longer running
00:05:22.782 20:46:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:22.782 20:46:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1
00:05:22.782 20:46:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1
00:05:22.782 20:46:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:05:22.782 20:46:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:05:22.782 20:46:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:05:22.782 20:46:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:05:22.782 20:46:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:05:22.782 20:46:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:05:22.782 20:46:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:05:22.782 20:46:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3854034
00:05:22.782 20:46:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 3854034 ']'
00:05:22.782 20:46:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 3854034
00:05:22.782 20:46:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname
00:05:22.782 20:46:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:22.782 20:46:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3854034
00:05:22.782 20:46:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:22.782 20:46:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:22.782 20:46:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3854034'
00:05:22.782 killing process with pid 3854034
00:05:22.782 20:46:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 3854034
00:05:22.782 20:46:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 3854034
00:05:23.412
00:05:23.412 real 0m1.733s
00:05:23.412 user 0m4.805s
00:05:23.412 sys 0m0.475s
00:05:23.412 20:46:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:23.412 20:46:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:23.412 ************************************
00:05:23.412 END TEST locking_overlapped_coremask
00:05:23.412 ************************************
00:05:23.412 20:46:14 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:05:23.412 20:46:14 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:23.412 20:46:14 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:23.412 20:46:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:23.412 ************************************
00:05:23.412 START TEST locking_overlapped_coremask_via_rpc
00:05:23.412 ************************************
00:05:23.412 20:46:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc
00:05:23.412 20:46:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3854294
00:05:23.412 20:46:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3854294 /var/tmp/spdk.sock
00:05:23.412 20:46:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:05:23.412 20:46:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3854294 ']'
00:05:23.412 20:46:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:23.412 20:46:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:23.412 20:46:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:23.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:23.412 20:46:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:23.412 20:46:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:23.412 [2024-11-26 20:46:14.224440] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization...
00:05:23.412 [2024-11-26 20:46:14.224529] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3854294 ]
00:05:23.412 [2024-11-26 20:46:14.290055] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:23.412 [2024-11-26 20:46:14.290101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:23.696 [2024-11-26 20:46:14.355174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:23.696 [2024-11-26 20:46:14.355239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:05:23.696 [2024-11-26 20:46:14.355243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:23.954 20:46:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:23.954 20:46:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:05:23.954 20:46:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3854342
00:05:23.954 20:46:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3854342 /var/tmp/spdk2.sock
00:05:23.954 20:46:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3854342 ']'
00:05:23.954 20:46:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:23.954 20:46:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:23.954 20:46:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks
00:05:23.954 20:46:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:23.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:23.954 20:46:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:23.954 20:46:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:23.954 [2024-11-26 20:46:14.695774] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization...
00:05:23.954 [2024-11-26 20:46:14.695857] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3854342 ]
00:05:23.954 [2024-11-26 20:46:14.799112] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:23.954 [2024-11-26 20:46:14.799148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:24.212 [2024-11-26 20:46:14.920305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:05:24.212 [2024-11-26 20:46:14.923747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:05:24.212 [2024-11-26 20:46:14.923749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:05:24.778 20:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:24.778 20:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:05:24.778 20:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks
00:05:24.778 20:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:24.778 20:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:24.778 20:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:24.778 20:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:05:24.778 20:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0
00:05:24.778 20:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:05:24.778 20:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:05:24.778 20:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:24.778 20:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:05:24.778 20:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:24.778 20:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:05:24.778 20:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:24.778 20:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:24.778 [2024-11-26 20:46:15.683781] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3854294 has claimed it.
00:05:24.778 request:
00:05:24.778 {
00:05:24.778 "method": "framework_enable_cpumask_locks",
00:05:24.778 "req_id": 1
00:05:24.778 }
00:05:24.778 Got JSON-RPC error response
00:05:24.778 response:
00:05:24.778 {
00:05:24.778 "code": -32603,
00:05:24.778 "message": "Failed to claim CPU core: 2"
00:05:24.778 }
00:05:24.778 20:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:05:24.778 20:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1
00:05:24.778 20:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:05:24.778 20:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:05:24.778 20:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:05:24.778 20:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3854294 /var/tmp/spdk.sock
00:05:24.778 20:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835
-- # '[' -z 3854294 ']' 00:05:24.778 20:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.778 20:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:24.778 20:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.778 20:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:24.778 20:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.036 20:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:25.036 20:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:25.036 20:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3854342 /var/tmp/spdk2.sock 00:05:25.036 20:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3854342 ']' 00:05:25.036 20:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:25.036 20:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:25.036 20:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:25.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:25.036 20:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:25.036 20:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.600 20:46:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:25.600 20:46:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:25.600 20:46:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:25.600 20:46:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:25.600 20:46:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:25.600 20:46:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:25.600 00:05:25.600 real 0m2.070s 00:05:25.600 user 0m1.127s 00:05:25.600 sys 0m0.180s 00:05:25.600 20:46:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:25.600 20:46:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.600 ************************************ 00:05:25.600 END TEST locking_overlapped_coremask_via_rpc 00:05:25.600 ************************************ 00:05:25.600 20:46:16 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:25.600 20:46:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3854294 ]] 00:05:25.600 20:46:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 3854294 00:05:25.600 20:46:16 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3854294 ']' 00:05:25.600 20:46:16 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3854294 00:05:25.600 20:46:16 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:25.600 20:46:16 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:25.600 20:46:16 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3854294 00:05:25.600 20:46:16 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:25.600 20:46:16 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:25.600 20:46:16 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3854294' 00:05:25.600 killing process with pid 3854294 00:05:25.600 20:46:16 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 3854294 00:05:25.600 20:46:16 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 3854294 00:05:25.858 20:46:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3854342 ]] 00:05:25.858 20:46:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3854342 00:05:25.858 20:46:16 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3854342 ']' 00:05:25.858 20:46:16 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3854342 00:05:25.858 20:46:16 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:25.858 20:46:16 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:25.858 20:46:16 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3854342 00:05:25.858 20:46:16 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:25.858 20:46:16 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:25.858 20:46:16 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
3854342' 00:05:25.858 killing process with pid 3854342 00:05:25.858 20:46:16 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 3854342 00:05:25.858 20:46:16 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 3854342 00:05:26.424 20:46:17 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:26.424 20:46:17 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:26.424 20:46:17 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3854294 ]] 00:05:26.424 20:46:17 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3854294 00:05:26.424 20:46:17 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3854294 ']' 00:05:26.424 20:46:17 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3854294 00:05:26.424 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3854294) - No such process 00:05:26.424 20:46:17 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 3854294 is not found' 00:05:26.424 Process with pid 3854294 is not found 00:05:26.424 20:46:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3854342 ]] 00:05:26.424 20:46:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3854342 00:05:26.424 20:46:17 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3854342 ']' 00:05:26.424 20:46:17 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3854342 00:05:26.424 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3854342) - No such process 00:05:26.424 20:46:17 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 3854342 is not found' 00:05:26.424 Process with pid 3854342 is not found 00:05:26.424 20:46:17 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:26.424 00:05:26.424 real 0m16.362s 00:05:26.424 user 0m29.316s 00:05:26.424 sys 0m5.450s 00:05:26.424 20:46:17 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.424 
20:46:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:26.424 ************************************ 00:05:26.424 END TEST cpu_locks 00:05:26.424 ************************************ 00:05:26.424 00:05:26.424 real 0m42.331s 00:05:26.424 user 1m21.594s 00:05:26.424 sys 0m9.527s 00:05:26.424 20:46:17 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.424 20:46:17 event -- common/autotest_common.sh@10 -- # set +x 00:05:26.424 ************************************ 00:05:26.424 END TEST event 00:05:26.424 ************************************ 00:05:26.424 20:46:17 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:26.424 20:46:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.424 20:46:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.424 20:46:17 -- common/autotest_common.sh@10 -- # set +x 00:05:26.424 ************************************ 00:05:26.424 START TEST thread 00:05:26.424 ************************************ 00:05:26.424 20:46:17 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:26.424 * Looking for test storage... 
00:05:26.424 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:26.424 20:46:17 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:26.424 20:46:17 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:05:26.424 20:46:17 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:26.683 20:46:17 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:26.683 20:46:17 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:26.683 20:46:17 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:26.683 20:46:17 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:26.683 20:46:17 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:26.683 20:46:17 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:26.683 20:46:17 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:26.683 20:46:17 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:26.683 20:46:17 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:26.683 20:46:17 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:26.683 20:46:17 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:26.683 20:46:17 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:26.683 20:46:17 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:26.683 20:46:17 thread -- scripts/common.sh@345 -- # : 1 00:05:26.683 20:46:17 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:26.683 20:46:17 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:26.683 20:46:17 thread -- scripts/common.sh@365 -- # decimal 1 00:05:26.683 20:46:17 thread -- scripts/common.sh@353 -- # local d=1 00:05:26.683 20:46:17 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:26.683 20:46:17 thread -- scripts/common.sh@355 -- # echo 1 00:05:26.683 20:46:17 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:26.683 20:46:17 thread -- scripts/common.sh@366 -- # decimal 2 00:05:26.683 20:46:17 thread -- scripts/common.sh@353 -- # local d=2 00:05:26.683 20:46:17 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:26.683 20:46:17 thread -- scripts/common.sh@355 -- # echo 2 00:05:26.683 20:46:17 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:26.683 20:46:17 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:26.683 20:46:17 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:26.683 20:46:17 thread -- scripts/common.sh@368 -- # return 0 00:05:26.683 20:46:17 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:26.683 20:46:17 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:26.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.683 --rc genhtml_branch_coverage=1 00:05:26.683 --rc genhtml_function_coverage=1 00:05:26.683 --rc genhtml_legend=1 00:05:26.683 --rc geninfo_all_blocks=1 00:05:26.683 --rc geninfo_unexecuted_blocks=1 00:05:26.683 00:05:26.683 ' 00:05:26.683 20:46:17 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:26.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.683 --rc genhtml_branch_coverage=1 00:05:26.683 --rc genhtml_function_coverage=1 00:05:26.683 --rc genhtml_legend=1 00:05:26.683 --rc geninfo_all_blocks=1 00:05:26.683 --rc geninfo_unexecuted_blocks=1 00:05:26.683 00:05:26.683 ' 00:05:26.683 20:46:17 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:26.683 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.683 --rc genhtml_branch_coverage=1 00:05:26.683 --rc genhtml_function_coverage=1 00:05:26.683 --rc genhtml_legend=1 00:05:26.683 --rc geninfo_all_blocks=1 00:05:26.683 --rc geninfo_unexecuted_blocks=1 00:05:26.683 00:05:26.683 ' 00:05:26.683 20:46:17 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:26.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.683 --rc genhtml_branch_coverage=1 00:05:26.683 --rc genhtml_function_coverage=1 00:05:26.683 --rc genhtml_legend=1 00:05:26.683 --rc geninfo_all_blocks=1 00:05:26.683 --rc geninfo_unexecuted_blocks=1 00:05:26.683 00:05:26.683 ' 00:05:26.683 20:46:17 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:26.683 20:46:17 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:26.684 20:46:17 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.684 20:46:17 thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.684 ************************************ 00:05:26.684 START TEST thread_poller_perf 00:05:26.684 ************************************ 00:05:26.684 20:46:17 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:26.684 [2024-11-26 20:46:17.457261] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:05:26.684 [2024-11-26 20:46:17.457331] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3854731 ] 00:05:26.684 [2024-11-26 20:46:17.531293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.684 [2024-11-26 20:46:17.592736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.684 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:28.057 [2024-11-26T19:46:18.995Z] ====================================== 00:05:28.057 [2024-11-26T19:46:18.995Z] busy:2716249895 (cyc) 00:05:28.057 [2024-11-26T19:46:18.995Z] total_run_count: 292000 00:05:28.057 [2024-11-26T19:46:18.995Z] tsc_hz: 2700000000 (cyc) 00:05:28.057 [2024-11-26T19:46:18.995Z] ====================================== 00:05:28.057 [2024-11-26T19:46:18.995Z] poller_cost: 9302 (cyc), 3445 (nsec) 00:05:28.057 00:05:28.057 real 0m1.227s 00:05:28.057 user 0m1.145s 00:05:28.057 sys 0m0.077s 00:05:28.057 20:46:18 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.057 20:46:18 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:28.057 ************************************ 00:05:28.057 END TEST thread_poller_perf 00:05:28.057 ************************************ 00:05:28.057 20:46:18 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:28.057 20:46:18 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:28.057 20:46:18 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.057 20:46:18 thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.057 ************************************ 00:05:28.057 START TEST thread_poller_perf 00:05:28.057 
************************************ 00:05:28.057 20:46:18 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:28.057 [2024-11-26 20:46:18.726417] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:05:28.057 [2024-11-26 20:46:18.726471] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3854993 ] 00:05:28.057 [2024-11-26 20:46:18.797370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.057 [2024-11-26 20:46:18.860834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.057 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:29.430 [2024-11-26T19:46:20.368Z] ====================================== 00:05:29.430 [2024-11-26T19:46:20.368Z] busy:2702667486 (cyc) 00:05:29.430 [2024-11-26T19:46:20.368Z] total_run_count: 3855000 00:05:29.430 [2024-11-26T19:46:20.368Z] tsc_hz: 2700000000 (cyc) 00:05:29.430 [2024-11-26T19:46:20.368Z] ====================================== 00:05:29.430 [2024-11-26T19:46:20.368Z] poller_cost: 701 (cyc), 259 (nsec) 00:05:29.430 00:05:29.430 real 0m1.217s 00:05:29.430 user 0m1.148s 00:05:29.430 sys 0m0.064s 00:05:29.430 20:46:19 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.430 20:46:19 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:29.430 ************************************ 00:05:29.430 END TEST thread_poller_perf 00:05:29.430 ************************************ 00:05:29.430 20:46:19 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:29.430 00:05:29.430 real 0m2.674s 00:05:29.430 user 0m2.418s 00:05:29.430 sys 0m0.260s 00:05:29.430 20:46:19 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.430 20:46:19 thread -- common/autotest_common.sh@10 -- # set +x 00:05:29.430 ************************************ 00:05:29.430 END TEST thread 00:05:29.430 ************************************ 00:05:29.430 20:46:19 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:29.430 20:46:19 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:29.430 20:46:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:29.430 20:46:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.430 20:46:19 -- common/autotest_common.sh@10 -- # set +x 00:05:29.430 ************************************ 00:05:29.430 START TEST app_cmdline 00:05:29.430 ************************************ 00:05:29.430 20:46:19 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:29.430 * Looking for test storage... 00:05:29.430 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:29.430 20:46:20 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:29.430 20:46:20 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:05:29.430 20:46:20 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:29.430 20:46:20 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:29.430 20:46:20 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:29.430 20:46:20 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:29.430 20:46:20 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:29.430 20:46:20 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:29.430 20:46:20 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:29.430 20:46:20 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:29.430 20:46:20 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:05:29.430 20:46:20 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:29.430 20:46:20 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:29.430 20:46:20 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:29.430 20:46:20 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:29.430 20:46:20 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:29.430 20:46:20 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:29.430 20:46:20 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:29.430 20:46:20 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:29.430 20:46:20 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:29.430 20:46:20 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:29.430 20:46:20 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:29.430 20:46:20 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:29.430 20:46:20 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:29.430 20:46:20 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:29.430 20:46:20 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:29.430 20:46:20 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:29.430 20:46:20 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:29.430 20:46:20 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:29.430 20:46:20 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:29.430 20:46:20 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:29.431 20:46:20 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:29.431 20:46:20 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:29.431 20:46:20 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:29.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.431 --rc genhtml_branch_coverage=1 
00:05:29.431 --rc genhtml_function_coverage=1 00:05:29.431 --rc genhtml_legend=1 00:05:29.431 --rc geninfo_all_blocks=1 00:05:29.431 --rc geninfo_unexecuted_blocks=1 00:05:29.431 00:05:29.431 ' 00:05:29.431 20:46:20 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:29.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.431 --rc genhtml_branch_coverage=1 00:05:29.431 --rc genhtml_function_coverage=1 00:05:29.431 --rc genhtml_legend=1 00:05:29.431 --rc geninfo_all_blocks=1 00:05:29.431 --rc geninfo_unexecuted_blocks=1 00:05:29.431 00:05:29.431 ' 00:05:29.431 20:46:20 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:29.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.431 --rc genhtml_branch_coverage=1 00:05:29.431 --rc genhtml_function_coverage=1 00:05:29.431 --rc genhtml_legend=1 00:05:29.431 --rc geninfo_all_blocks=1 00:05:29.431 --rc geninfo_unexecuted_blocks=1 00:05:29.431 00:05:29.431 ' 00:05:29.431 20:46:20 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:29.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.431 --rc genhtml_branch_coverage=1 00:05:29.431 --rc genhtml_function_coverage=1 00:05:29.431 --rc genhtml_legend=1 00:05:29.431 --rc geninfo_all_blocks=1 00:05:29.431 --rc geninfo_unexecuted_blocks=1 00:05:29.431 00:05:29.431 ' 00:05:29.431 20:46:20 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:29.431 20:46:20 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3855197 00:05:29.431 20:46:20 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:29.431 20:46:20 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3855197 00:05:29.431 20:46:20 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 3855197 ']' 00:05:29.431 20:46:20 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:05:29.431 20:46:20 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:29.431 20:46:20 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.431 20:46:20 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:29.431 20:46:20 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:29.431 [2024-11-26 20:46:20.193036] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:05:29.431 [2024-11-26 20:46:20.193131] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3855197 ] 00:05:29.431 [2024-11-26 20:46:20.260224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.431 [2024-11-26 20:46:20.318175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.689 20:46:20 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:29.689 20:46:20 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:29.689 20:46:20 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:29.947 { 00:05:29.947 "version": "SPDK v25.01-pre git sha1 e43b3b914", 00:05:29.947 "fields": { 00:05:29.947 "major": 25, 00:05:29.947 "minor": 1, 00:05:29.947 "patch": 0, 00:05:29.947 "suffix": "-pre", 00:05:29.947 "commit": "e43b3b914" 00:05:29.947 } 00:05:29.947 } 00:05:29.947 20:46:20 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:29.947 20:46:20 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:29.947 20:46:20 app_cmdline -- app/cmdline.sh@24 -- 
# expected_methods+=("spdk_get_version") 00:05:29.947 20:46:20 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:29.947 20:46:20 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:29.947 20:46:20 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:29.947 20:46:20 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.947 20:46:20 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:29.947 20:46:20 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:30.206 20:46:20 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.206 20:46:20 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:30.206 20:46:20 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:30.206 20:46:20 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:30.206 20:46:20 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:30.206 20:46:20 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:30.206 20:46:20 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:30.206 20:46:20 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:30.206 20:46:20 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:30.206 20:46:20 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:30.206 20:46:20 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:30.206 20:46:20 app_cmdline -- common/autotest_common.sh@644 -- # case 
"$(type -t "$arg")" in 00:05:30.206 20:46:20 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:30.206 20:46:20 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:30.206 20:46:20 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:30.465 request: 00:05:30.465 { 00:05:30.465 "method": "env_dpdk_get_mem_stats", 00:05:30.465 "req_id": 1 00:05:30.465 } 00:05:30.465 Got JSON-RPC error response 00:05:30.465 response: 00:05:30.465 { 00:05:30.465 "code": -32601, 00:05:30.465 "message": "Method not found" 00:05:30.465 } 00:05:30.465 20:46:21 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:30.465 20:46:21 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:30.465 20:46:21 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:30.465 20:46:21 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:30.465 20:46:21 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3855197 00:05:30.465 20:46:21 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 3855197 ']' 00:05:30.465 20:46:21 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 3855197 00:05:30.465 20:46:21 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:30.465 20:46:21 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:30.465 20:46:21 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3855197 00:05:30.465 20:46:21 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:30.465 20:46:21 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:30.465 20:46:21 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3855197' 00:05:30.465 killing process with pid 3855197 00:05:30.465 
20:46:21 app_cmdline -- common/autotest_common.sh@973 -- # kill 3855197 00:05:30.465 20:46:21 app_cmdline -- common/autotest_common.sh@978 -- # wait 3855197 00:05:31.032 00:05:31.032 real 0m1.687s 00:05:31.032 user 0m2.069s 00:05:31.032 sys 0m0.503s 00:05:31.032 20:46:21 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.032 20:46:21 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:31.032 ************************************ 00:05:31.032 END TEST app_cmdline 00:05:31.032 ************************************ 00:05:31.032 20:46:21 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:31.032 20:46:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.032 20:46:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.032 20:46:21 -- common/autotest_common.sh@10 -- # set +x 00:05:31.032 ************************************ 00:05:31.032 START TEST version 00:05:31.032 ************************************ 00:05:31.032 20:46:21 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:31.032 * Looking for test storage... 
00:05:31.032 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:31.032 20:46:21 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:31.032 20:46:21 version -- common/autotest_common.sh@1693 -- # lcov --version 00:05:31.032 20:46:21 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:31.032 20:46:21 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:31.032 20:46:21 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:31.032 20:46:21 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:31.032 20:46:21 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:31.032 20:46:21 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:31.032 20:46:21 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:31.032 20:46:21 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:31.032 20:46:21 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:31.032 20:46:21 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:31.032 20:46:21 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:31.032 20:46:21 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:31.032 20:46:21 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:31.032 20:46:21 version -- scripts/common.sh@344 -- # case "$op" in 00:05:31.032 20:46:21 version -- scripts/common.sh@345 -- # : 1 00:05:31.032 20:46:21 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:31.032 20:46:21 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:31.032 20:46:21 version -- scripts/common.sh@365 -- # decimal 1 00:05:31.032 20:46:21 version -- scripts/common.sh@353 -- # local d=1 00:05:31.032 20:46:21 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:31.032 20:46:21 version -- scripts/common.sh@355 -- # echo 1 00:05:31.032 20:46:21 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:31.032 20:46:21 version -- scripts/common.sh@366 -- # decimal 2 00:05:31.032 20:46:21 version -- scripts/common.sh@353 -- # local d=2 00:05:31.032 20:46:21 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:31.032 20:46:21 version -- scripts/common.sh@355 -- # echo 2 00:05:31.032 20:46:21 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:31.032 20:46:21 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:31.032 20:46:21 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:31.032 20:46:21 version -- scripts/common.sh@368 -- # return 0 00:05:31.032 20:46:21 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:31.032 20:46:21 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:31.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.032 --rc genhtml_branch_coverage=1 00:05:31.032 --rc genhtml_function_coverage=1 00:05:31.032 --rc genhtml_legend=1 00:05:31.032 --rc geninfo_all_blocks=1 00:05:31.032 --rc geninfo_unexecuted_blocks=1 00:05:31.032 00:05:31.032 ' 00:05:31.032 20:46:21 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:31.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.032 --rc genhtml_branch_coverage=1 00:05:31.032 --rc genhtml_function_coverage=1 00:05:31.032 --rc genhtml_legend=1 00:05:31.032 --rc geninfo_all_blocks=1 00:05:31.032 --rc geninfo_unexecuted_blocks=1 00:05:31.032 00:05:31.032 ' 00:05:31.032 20:46:21 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:31.032 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.032 --rc genhtml_branch_coverage=1 00:05:31.032 --rc genhtml_function_coverage=1 00:05:31.032 --rc genhtml_legend=1 00:05:31.032 --rc geninfo_all_blocks=1 00:05:31.032 --rc geninfo_unexecuted_blocks=1 00:05:31.032 00:05:31.032 ' 00:05:31.032 20:46:21 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:31.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.032 --rc genhtml_branch_coverage=1 00:05:31.032 --rc genhtml_function_coverage=1 00:05:31.032 --rc genhtml_legend=1 00:05:31.032 --rc geninfo_all_blocks=1 00:05:31.032 --rc geninfo_unexecuted_blocks=1 00:05:31.032 00:05:31.032 ' 00:05:31.032 20:46:21 version -- app/version.sh@17 -- # get_header_version major 00:05:31.032 20:46:21 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:31.032 20:46:21 version -- app/version.sh@14 -- # cut -f2 00:05:31.032 20:46:21 version -- app/version.sh@14 -- # tr -d '"' 00:05:31.032 20:46:21 version -- app/version.sh@17 -- # major=25 00:05:31.032 20:46:21 version -- app/version.sh@18 -- # get_header_version minor 00:05:31.032 20:46:21 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:31.032 20:46:21 version -- app/version.sh@14 -- # cut -f2 00:05:31.032 20:46:21 version -- app/version.sh@14 -- # tr -d '"' 00:05:31.032 20:46:21 version -- app/version.sh@18 -- # minor=1 00:05:31.032 20:46:21 version -- app/version.sh@19 -- # get_header_version patch 00:05:31.033 20:46:21 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:31.033 20:46:21 version -- app/version.sh@14 -- # cut -f2 00:05:31.033 20:46:21 version -- app/version.sh@14 -- # tr -d '"' 00:05:31.033 
20:46:21 version -- app/version.sh@19 -- # patch=0 00:05:31.033 20:46:21 version -- app/version.sh@20 -- # get_header_version suffix 00:05:31.033 20:46:21 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:31.033 20:46:21 version -- app/version.sh@14 -- # cut -f2 00:05:31.033 20:46:21 version -- app/version.sh@14 -- # tr -d '"' 00:05:31.033 20:46:21 version -- app/version.sh@20 -- # suffix=-pre 00:05:31.033 20:46:21 version -- app/version.sh@22 -- # version=25.1 00:05:31.033 20:46:21 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:31.033 20:46:21 version -- app/version.sh@28 -- # version=25.1rc0 00:05:31.033 20:46:21 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:31.033 20:46:21 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:31.033 20:46:21 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:31.033 20:46:21 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:31.033 00:05:31.033 real 0m0.198s 00:05:31.033 user 0m0.131s 00:05:31.033 sys 0m0.092s 00:05:31.033 20:46:21 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.033 20:46:21 version -- common/autotest_common.sh@10 -- # set +x 00:05:31.033 ************************************ 00:05:31.033 END TEST version 00:05:31.033 ************************************ 00:05:31.033 20:46:21 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:31.033 20:46:21 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:31.033 20:46:21 -- spdk/autotest.sh@194 -- # uname -s 00:05:31.033 20:46:21 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:05:31.033 20:46:21 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:31.033 20:46:21 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:31.033 20:46:21 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:31.033 20:46:21 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:05:31.033 20:46:21 -- spdk/autotest.sh@260 -- # timing_exit lib 00:05:31.033 20:46:21 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:31.033 20:46:21 -- common/autotest_common.sh@10 -- # set +x 00:05:31.291 20:46:21 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:05:31.291 20:46:21 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:05:31.291 20:46:21 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:05:31.291 20:46:21 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:05:31.291 20:46:21 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:05:31.291 20:46:21 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:05:31.291 20:46:21 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:31.291 20:46:21 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:31.291 20:46:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.291 20:46:21 -- common/autotest_common.sh@10 -- # set +x 00:05:31.291 ************************************ 00:05:31.291 START TEST nvmf_tcp 00:05:31.291 ************************************ 00:05:31.291 20:46:22 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:31.291 * Looking for test storage... 
00:05:31.291 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:31.291 20:46:22 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:31.291 20:46:22 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:31.291 20:46:22 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:31.291 20:46:22 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:31.291 20:46:22 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:31.291 20:46:22 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:31.291 20:46:22 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:31.291 20:46:22 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:31.291 20:46:22 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:31.291 20:46:22 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:31.291 20:46:22 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:31.291 20:46:22 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:31.291 20:46:22 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:31.291 20:46:22 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:31.291 20:46:22 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:31.291 20:46:22 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:31.291 20:46:22 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:05:31.291 20:46:22 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:31.291 20:46:22 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:31.291 20:46:22 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:31.291 20:46:22 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:05:31.291 20:46:22 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:31.291 20:46:22 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:05:31.291 20:46:22 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:31.291 20:46:22 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:31.291 20:46:22 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:05:31.291 20:46:22 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:31.291 20:46:22 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:05:31.291 20:46:22 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:31.291 20:46:22 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:31.291 20:46:22 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:31.291 20:46:22 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:05:31.291 20:46:22 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:31.291 20:46:22 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:31.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.291 --rc genhtml_branch_coverage=1 00:05:31.291 --rc genhtml_function_coverage=1 00:05:31.291 --rc genhtml_legend=1 00:05:31.291 --rc geninfo_all_blocks=1 00:05:31.291 --rc geninfo_unexecuted_blocks=1 00:05:31.291 00:05:31.291 ' 00:05:31.291 20:46:22 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:31.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.291 --rc genhtml_branch_coverage=1 00:05:31.291 --rc genhtml_function_coverage=1 00:05:31.291 --rc genhtml_legend=1 00:05:31.291 --rc geninfo_all_blocks=1 00:05:31.291 --rc geninfo_unexecuted_blocks=1 00:05:31.292 00:05:31.292 ' 00:05:31.292 20:46:22 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:05:31.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.292 --rc genhtml_branch_coverage=1 00:05:31.292 --rc genhtml_function_coverage=1 00:05:31.292 --rc genhtml_legend=1 00:05:31.292 --rc geninfo_all_blocks=1 00:05:31.292 --rc geninfo_unexecuted_blocks=1 00:05:31.292 00:05:31.292 ' 00:05:31.292 20:46:22 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:31.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.292 --rc genhtml_branch_coverage=1 00:05:31.292 --rc genhtml_function_coverage=1 00:05:31.292 --rc genhtml_legend=1 00:05:31.292 --rc geninfo_all_blocks=1 00:05:31.292 --rc geninfo_unexecuted_blocks=1 00:05:31.292 00:05:31.292 ' 00:05:31.292 20:46:22 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:31.292 20:46:22 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:31.292 20:46:22 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:31.292 20:46:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:31.292 20:46:22 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.292 20:46:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:31.292 ************************************ 00:05:31.292 START TEST nvmf_target_core 00:05:31.292 ************************************ 00:05:31.292 20:46:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:31.550 * Looking for test storage... 
00:05:31.550 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:31.550 20:46:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:31.550 20:46:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:05:31.550 20:46:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:31.550 20:46:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:31.550 20:46:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:31.550 20:46:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:31.550 20:46:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:31.550 20:46:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:31.550 20:46:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:31.550 20:46:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:31.550 20:46:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:31.550 20:46:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:31.550 20:46:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:31.550 20:46:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:31.550 20:46:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:31.550 20:46:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:31.550 20:46:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:31.550 20:46:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:31.550 20:46:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:31.550 20:46:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:31.550 20:46:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:31.550 20:46:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:31.550 20:46:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:31.550 20:46:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:31.550 20:46:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:31.550 20:46:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:31.550 20:46:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:31.550 20:46:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:31.550 20:46:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:31.550 20:46:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:31.550 20:46:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:31.550 20:46:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:31.550 20:46:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:31.550 20:46:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:31.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.551 --rc genhtml_branch_coverage=1 00:05:31.551 --rc genhtml_function_coverage=1 00:05:31.551 --rc genhtml_legend=1 00:05:31.551 --rc geninfo_all_blocks=1 00:05:31.551 --rc geninfo_unexecuted_blocks=1 00:05:31.551 00:05:31.551 ' 00:05:31.551 20:46:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:31.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.551 --rc genhtml_branch_coverage=1 
00:05:31.551 --rc genhtml_function_coverage=1 00:05:31.551 --rc genhtml_legend=1 00:05:31.551 --rc geninfo_all_blocks=1 00:05:31.551 --rc geninfo_unexecuted_blocks=1 00:05:31.551 00:05:31.551 ' 00:05:31.551 20:46:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:31.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.551 --rc genhtml_branch_coverage=1 00:05:31.551 --rc genhtml_function_coverage=1 00:05:31.551 --rc genhtml_legend=1 00:05:31.551 --rc geninfo_all_blocks=1 00:05:31.551 --rc geninfo_unexecuted_blocks=1 00:05:31.551 00:05:31.551 ' 00:05:31.551 20:46:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:31.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.551 --rc genhtml_branch_coverage=1 00:05:31.551 --rc genhtml_function_coverage=1 00:05:31.551 --rc genhtml_legend=1 00:05:31.551 --rc geninfo_all_blocks=1 00:05:31.551 --rc geninfo_unexecuted_blocks=1 00:05:31.551 00:05:31.551 ' 00:05:31.551 20:46:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:31.551 20:46:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:31.551 20:46:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:31.551 20:46:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:31.551 20:46:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:31.551 20:46:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:31.551 20:46:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:31.551 20:46:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:31.551 20:46:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:31.551 20:46:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:31.551 20:46:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:31.551 20:46:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:31.551 20:46:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:31.551 20:46:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:31.551 20:46:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:31.551 20:46:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:31.551 20:46:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:31.551 20:46:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:31.551 20:46:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:31.551 20:46:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:31.551 20:46:22 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:31.551 20:46:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:31.551 20:46:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:31.551 20:46:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:31.551 20:46:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:31.551 20:46:22 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.551 20:46:22 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.551 20:46:22 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.551 20:46:22 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:31.551 20:46:22 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.551 20:46:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:31.551 20:46:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:31.551 20:46:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:31.551 20:46:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:31.551 20:46:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:31.551 20:46:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:31.551 20:46:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:31.551 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:31.551 20:46:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:05:31.551 20:46:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:31.551 20:46:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:31.551 20:46:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:31.551 20:46:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:31.551 20:46:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:31.551 20:46:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:31.551 20:46:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:31.551 20:46:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.551 20:46:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:31.551 ************************************ 00:05:31.551 START TEST nvmf_abort 00:05:31.551 ************************************ 00:05:31.551 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:31.551 * Looking for test storage... 
00:05:31.551 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:31.551 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:31.551 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:05:31.551 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:31.810 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:31.810 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:31.810 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:31.810 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:31.810 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:31.810 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:31.810 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:31.810 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:31.810 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:31.810 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:31.810 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:31.810 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:31.810 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:31.810 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:31.810 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:31.810 
20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:31.810 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:31.810 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:31.810 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:31.810 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:31.810 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:31.810 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:31.810 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:31.810 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:31.810 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:31.810 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:31.810 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:31.810 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:31.810 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:31.810 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:31.810 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:31.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.810 --rc genhtml_branch_coverage=1 00:05:31.810 --rc genhtml_function_coverage=1 00:05:31.810 --rc genhtml_legend=1 00:05:31.810 --rc geninfo_all_blocks=1 00:05:31.810 --rc 
geninfo_unexecuted_blocks=1 00:05:31.810 00:05:31.810 ' 00:05:31.810 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:31.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.810 --rc genhtml_branch_coverage=1 00:05:31.810 --rc genhtml_function_coverage=1 00:05:31.810 --rc genhtml_legend=1 00:05:31.810 --rc geninfo_all_blocks=1 00:05:31.810 --rc geninfo_unexecuted_blocks=1 00:05:31.810 00:05:31.810 ' 00:05:31.810 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:31.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.810 --rc genhtml_branch_coverage=1 00:05:31.810 --rc genhtml_function_coverage=1 00:05:31.810 --rc genhtml_legend=1 00:05:31.810 --rc geninfo_all_blocks=1 00:05:31.810 --rc geninfo_unexecuted_blocks=1 00:05:31.810 00:05:31.810 ' 00:05:31.810 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:31.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.810 --rc genhtml_branch_coverage=1 00:05:31.810 --rc genhtml_function_coverage=1 00:05:31.810 --rc genhtml_legend=1 00:05:31.810 --rc geninfo_all_blocks=1 00:05:31.810 --rc geninfo_unexecuted_blocks=1 00:05:31.810 00:05:31.810 ' 00:05:31.810 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:31.810 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:31.810 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:31.810 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:31.810 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:31.810 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:05:31.810 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:31.810 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:31.810 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:31.810 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:31.810 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:31.810 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:31.810 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:31.810 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:31.810 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:31.810 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:31.810 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:31.810 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:31.810 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:31.810 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:31.810 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:31.810 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:31.810 20:46:22 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:31.810 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.810 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.810 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.811 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:31.811 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.811 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:31.811 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:31.811 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:31.811 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:31.811 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:31.811 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:31.811 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:31.811 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:31.811 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:31.811 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:31.811 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:31.811 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:31.811 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:31.811 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:05:31.811 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:31.811 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:31.811 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:31.811 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:31.811 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:31.811 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:31.811 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:31.811 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:31.811 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:31.811 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:05:31.811 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:05:31.811 20:46:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:33.716 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:33.716 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:33.716 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:33.716 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:33.716 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:33.716 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:33.716 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:33.716 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:33.716 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:33.716 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:33.716 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:33.716 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:33.716 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:33.716 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:33.716 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:33.716 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:33.716 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:33.716 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:33.716 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:33.716 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:33.716 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:33.716 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:33.716 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:33.716 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:33.716 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:33.716 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:33.716 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:33.716 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:33.716 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:33.716 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:33.716 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:33.716 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:33.716 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:33.716 20:46:24 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:33.716 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:05:33.716 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:05:33.716 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:33.716 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:33.716 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:33.716 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:33.716 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:33.716 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:33.716 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:05:33.716 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:05:33.716 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:33.716 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:33.716 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:33.716 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:33.716 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:33.716 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:33.716 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:33.716 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:33.716 20:46:24 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:33.716 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:33.716 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:33.716 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:33.716 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:33.717 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:33.717 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:33.717 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:05:33.717 Found net devices under 0000:0a:00.0: cvl_0_0 00:05:33.717 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:33.717 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:33.717 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:33.717 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:33.717 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:33.717 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:33.717 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:33.717 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:33.717 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:0a:00.1: cvl_0_1' 00:05:33.717 Found net devices under 0000:0a:00.1: cvl_0_1 00:05:33.717 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:33.717 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:33.717 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:05:33.717 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:33.717 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:33.717 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:33.717 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:33.717 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:33.717 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:33.717 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:33.717 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:33.717 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:33.717 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:33.717 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:33.717 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:33.717 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:33.717 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:05:33.717 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:33.717 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:33.717 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:33.717 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:33.975 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:33.975 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:33.975 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:33.975 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:33.975 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:33.975 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:33.975 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:33.975 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:33.975 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:05:33.975 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:05:33.975 00:05:33.975 --- 10.0.0.2 ping statistics --- 00:05:33.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:33.975 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:05:33.975 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:33.975 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:33.975 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:05:33.975 00:05:33.975 --- 10.0.0.1 ping statistics --- 00:05:33.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:33.975 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:05:33.975 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:33.975 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:05:33.975 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:33.975 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:33.975 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:33.975 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:33.975 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:33.976 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:33.976 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:33.976 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:33.976 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:33.976 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@726 -- # xtrace_disable 00:05:33.976 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:33.976 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=3857285 00:05:33.976 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:33.976 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3857285 00:05:33.976 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 3857285 ']' 00:05:33.976 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.976 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:33.976 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.976 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:33.976 20:46:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:33.976 [2024-11-26 20:46:24.829355] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:05:33.976 [2024-11-26 20:46:24.829447] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:33.976 [2024-11-26 20:46:24.902657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:34.234 [2024-11-26 20:46:24.964204] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:34.234 [2024-11-26 20:46:24.964285] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:34.234 [2024-11-26 20:46:24.964299] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:34.234 [2024-11-26 20:46:24.964310] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:34.234 [2024-11-26 20:46:24.964318] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:05:34.234 [2024-11-26 20:46:24.965873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:34.234 [2024-11-26 20:46:24.965929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:34.234 [2024-11-26 20:46:24.965933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.234 20:46:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:34.234 20:46:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:05:34.234 20:46:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:34.234 20:46:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:34.234 20:46:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:34.234 20:46:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:34.234 20:46:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:34.234 20:46:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.234 20:46:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:34.234 [2024-11-26 20:46:25.123821] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:34.234 20:46:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.234 20:46:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:34.234 20:46:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.235 20:46:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:34.235 Malloc0 00:05:34.235 20:46:25 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.235 20:46:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:34.235 20:46:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.235 20:46:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:34.492 Delay0 00:05:34.492 20:46:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.492 20:46:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:34.492 20:46:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.492 20:46:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:34.492 20:46:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.492 20:46:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:34.492 20:46:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.492 20:46:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:34.492 20:46:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.492 20:46:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:34.493 20:46:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.493 20:46:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:34.493 [2024-11-26 20:46:25.197210] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:34.493 20:46:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.493 20:46:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:34.493 20:46:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.493 20:46:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:34.493 20:46:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.493 20:46:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:34.493 [2024-11-26 20:46:25.352823] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:37.024 Initializing NVMe Controllers 00:05:37.024 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:37.024 controller IO queue size 128 less than required 00:05:37.024 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:37.024 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:37.024 Initialization complete. Launching workers. 
00:05:37.024 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28649 00:05:37.024 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28710, failed to submit 62 00:05:37.024 success 28653, unsuccessful 57, failed 0 00:05:37.024 20:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:37.024 20:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.024 20:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:37.024 20:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.024 20:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:37.024 20:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:37.024 20:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:37.024 20:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:37.024 20:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:37.024 20:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:37.024 20:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:37.024 20:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:37.024 rmmod nvme_tcp 00:05:37.024 rmmod nvme_fabrics 00:05:37.024 rmmod nvme_keyring 00:05:37.024 20:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:37.024 20:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:37.024 20:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:37.025 20:46:27 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3857285 ']' 00:05:37.025 20:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3857285 00:05:37.025 20:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 3857285 ']' 00:05:37.025 20:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 3857285 00:05:37.025 20:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:05:37.025 20:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:37.025 20:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3857285 00:05:37.025 20:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:37.025 20:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:37.025 20:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3857285' 00:05:37.025 killing process with pid 3857285 00:05:37.025 20:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 3857285 00:05:37.025 20:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 3857285 00:05:37.025 20:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:37.025 20:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:37.025 20:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:37.025 20:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:05:37.025 20:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:05:37.025 20:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:05:37.025 20:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:05:37.025 20:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:37.025 20:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:37.025 20:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:37.025 20:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:37.025 20:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:39.560 20:46:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:39.560 00:05:39.560 real 0m7.521s 00:05:39.560 user 0m11.011s 00:05:39.560 sys 0m2.599s 00:05:39.560 20:46:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.560 20:46:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:39.560 ************************************ 00:05:39.560 END TEST nvmf_abort 00:05:39.560 ************************************ 00:05:39.560 20:46:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:39.560 20:46:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:39.560 20:46:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.560 20:46:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:39.560 ************************************ 00:05:39.560 START TEST nvmf_ns_hotplug_stress 00:05:39.560 ************************************ 00:05:39.560 20:46:29 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:39.560 * Looking for test storage... 00:05:39.560 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:39.560 20:46:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:39.560 20:46:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:05:39.560 20:46:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:39.560 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:39.560 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:39.560 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:39.560 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:39.560 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:39.560 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:39.560 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:39.560 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:39.560 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:39.560 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:39.560 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:39.560 
20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:39.560 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:05:39.560 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:39.560 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:39.561 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:39.561 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:39.561 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:39.561 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:39.561 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:39.561 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:39.561 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:39.561 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:39.561 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:39.561 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:39.561 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:39.561 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:39.561 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:39.561 20:46:30 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:39.561 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:39.561 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:39.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.561 --rc genhtml_branch_coverage=1 00:05:39.561 --rc genhtml_function_coverage=1 00:05:39.561 --rc genhtml_legend=1 00:05:39.561 --rc geninfo_all_blocks=1 00:05:39.561 --rc geninfo_unexecuted_blocks=1 00:05:39.561 00:05:39.561 ' 00:05:39.561 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:39.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.561 --rc genhtml_branch_coverage=1 00:05:39.561 --rc genhtml_function_coverage=1 00:05:39.561 --rc genhtml_legend=1 00:05:39.561 --rc geninfo_all_blocks=1 00:05:39.561 --rc geninfo_unexecuted_blocks=1 00:05:39.561 00:05:39.561 ' 00:05:39.561 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:39.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.561 --rc genhtml_branch_coverage=1 00:05:39.561 --rc genhtml_function_coverage=1 00:05:39.561 --rc genhtml_legend=1 00:05:39.561 --rc geninfo_all_blocks=1 00:05:39.561 --rc geninfo_unexecuted_blocks=1 00:05:39.561 00:05:39.561 ' 00:05:39.561 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:39.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.561 --rc genhtml_branch_coverage=1 00:05:39.561 --rc genhtml_function_coverage=1 00:05:39.561 --rc genhtml_legend=1 00:05:39.561 --rc geninfo_all_blocks=1 00:05:39.561 --rc geninfo_unexecuted_blocks=1 00:05:39.561 
00:05:39.561 ' 00:05:39.561 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:39.561 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:39.561 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:39.561 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:39.561 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:39.561 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:39.561 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:39.561 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:39.561 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:39.561 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:39.561 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:39.561 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:39.561 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:39.561 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:39.561 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:05:39.561 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:39.561 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:39.561 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:39.561 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:39.561 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:39.561 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:39.561 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:39.561 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:39.561 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.561 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.561 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.561 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:39.561 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.561 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:39.561 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:39.561 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:39.561 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:39.561 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:39.561 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:39.561 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:39.561 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:39.561 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:39.561 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:39.561 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:39.561 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:39.561 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:39.561 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:39.561 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:39.561 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:39.561 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:39.561 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:39.561 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:39.561 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:39.561 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:39.561 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:39.562 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:39.562 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:39.562 20:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:05:41.463 20:46:32 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:05:41.463 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:05:41.463 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:41.463 20:46:32 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:05:41.463 Found net devices under 0000:0a:00.0: cvl_0_0 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:41.463 20:46:32 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:05:41.463 Found net devices under 0000:0a:00.1: cvl_0_1 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:41.463 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:41.464 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:41.464 20:46:32 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:41.464 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:41.464 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:41.464 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:05:41.464 00:05:41.464 --- 10.0.0.2 ping statistics --- 00:05:41.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:41.464 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:05:41.464 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:41.464 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:41.464 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:05:41.464 00:05:41.464 --- 10.0.0.1 ping statistics --- 00:05:41.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:41.464 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:05:41.464 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:41.464 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:05:41.464 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:41.464 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:41.464 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:41.464 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:41.464 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:05:41.464 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:41.464 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:41.464 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:41.464 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:41.464 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:41.464 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:41.464 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3859647 00:05:41.464 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:41.464 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3859647 00:05:41.464 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 3859647 ']' 00:05:41.464 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.464 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:41.464 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:41.464 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:41.464 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:41.464 [2024-11-26 20:46:32.347529] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:05:41.464 [2024-11-26 20:46:32.347622] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:41.721 [2024-11-26 20:46:32.420560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:41.721 [2024-11-26 20:46:32.479495] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:41.721 [2024-11-26 20:46:32.479552] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:41.721 [2024-11-26 20:46:32.479565] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:41.722 [2024-11-26 20:46:32.479575] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:41.722 [2024-11-26 20:46:32.479585] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:05:41.722 [2024-11-26 20:46:32.481105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:41.722 [2024-11-26 20:46:32.481149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:41.722 [2024-11-26 20:46:32.481153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:41.722 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:41.722 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:05:41.722 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:41.722 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:41.722 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:41.722 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:41.722 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:41.722 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:41.979 [2024-11-26 20:46:32.871281] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:41.979 20:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:42.236 20:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:42.493 [2024-11-26 20:46:33.409950] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:42.750 20:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:43.007 20:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:43.264 Malloc0 00:05:43.264 20:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:43.521 Delay0 00:05:43.521 20:46:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:43.779 20:46:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:44.036 NULL1 00:05:44.036 20:46:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:44.293 20:46:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3859954 00:05:44.293 20:46:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:44.293 20:46:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3859954 00:05:44.293 20:46:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:45.664 Read completed with error (sct=0, sc=11) 00:05:45.664 20:46:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:45.664 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:45.664 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:45.664 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:45.664 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:45.664 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:45.664 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:45.921 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:45.921 20:46:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:45.921 20:46:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:46.178 true 00:05:46.178 20:46:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3859954 00:05:46.178 20:46:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:05:46.744 20:46:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:47.002 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:47.002 20:46:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:47.002 20:46:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:47.259 true 00:05:47.517 20:46:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3859954 00:05:47.517 20:46:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.774 20:46:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:48.031 20:46:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:48.031 20:46:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:48.289 true 00:05:48.289 20:46:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3859954 00:05:48.289 20:46:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.853 20:46:39 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:49.111 20:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:49.111 20:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:49.369 true 00:05:49.369 20:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3859954 00:05:49.369 20:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.932 20:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:50.190 20:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:50.190 20:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:50.454 true 00:05:50.454 20:46:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3859954 00:05:50.454 20:46:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:50.772 20:46:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:51.060 20:46:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:51.060 20:46:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:51.060 true 00:05:51.060 20:46:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3859954 00:05:51.060 20:46:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.993 20:46:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:52.250 20:46:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:52.250 20:46:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:05:52.508 true 00:05:52.508 20:46:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3859954 00:05:52.508 20:46:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:52.766 20:46:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:53.024 
20:46:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:53.024 20:46:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:53.282 true 00:05:53.282 20:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3859954 00:05:53.282 20:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:53.540 20:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:54.105 20:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:05:54.105 20:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:54.105 true 00:05:54.105 20:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3859954 00:05:54.105 20:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:55.037 20:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:55.038 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:55.295 20:46:46 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:05:55.295 20:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:05:55.553 true 00:05:55.553 20:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3859954 00:05:55.553 20:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:56.118 20:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:56.118 20:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:05:56.118 20:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:05:56.376 true 00:05:56.376 20:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3859954 00:05:56.376 20:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:57.308 20:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:57.308 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:57.308 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:05:57.566 20:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:05:57.566 20:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:05:57.824 true 00:05:57.824 20:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3859954 00:05:57.824 20:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:58.082 20:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:58.340 20:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:05:58.340 20:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:05:58.598 true 00:05:58.598 20:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3859954 00:05:58.598 20:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:58.855 20:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:59.421 20:46:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1014 00:05:59.421 20:46:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:05:59.421 true 00:05:59.421 20:46:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3859954 00:05:59.421 20:46:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:00.354 20:46:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:00.612 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:00.870 20:46:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:00.870 20:46:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:01.127 true 00:06:01.127 20:46:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3859954 00:06:01.127 20:46:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:01.385 20:46:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:01.642 20:46:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:01.642 20:46:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:01.900 true 00:06:01.900 20:46:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3859954 00:06:01.900 20:46:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:02.158 20:46:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:02.415 20:46:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:02.415 20:46:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:02.672 true 00:06:02.673 20:46:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3859954 00:06:02.673 20:46:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:03.605 20:46:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:03.862 20:46:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:03.862 20:46:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:04.120 true 00:06:04.120 20:46:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3859954 00:06:04.120 20:46:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:04.377 20:46:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:04.635 20:46:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:04.635 20:46:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:04.892 true 00:06:04.892 20:46:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3859954 00:06:04.892 20:46:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:05.150 20:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:05.483 20:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:05.483 20:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:05.740 true 00:06:05.740 20:46:56 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3859954 00:06:05.740 20:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:06.672 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:06.672 20:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:06.930 20:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:06.930 20:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:07.187 true 00:06:07.187 20:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3859954 00:06:07.187 20:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:07.445 20:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:07.703 20:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:07.703 20:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:07.960 true 00:06:07.960 20:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 3859954 00:06:07.960 20:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:08.892 20:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:09.150 20:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:09.150 20:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:09.407 true 00:06:09.407 20:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3859954 00:06:09.407 20:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:09.665 20:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:09.922 20:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:09.922 20:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:10.181 true 00:06:10.181 20:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3859954 00:06:10.181 20:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:10.438 20:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:10.695 20:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:10.695 20:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:10.953 true 00:06:10.953 20:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3859954 00:06:10.953 20:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:11.886 20:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:12.143 20:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:12.143 20:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:12.401 true 00:06:12.401 20:47:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3859954 00:06:12.401 20:47:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:06:12.658 20:47:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:12.916 20:47:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:12.916 20:47:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:13.173 true 00:06:13.173 20:47:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3859954 00:06:13.173 20:47:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:13.431 20:47:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:13.688 20:47:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:06:13.688 20:47:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:06:13.947 true 00:06:13.947 20:47:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3859954 00:06:13.947 20:47:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:14.879 Initializing NVMe Controllers 00:06:14.879 Attached to NVMe over Fabrics controller at 
10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:14.879 Controller IO queue size 128, less than required. 00:06:14.879 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:14.879 Controller IO queue size 128, less than required. 00:06:14.879 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:14.879 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:14.879 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:06:14.879 Initialization complete. Launching workers. 00:06:14.879 ======================================================== 00:06:14.879 Latency(us) 00:06:14.879 Device Information : IOPS MiB/s Average min max 00:06:14.879 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 515.20 0.25 109606.27 3489.89 1033314.48 00:06:14.879 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 8857.40 4.32 14451.25 1714.00 449248.00 00:06:14.879 ======================================================== 00:06:14.879 Total : 9372.60 4.58 19681.80 1714.00 1033314.48 00:06:14.879 00:06:14.879 20:47:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:15.137 20:47:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:06:15.137 20:47:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:06:15.395 true 00:06:15.395 20:47:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3859954 00:06:15.395 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3859954) - No such process 00:06:15.395 20:47:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3859954 00:06:15.395 20:47:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:15.653 20:47:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:15.911 20:47:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:06:15.911 20:47:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:06:15.911 20:47:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:06:15.911 20:47:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:15.911 20:47:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:06:16.170 null0 00:06:16.170 20:47:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:16.170 20:47:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:16.170 20:47:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:06:16.429 null1 00:06:16.686 20:47:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:16.686 20:47:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:16.686 20:47:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:06:16.686 null2 00:06:16.943 20:47:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:16.943 20:47:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:16.943 20:47:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:17.201 null3 00:06:17.201 20:47:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:17.201 20:47:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:17.201 20:47:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:17.459 null4 00:06:17.459 20:47:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:17.459 20:47:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:17.459 20:47:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:17.716 null5 00:06:17.716 20:47:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:17.716 20:47:08 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:17.716 20:47:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:17.974 null6 00:06:17.974 20:47:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:17.974 20:47:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:17.974 20:47:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:18.233 null7 00:06:18.233 20:47:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:18.233 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:18.233 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:18.233 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:18.233 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
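The trace above creates eight 100 MiB null bdevs (null0 through null7) with repeated `bdev_null_create` RPC calls before the stress threads start. A minimal dry-run sketch of that setup loop (the `rpc.py` path and invocation are echoed rather than executed, since this is only an illustration of the pattern in the log):

```shell
# Dry-run sketch of the null-bdev setup loop seen in the trace.
# Each bdev is 100 MiB with a 4096-byte block size, matching
# "bdev_null_create nullN 100 4096" in the log entries.
nthreads=8
cmds=()
for ((i = 0; i < nthreads; i++)); do
    cmds+=("rpc.py bdev_null_create null$i 100 4096")
done
printf '%s\n' "${cmds[@]}"
```

In the real script each call goes to SPDK's `scripts/rpc.py` against the running target, and the loop counter doubles as the namespace index used later.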
00:06:18.233 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:18.233 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:18.233 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:18.233 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:18.233 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:18.233 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.233 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:18.233 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:18.233 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:18.233 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:18.233 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:18.233 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:18.233 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:18.233 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.233 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:18.233 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:18.233 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:18.233 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:18.233 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:18.233 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:18.233 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:18.233 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.233 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:18.233 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:18.233 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:18.233 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:18.233 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:18.233 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:18.233 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:18.233 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.233 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:18.233 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:18.233 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:18.233 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:18.233 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:18.233 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:18.233 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:18.233 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.233 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:18.233 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:18.233 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:18.233 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:18.233 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:18.233 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:18.233 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:18.233 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.233 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:18.233 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:18.233 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:18.233 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:18.233 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:18.233 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:18.233 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:18.233 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.233 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:18.234 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
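The `add_remove N nullN & pids+=($!)` entries above spawn eight background workers, each repeatedly attaching and detaching one namespace, which are later reaped with `wait`. A hedged sketch of that pattern (RPC calls are echoed to per-worker log files instead of executed; paths and the 10-iteration count mirror the `(( i < 10 ))` loop in the trace):

```shell
# Sketch of the parallel namespace add/remove stress pattern in the trace.
# add_remove toggles a single namespace ID ten times; eight copies run in
# the background and are collected with `wait "${pids[@]}"`.
add_remove() {
    local nsid=$1 bdev=$2
    for ((i = 0; i < 10; i++)); do
        echo "rpc.py nvmf_subsystem_add_ns -n $nsid nqn.2016-06.io.spdk:cnode1 $bdev"
        echo "rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 $nsid"
    done
}

pids=()
for ((i = 0; i < 8; i++)); do
    add_remove "$((i + 1))" "null$i" > "/tmp/add_remove.$i.log" &
    pids+=($!)
done
wait "${pids[@]}"
```

Running the workers concurrently is the point of the test: the interleaved add/remove RPCs against the same subsystem exercise the target's namespace hotplug paths under contention, which is why the remove/add entries in the log arrive in scrambled NSID order.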
00:06:18.234 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:18.234 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:18.234 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:18.234 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:18.234 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:18.234 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3864151 3864152 3864154 3864156 3864158 3864160 3864162 3864164 00:06:18.234 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.234 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:18.492 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:18.492 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:18.492 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:18.492 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:18.492 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:18.492 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:18.492 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:18.492 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:18.749 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.749 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.749 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:18.749 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.749 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.749 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 
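The earlier part of the trace (the `null_size=1025 ... bdev_null_resize NULL1 1025` entries, ending in `kill: (3859954) - No such process`) is a resize loop gated on process liveness: `kill -0 PID` sends no signal, it only tests whether the process still exists, so the loop keeps growing the bdev until the background perf worker exits. A bounded sketch of that gate (using the current shell's own PID as a stand-in so the check always passes, and echoing the RPC instead of executing it):

```shell
# Sketch of the "resize while the workload is alive" loop from the trace.
# kill -0 tests process existence without signalling; the real script loops
# until the perf process (PID 3859954 in the log) terminates. Here the
# iteration count is bounded for illustration.
perf_pid=$$            # stand-in PID; the real script uses the perf worker's PID
null_size=1024
while kill -0 "$perf_pid" 2> /dev/null && ((null_size < 1029)); do
    ((++null_size))
    echo "rpc.py bdev_null_resize NULL1 $null_size"
done
```

When the workload process goes away, `kill -0` fails with "No such process" exactly as the log shows at `ns_hotplug_stress.sh: line 44`, the loop stops, and the script falls through to `wait` on the dead PID.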
00:06:18.749 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.749 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.749 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:18.749 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.749 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.749 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:18.749 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.749 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.749 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.749 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.749 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:18.750 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:18.750 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.750 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.750 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:18.750 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.750 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.750 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:19.007 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:19.007 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:19.007 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:19.007 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:19.007 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 2 00:06:19.007 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:19.007 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:19.007 20:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:19.573 20:47:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.573 20:47:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.573 20:47:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:19.573 20:47:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.573 20:47:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.573 20:47:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:19.573 20:47:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.573 20:47:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.573 20:47:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:19.573 20:47:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.573 20:47:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.573 20:47:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:19.573 20:47:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.573 20:47:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.573 20:47:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:19.573 20:47:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.573 20:47:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.573 20:47:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.573 20:47:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:19.573 20:47:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.573 20:47:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:19.573 20:47:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.573 20:47:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.573 20:47:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:19.831 20:47:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:19.831 20:47:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:19.831 20:47:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:19.831 20:47:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:19.831 20:47:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:19.831 20:47:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:19.831 20:47:10 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:19.831 20:47:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:20.088 20:47:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.088 20:47:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.088 20:47:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:20.088 20:47:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.088 20:47:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.088 20:47:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:20.088 20:47:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.088 20:47:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.088 20:47:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:20.088 20:47:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:06:20.088 20:47:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.088 20:47:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:20.088 20:47:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.088 20:47:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.088 20:47:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:20.088 20:47:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.088 20:47:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.088 20:47:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:20.088 20:47:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.088 20:47:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.088 20:47:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:20.088 20:47:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.088 20:47:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.088 20:47:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:20.346 20:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:20.346 20:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:20.346 20:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:20.346 20:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:20.346 20:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:20.346 20:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:20.346 20:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:20.346 20:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:20.604 20:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.604 20:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.604 20:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:20.604 20:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.604 20:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.604 20:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:20.604 20:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.604 20:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.604 20:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:20.604 20:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.604 20:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.604 20:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 
nqn.2016-06.io.spdk:cnode1 null5 00:06:20.604 20:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.604 20:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.604 20:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:20.604 20:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.604 20:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.604 20:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:20.604 20:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.604 20:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.604 20:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:20.604 20:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.604 20:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.604 20:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:20.861 20:47:11 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:20.861 20:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:20.861 20:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:20.861 20:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:20.861 20:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:20.861 20:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:20.861 20:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:20.861 20:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:21.118 20:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.118 20:47:12 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.118 20:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:21.118 20:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.118 20:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.118 20:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:21.118 20:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.118 20:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.118 20:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:21.118 20:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.118 20:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.118 20:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.118 20:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:21.118 20:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:06:21.118 20:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:21.118 20:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.118 20:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.118 20:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.118 20:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:21.118 20:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.118 20:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:21.376 20:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.376 20:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.376 20:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:21.634 20:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:21.634 20:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:21.634 20:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:21.634 20:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:21.634 20:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:21.634 20:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:21.634 20:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:21.634 20:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:21.892 20:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.892 20:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.892 20:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 
nqn.2016-06.io.spdk:cnode1 null1 00:06:21.892 20:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.892 20:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.892 20:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:21.892 20:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.893 20:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.893 20:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:21.893 20:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.893 20:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.893 20:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:21.893 20:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.893 20:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.893 20:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:21.893 20:47:12 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.893 20:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.893 20:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:21.893 20:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.893 20:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.893 20:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:21.893 20:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.893 20:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.893 20:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:22.149 20:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:22.149 20:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:22.149 20:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:22.149 20:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.149 20:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:22.149 20:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:22.149 20:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:22.149 20:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:22.406 20:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.406 20:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.406 20:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:22.406 20:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.406 20:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.406 
20:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:22.406 20:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.406 20:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.406 20:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:22.406 20:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.406 20:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.406 20:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.406 20:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.406 20:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:22.406 20:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:22.406 20:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.406 20:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.406 20:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:22.406 20:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.406 20:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.406 20:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:22.406 20:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.406 20:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.406 20:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:22.663 20:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:22.663 20:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:22.663 20:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:22.663 20:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.663 20:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:22.663 20:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:22.663 20:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:22.663 20:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:23.229 20:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.229 20:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.229 20:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.229 20:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:23.229 20:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.229 20:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:23.229 20:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.229 20:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.229 20:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:23.229 20:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.229 20:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.229 20:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:23.229 20:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.229 20:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.229 20:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:23.229 20:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.229 20:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.229 20:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:23.229 20:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.229 20:47:13 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.229 20:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:23.229 20:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.229 20:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.229 20:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:23.486 20:47:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:23.486 20:47:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:23.486 20:47:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:23.486 20:47:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:23.486 20:47:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:23.486 20:47:14 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:23.486 20:47:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:23.486 20:47:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:23.744 20:47:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.744 20:47:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.744 20:47:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:23.744 20:47:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.744 20:47:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.744 20:47:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:23.744 20:47:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.744 20:47:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.744 20:47:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:23.744 20:47:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.744 20:47:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.744 20:47:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:23.744 20:47:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.744 20:47:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.744 20:47:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:23.744 20:47:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.744 20:47:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.744 20:47:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:23.744 20:47:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.744 20:47:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.744 20:47:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.744 20:47:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:23.744 20:47:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.744 20:47:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:24.002 20:47:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:24.002 20:47:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:24.002 20:47:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:24.002 20:47:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:24.002 20:47:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:24.002 20:47:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:24.002 20:47:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:24.002 20:47:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:24.260 20:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.260 20:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.260 20:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.261 20:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.261 20:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.261 20:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.261 20:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.261 20:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.261 20:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.261 20:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.261 20:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.261 20:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.261 20:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.261 20:47:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.261 20:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.261 20:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.261 20:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:24.261 20:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:24.261 20:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:24.261 20:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:06:24.261 20:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:24.261 20:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:06:24.261 20:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:24.261 20:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:24.261 rmmod nvme_tcp 00:06:24.261 rmmod nvme_fabrics 00:06:24.261 rmmod nvme_keyring 00:06:24.261 20:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:24.261 20:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:06:24.261 20:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:06:24.261 20:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3859647 ']' 00:06:24.261 20:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3859647 00:06:24.261 20:47:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 3859647 ']' 00:06:24.261 20:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 3859647 00:06:24.261 20:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:06:24.261 20:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:24.261 20:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3859647 00:06:24.520 20:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:24.520 20:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:24.520 20:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3859647' 00:06:24.520 killing process with pid 3859647 00:06:24.520 20:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 3859647 00:06:24.520 20:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 3859647 00:06:24.520 20:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:24.520 20:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:24.520 20:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:24.520 20:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:06:24.520 20:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:06:24.520 20:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v 
SPDK_NVMF 00:06:24.520 20:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:06:24.780 20:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:24.780 20:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:24.780 20:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:24.780 20:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:24.780 20:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:26.681 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:26.681 00:06:26.681 real 0m47.567s 00:06:26.681 user 3m41.562s 00:06:26.681 sys 0m16.065s 00:06:26.681 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.681 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:26.681 ************************************ 00:06:26.681 END TEST nvmf_ns_hotplug_stress 00:06:26.681 ************************************ 00:06:26.681 20:47:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:26.681 20:47:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:26.681 20:47:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.681 20:47:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:26.681 ************************************ 00:06:26.681 START TEST 
nvmf_delete_subsystem 00:06:26.681 ************************************ 00:06:26.681 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:26.681 * Looking for test storage... 00:06:26.681 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:26.681 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:26.681 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:06:26.681 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:26.941 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:26.941 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:26.941 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:26.941 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:26.941 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:06:26.941 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:06:26.941 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:06:26.941 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:06:26.941 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:06:26.941 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:06:26.941 20:47:17 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:06:26.941 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:26.941 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:06:26.941 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:06:26.941 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:26.941 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:26.941 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:26.941 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:26.941 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:26.941 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:26.941 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:26.941 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:26.941 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:26.941 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:26.941 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:26.941 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:06:26.941 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:26.941 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:26.941 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:26.941 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:26.941 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:26.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.941 --rc genhtml_branch_coverage=1 00:06:26.941 --rc genhtml_function_coverage=1 00:06:26.941 --rc genhtml_legend=1 00:06:26.941 --rc geninfo_all_blocks=1 00:06:26.941 --rc geninfo_unexecuted_blocks=1 00:06:26.941 00:06:26.941 ' 00:06:26.941 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:26.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.941 --rc genhtml_branch_coverage=1 00:06:26.941 --rc genhtml_function_coverage=1 00:06:26.941 --rc genhtml_legend=1 00:06:26.941 --rc geninfo_all_blocks=1 00:06:26.941 --rc geninfo_unexecuted_blocks=1 00:06:26.941 00:06:26.941 ' 00:06:26.941 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:26.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.941 --rc genhtml_branch_coverage=1 00:06:26.941 --rc genhtml_function_coverage=1 00:06:26.941 --rc genhtml_legend=1 00:06:26.941 --rc geninfo_all_blocks=1 00:06:26.941 --rc geninfo_unexecuted_blocks=1 00:06:26.941 00:06:26.941 ' 00:06:26.941 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:26.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.941 --rc genhtml_branch_coverage=1 00:06:26.941 --rc genhtml_function_coverage=1 00:06:26.941 --rc genhtml_legend=1 00:06:26.941 --rc geninfo_all_blocks=1 
00:06:26.941 --rc geninfo_unexecuted_blocks=1 00:06:26.941 00:06:26.941 ' 00:06:26.941 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:26.941 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:26.941 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:26.942 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:26.942 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:26.942 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:26.942 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:26.942 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:26.942 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:26.942 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:26.942 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:26.942 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:26.942 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:26.942 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:26.942 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:26.942 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:26.942 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:26.942 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:26.942 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:26.942 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:06:26.942 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:26.942 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:26.942 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:26.942 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.942 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.942 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.942 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:26.942 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.942 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:06:26.942 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:26.942 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:26.942 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:26.942 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:26.942 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:26.942 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:26.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:26.942 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:26.942 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:26.942 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:26.942 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
nvmftestinit 00:06:26.942 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:26.942 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:26.942 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:26.942 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:26.942 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:26.942 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:26.942 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:26.942 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:26.942 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:26.942 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:26.942 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:26.942 20:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:28.849 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:28.849 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:28.849 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:28.849 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:28.849 20:47:19 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:28.849 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:28.849 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:28.849 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:28.849 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:28.849 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:28.849 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:28.849 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:28.849 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:06:28.849 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:06:28.849 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:06:28.849 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:28.849 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:28.849 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:28.849 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:28.849 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:28.849 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:28.849 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:28.849 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:28.849 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:28.849 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:28.849 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:28.849 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:28.849 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:28.849 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:28.849 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:28.849 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:28.849 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:28.849 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:28.849 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:28.849 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:28.849 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:28.849 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 
-- # [[ ice == unknown ]] 00:06:28.849 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:28.849 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:28.849 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:28.849 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:28.849 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:28.849 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:28.849 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:28.849 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:28.849 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:28.849 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:28.849 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:28.849 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:28.849 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:28.849 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:28.849 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:28.849 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:28.849 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:28.849 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:28.849 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:28.849 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:28.849 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:28.849 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:28.849 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:28.849 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:28.849 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:28.849 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:28.849 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:28.849 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:28.849 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:28.849 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:28.849 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:28.849 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:28.849 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:0a:00.1: cvl_0_1' 00:06:28.849 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:28.849 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:28.849 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:28.850 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:06:28.850 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:28.850 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:28.850 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:28.850 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:28.850 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:28.850 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:28.850 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:28.850 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:28.850 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:28.850 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:28.850 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:28.850 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:28.850 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:28.850 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:28.850 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:28.850 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:28.850 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:28.850 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:29.108 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:29.108 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:29.108 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:29.108 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:29.108 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:29.108 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:29.108 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:29.108 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:29.108 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:29.108 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.302 ms 00:06:29.108 00:06:29.108 --- 10.0.0.2 ping statistics --- 00:06:29.108 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:29.108 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:06:29.108 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:29.108 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:29.108 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:06:29.108 00:06:29.108 --- 10.0.0.1 ping statistics --- 00:06:29.108 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:29.108 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:06:29.108 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:29.108 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:06:29.108 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:29.108 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:29.108 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:29.108 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:29.108 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:29.108 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:29.108 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:29.108 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:29.108 20:47:19 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:29.108 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:29.108 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:29.108 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3866932 00:06:29.108 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3866932 00:06:29.108 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 3866932 ']' 00:06:29.108 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.108 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:29.108 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:29.108 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.108 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:29.108 20:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:29.108 [2024-11-26 20:47:19.966435] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:06:29.108 [2024-11-26 20:47:19.966524] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:29.368 [2024-11-26 20:47:20.048136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:29.368 [2024-11-26 20:47:20.114202] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:29.368 [2024-11-26 20:47:20.114252] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:29.368 [2024-11-26 20:47:20.114269] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:29.368 [2024-11-26 20:47:20.114283] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:29.368 [2024-11-26 20:47:20.114296] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
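Between launching nvmf_tgt inside the namespace and issuing RPCs, the log waits on "process to start up and listen on UNIX domain socket /var/tmp/spdk.sock". A simplified sketch of that waitforlisten poll follows; the real helper also issues an RPC to confirm the app is answering, so checking only for the socket path (and for the pid still being alive) is a stated simplification:

```shell
# Simplified waitforlisten: poll until the RPC socket path appears, bailing
# out if the target process dies first or the retry budget is exhausted.
# (Assumption: the real common/autotest_common.sh helper additionally probes
# the socket with an RPC; this sketch only checks for its existence.)
waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=${3:-100} i=0
    while [ ! -e "$rpc_addr" ]; do
        kill -0 "$pid" 2>/dev/null || return 1   # app died before listening
        i=$((i + 1))
        [ "$i" -ge "$max_retries" ] && return 1  # gave up waiting
        sleep 0.1
    done
}
```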
00:06:29.368 [2024-11-26 20:47:20.115736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.369 [2024-11-26 20:47:20.115743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.369 20:47:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:29.369 20:47:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:06:29.369 20:47:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:29.369 20:47:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:29.369 20:47:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:29.369 20:47:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:29.369 20:47:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:29.369 20:47:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.369 20:47:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:29.369 [2024-11-26 20:47:20.272492] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:29.369 20:47:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.369 20:47:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:29.369 20:47:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.369 20:47:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:06:29.369 20:47:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.369 20:47:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:29.369 20:47:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.369 20:47:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:29.369 [2024-11-26 20:47:20.288803] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:29.369 20:47:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.369 20:47:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:29.369 20:47:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.369 20:47:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:29.369 NULL1 00:06:29.369 20:47:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.369 20:47:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:29.369 20:47:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.369 20:47:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:29.626 Delay0 00:06:29.626 20:47:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.626 20:47:20 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:29.627 20:47:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.627 20:47:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:29.627 20:47:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.627 20:47:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3867080 00:06:29.627 20:47:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:29.627 20:47:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:29.627 [2024-11-26 20:47:20.373558] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
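The rpc_cmd calls traced above configure the target over /var/tmp/spdk.sock: a TCP transport, a subsystem capped at 10 namespaces, a listener on 10.0.0.2:4420, a 1000 MiB null bdev with 512 B blocks, a delay bdev injecting 1 ms (1000000 us) latencies on top of it, and the namespace attach. A sketch of the same sequence driven through scripts/rpc.py (assumption: SPDK checkout layout; the `RPC` variable is an illustration hook, not part of the original script):

```shell
# Sketch of the delete_subsystem.sh@15-24 configuration steps seen above.
# Set RPC=echo to print the calls instead of issuing them against a live target.
RPC=${RPC:-scripts/rpc.py}

configure_delete_subsystem_target() {
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC bdev_null_create NULL1 1000 512        # 1000 MiB backing bdev, 512 B blocks
    $RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
}
```

The delay bdev is the point of the test: it keeps I/O in flight long enough that nvmf_delete_subsystem is guaranteed to race with outstanding commands from spdk_nvme_perf.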
00:06:31.523 20:47:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:31.523 20:47:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.523 20:47:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:31.523 Read completed with error (sct=0, sc=8) 00:06:31.523 starting I/O failed: -6 00:06:31.523 Read completed with error (sct=0, sc=8) 00:06:31.523 Write completed with error (sct=0, sc=8) 00:06:31.523 Write completed with error (sct=0, sc=8) 00:06:31.523 Read completed with error (sct=0, sc=8) 00:06:31.523 starting I/O failed: -6 00:06:31.523 Read completed with error (sct=0, sc=8) 00:06:31.523 Read completed with error (sct=0, sc=8) 00:06:31.523 Read completed with error (sct=0, sc=8) 00:06:31.523 Write completed with error (sct=0, sc=8) 00:06:31.523 starting I/O failed: -6 00:06:31.523 Read completed with error (sct=0, sc=8) 00:06:31.523 Write completed with error (sct=0, sc=8) 00:06:31.523 Read completed with error (sct=0, sc=8) 00:06:31.523 Write completed with error (sct=0, sc=8) 00:06:31.523 starting I/O failed: -6 00:06:31.523 Read completed with error (sct=0, sc=8) 00:06:31.523 Read completed with error (sct=0, sc=8) 00:06:31.523 Write completed with error (sct=0, sc=8) 00:06:31.523 Write completed with error (sct=0, sc=8) 00:06:31.523 starting I/O failed: -6 00:06:31.523 Read completed with error (sct=0, sc=8) 00:06:31.523 Write completed with error (sct=0, sc=8) 00:06:31.523 Write completed with error (sct=0, sc=8) 00:06:31.523 Write completed with error (sct=0, sc=8) 00:06:31.523 starting I/O failed: -6 00:06:31.523 Read completed with error (sct=0, sc=8) 00:06:31.523 Write completed with error (sct=0, sc=8) 00:06:31.523 Read completed with error (sct=0, sc=8) 00:06:31.523 Read completed with error (sct=0, sc=8) 00:06:31.523 starting I/O failed: -6 
00:06:31.523 Read completed with error (sct=0, sc=8) 00:06:31.523 Read completed with error (sct=0, sc=8) 00:06:31.523 Read completed with error (sct=0, sc=8) 00:06:31.524 Write completed with error (sct=0, sc=8) 00:06:31.524 starting I/O failed: -6 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 starting I/O failed: -6 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 Write completed with error (sct=0, sc=8) 00:06:31.524 Write completed with error (sct=0, sc=8) 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 starting I/O failed: -6 00:06:31.524 Write completed with error (sct=0, sc=8) 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 Write completed with error (sct=0, sc=8) 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 starting I/O failed: -6 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 starting I/O failed: -6 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 Write completed with error (sct=0, sc=8) 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 starting I/O failed: -6 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 starting I/O failed: -6 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 starting I/O failed: -6 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 Read completed with error (sct=0, sc=8) 
00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 starting I/O failed: -6 00:06:31.524 starting I/O failed: -6 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 Write completed with error (sct=0, sc=8) 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 starting I/O failed: -6 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 starting I/O failed: -6 00:06:31.524 Write completed with error (sct=0, sc=8) 00:06:31.524 Write completed with error (sct=0, sc=8) 00:06:31.524 starting I/O failed: -6 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 Write completed with error (sct=0, sc=8) 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 starting I/O failed: -6 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 starting I/O failed: -6 00:06:31.524 Write completed with error (sct=0, sc=8) 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 starting I/O failed: -6 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 Write completed with error (sct=0, sc=8) 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 starting I/O failed: -6 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 starting I/O failed: -6 00:06:31.524 Write completed with error (sct=0, sc=8) 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 starting I/O failed: -6 00:06:31.524 Write completed with error (sct=0, sc=8) 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 Write completed with error (sct=0, 
sc=8) 00:06:31.524 starting I/O failed: -6 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 starting I/O failed: -6 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 starting I/O failed: -6 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 Write completed with error (sct=0, sc=8) 00:06:31.524 Write completed with error (sct=0, sc=8) 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 starting I/O failed: -6 00:06:31.524 Write completed with error (sct=0, sc=8) 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 starting I/O failed: -6 00:06:31.524 Write completed with error (sct=0, sc=8) 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 starting I/O failed: -6 00:06:31.524 Write completed with error (sct=0, sc=8) 00:06:31.524 Write completed with error (sct=0, sc=8) 00:06:31.524 Write completed with error (sct=0, sc=8) 00:06:31.524 Write completed with error (sct=0, sc=8) 00:06:31.524 starting I/O failed: -6 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 starting I/O failed: -6 00:06:31.524 Write completed with error (sct=0, sc=8) 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 starting I/O failed: -6 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 starting I/O failed: -6 00:06:31.524 Write completed with error (sct=0, sc=8) 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 starting I/O failed: -6 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 starting I/O failed: -6 00:06:31.524 Read completed with error (sct=0, sc=8) 
00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 starting I/O failed: -6 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 starting I/O failed: -6 00:06:31.524 Write completed with error (sct=0, sc=8) 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 starting I/O failed: -6 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 [2024-11-26 20:47:22.416339] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe8cc000c40 is same with the state(6) to be set 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 starting I/O failed: -6 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 Write completed with error (sct=0, sc=8) 00:06:31.524 starting I/O failed: -6 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 Write completed with error (sct=0, sc=8) 00:06:31.524 starting I/O failed: -6 00:06:31.524 Write completed with error (sct=0, sc=8) 00:06:31.524 Write completed with error (sct=0, sc=8) 00:06:31.524 starting I/O failed: -6 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 starting I/O failed: -6 00:06:31.524 Write completed with error (sct=0, sc=8) 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 Write completed with error (sct=0, sc=8) 00:06:31.524 Write completed with error (sct=0, sc=8) 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 Write completed with error (sct=0, sc=8) 00:06:31.524 starting I/O failed: -6 00:06:31.524 Write completed with error (sct=0, sc=8) 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 Read completed with error (sct=0, 
sc=8) 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 Write completed with error (sct=0, sc=8) 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 Write completed with error (sct=0, sc=8) 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 starting I/O failed: -6 00:06:31.524 Write completed with error (sct=0, sc=8) 00:06:31.524 Write completed with error (sct=0, sc=8) 00:06:31.524 Write completed with error (sct=0, sc=8) 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 starting I/O failed: -6 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 Write completed with error (sct=0, sc=8) 00:06:31.524 Write completed with error (sct=0, sc=8) 00:06:31.524 starting I/O failed: -6 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 starting I/O failed: -6 00:06:31.524 Write completed with error (sct=0, sc=8) 00:06:31.524 Write completed with error (sct=0, sc=8) 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 starting I/O failed: -6 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 Read completed with error (sct=0, 
sc=8) 00:06:31.524 Write completed with error (sct=0, sc=8) 00:06:31.524 Write completed with error (sct=0, sc=8) 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 Write completed with error (sct=0, sc=8) 00:06:31.524 Read completed with error (sct=0, sc=8) 00:06:31.524 starting I/O failed: -6 00:06:31.524 starting I/O failed: -6 00:06:31.524 starting I/O failed: -6 00:06:31.524 starting I/O failed: -6 00:06:31.524 starting I/O failed: -6 00:06:31.524 starting I/O failed: -6 00:06:31.524 starting I/O failed: -6 00:06:32.456 [2024-11-26 20:47:23.388481] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd39b0 is same with the state(6) to be set 00:06:32.714 Read completed with error (sct=0, sc=8) 00:06:32.714 Write completed with error (sct=0, sc=8) 00:06:32.714 Write completed with error (sct=0, sc=8) 00:06:32.714 Read completed with error (sct=0, sc=8) 00:06:32.714 Read completed with error (sct=0, sc=8) 00:06:32.714 Read completed with error (sct=0, sc=8) 00:06:32.714 Write completed with error (sct=0, sc=8) 00:06:32.714 Read completed with error (sct=0, sc=8) 00:06:32.714 Read completed with error (sct=0, sc=8) 00:06:32.714 Write completed with error (sct=0, sc=8) 00:06:32.714 Write completed with error (sct=0, sc=8) 00:06:32.714 Write completed with error (sct=0, sc=8) 00:06:32.714 Read completed with error (sct=0, sc=8) 00:06:32.714 Read completed with error (sct=0, sc=8) 00:06:32.714 Read completed with error (sct=0, sc=8) 00:06:32.714 Read completed with error (sct=0, sc=8) 00:06:32.714 Read completed with error (sct=0, sc=8) 00:06:32.714 Write completed with error (sct=0, sc=8) 00:06:32.714 Read completed with error (sct=0, sc=8) 00:06:32.714 Read completed with error (sct=0, sc=8) 00:06:32.714 Read completed with error (sct=0, sc=8) 00:06:32.714 Read completed with error (sct=0, sc=8) 00:06:32.714 Read completed with error (sct=0, sc=8) 00:06:32.714 Write completed with error (sct=0, sc=8) 00:06:32.714 Read 
completed with error (sct=0, sc=8) 00:06:32.714 Read completed with error (sct=0, sc=8) 00:06:32.714 Write completed with error (sct=0, sc=8) 00:06:32.714 Read completed with error (sct=0, sc=8) 00:06:32.714 Write completed with error (sct=0, sc=8) 00:06:32.714 Read completed with error (sct=0, sc=8) 00:06:32.714 Read completed with error (sct=0, sc=8) 00:06:32.714 Write completed with error (sct=0, sc=8) 00:06:32.714 Read completed with error (sct=0, sc=8) 00:06:32.714 Write completed with error (sct=0, sc=8) 00:06:32.714 Write completed with error (sct=0, sc=8) 00:06:32.714 Read completed with error (sct=0, sc=8) 00:06:32.714 Write completed with error (sct=0, sc=8) 00:06:32.714 Read completed with error (sct=0, sc=8) 00:06:32.714 Read completed with error (sct=0, sc=8) 00:06:32.714 [2024-11-26 20:47:23.415353] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd2860 is same with the state(6) to be set 00:06:32.714 Read completed with error (sct=0, sc=8) 00:06:32.714 Write completed with error (sct=0, sc=8) 00:06:32.714 Read completed with error (sct=0, sc=8) 00:06:32.714 Write completed with error (sct=0, sc=8) 00:06:32.714 Read completed with error (sct=0, sc=8) 00:06:32.714 Read completed with error (sct=0, sc=8) 00:06:32.714 Write completed with error (sct=0, sc=8) 00:06:32.714 Write completed with error (sct=0, sc=8) 00:06:32.714 Read completed with error (sct=0, sc=8) 00:06:32.714 Read completed with error (sct=0, sc=8) 00:06:32.714 Read completed with error (sct=0, sc=8) 00:06:32.714 Read completed with error (sct=0, sc=8) 00:06:32.714 Read completed with error (sct=0, sc=8) 00:06:32.714 Read completed with error (sct=0, sc=8) 00:06:32.714 Read completed with error (sct=0, sc=8) 00:06:32.714 Read completed with error (sct=0, sc=8) 00:06:32.714 Read completed with error (sct=0, sc=8) 00:06:32.714 Read completed with error (sct=0, sc=8) 00:06:32.714 Read completed with error (sct=0, sc=8) 00:06:32.714 Read completed with 
error (sct=0, sc=8) 00:06:32.714 Read completed with error (sct=0, sc=8) 00:06:32.714 Write completed with error (sct=0, sc=8) 00:06:32.714 Read completed with error (sct=0, sc=8) 00:06:32.714 [2024-11-26 20:47:23.419179] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe8cc00d020 is same with the state(6) to be set 00:06:32.714 Write completed with error (sct=0, sc=8) 00:06:32.714 Write completed with error (sct=0, sc=8) 00:06:32.714 Read completed with error (sct=0, sc=8) 00:06:32.714 Read completed with error (sct=0, sc=8) 00:06:32.714 Read completed with error (sct=0, sc=8) 00:06:32.714 Read completed with error (sct=0, sc=8) 00:06:32.714 Read completed with error (sct=0, sc=8) 00:06:32.714 Read completed with error (sct=0, sc=8) 00:06:32.715 Read completed with error (sct=0, sc=8) 00:06:32.715 Read completed with error (sct=0, sc=8) 00:06:32.715 Read completed with error (sct=0, sc=8) 00:06:32.715 Write completed with error (sct=0, sc=8) 00:06:32.715 Write completed with error (sct=0, sc=8) 00:06:32.715 Write completed with error (sct=0, sc=8) 00:06:32.715 Read completed with error (sct=0, sc=8) 00:06:32.715 Read completed with error (sct=0, sc=8) 00:06:32.715 Read completed with error (sct=0, sc=8) 00:06:32.715 Read completed with error (sct=0, sc=8) 00:06:32.715 Write completed with error (sct=0, sc=8) 00:06:32.715 Read completed with error (sct=0, sc=8) 00:06:32.715 Write completed with error (sct=0, sc=8) 00:06:32.715 Read completed with error (sct=0, sc=8) 00:06:32.715 Read completed with error (sct=0, sc=8) 00:06:32.715 Read completed with error (sct=0, sc=8) 00:06:32.715 [2024-11-26 20:47:23.419448] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe8cc00d680 is same with the state(6) to be set 00:06:32.715 Read completed with error (sct=0, sc=8) 00:06:32.715 Read completed with error (sct=0, sc=8) 00:06:32.715 Read completed with error (sct=0, sc=8) 00:06:32.715 Write completed with 
error (sct=0, sc=8) 00:06:32.715 Read completed with error (sct=0, sc=8) 00:06:32.715 Read completed with error (sct=0, sc=8) 00:06:32.715 Write completed with error (sct=0, sc=8) 00:06:32.715 Write completed with error (sct=0, sc=8) 00:06:32.715 Read completed with error (sct=0, sc=8) 00:06:32.715 Write completed with error (sct=0, sc=8) 00:06:32.715 Write completed with error (sct=0, sc=8) 00:06:32.715 Write completed with error (sct=0, sc=8) 00:06:32.715 Read completed with error (sct=0, sc=8) 00:06:32.715 Write completed with error (sct=0, sc=8) 00:06:32.715 Read completed with error (sct=0, sc=8) 00:06:32.715 Write completed with error (sct=0, sc=8) 00:06:32.715 Write completed with error (sct=0, sc=8) 00:06:32.715 Read completed with error (sct=0, sc=8) 00:06:32.715 Read completed with error (sct=0, sc=8) 00:06:32.715 Read completed with error (sct=0, sc=8) 00:06:32.715 Write completed with error (sct=0, sc=8) 00:06:32.715 Read completed with error (sct=0, sc=8) 00:06:32.715 Read completed with error (sct=0, sc=8) 00:06:32.715 Read completed with error (sct=0, sc=8) 00:06:32.715 Read completed with error (sct=0, sc=8) 00:06:32.715 Write completed with error (sct=0, sc=8) 00:06:32.715 Read completed with error (sct=0, sc=8) 00:06:32.715 Read completed with error (sct=0, sc=8) 00:06:32.715 Read completed with error (sct=0, sc=8) 00:06:32.715 Read completed with error (sct=0, sc=8) 00:06:32.715 Read completed with error (sct=0, sc=8) 00:06:32.715 Read completed with error (sct=0, sc=8) 00:06:32.715 Read completed with error (sct=0, sc=8) 00:06:32.715 Read completed with error (sct=0, sc=8) 00:06:32.715 Read completed with error (sct=0, sc=8) 00:06:32.715 Read completed with error (sct=0, sc=8) 00:06:32.715 Read completed with error (sct=0, sc=8) 00:06:32.715 Write completed with error (sct=0, sc=8) 00:06:32.715 Read completed with error (sct=0, sc=8) 00:06:32.715 Write completed with error (sct=0, sc=8) 00:06:32.715 [2024-11-26 20:47:23.419781] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd24a0 is same with the state(6) to be set 00:06:32.715 Initializing NVMe Controllers 00:06:32.715 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:32.715 Controller IO queue size 128, less than required. 00:06:32.715 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:32.715 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:32.715 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:32.715 Initialization complete. Launching workers. 00:06:32.715 ======================================================== 00:06:32.715 Latency(us) 00:06:32.715 Device Information : IOPS MiB/s Average min max 00:06:32.715 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 181.16 0.09 922898.02 641.23 1011622.05 00:06:32.715 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 156.34 0.08 968325.72 559.67 2002280.02 00:06:32.715 ======================================================== 00:06:32.715 Total : 337.49 0.16 943941.74 559.67 2002280.02 00:06:32.715 00:06:32.715 [2024-11-26 20:47:23.420936] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bd39b0 (9): Bad file descriptor 00:06:32.715 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:06:32.715 20:47:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.715 20:47:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:06:32.715 20:47:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3867080 00:06:32.715 20:47:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 
00:06:33.280 20:47:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:33.280 20:47:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3867080 00:06:33.280 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3867080) - No such process 00:06:33.280 20:47:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3867080 00:06:33.280 20:47:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:06:33.280 20:47:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3867080 00:06:33.280 20:47:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:06:33.280 20:47:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:33.280 20:47:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:06:33.280 20:47:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:33.280 20:47:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 3867080 00:06:33.281 20:47:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:06:33.281 20:47:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:33.281 20:47:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:33.281 20:47:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:33.281 20:47:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # 
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:33.281 20:47:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.281 20:47:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:33.281 20:47:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.281 20:47:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:33.281 20:47:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.281 20:47:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:33.281 [2024-11-26 20:47:23.943537] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:33.281 20:47:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.281 20:47:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:33.281 20:47:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.281 20:47:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:33.281 20:47:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.281 20:47:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3867482 00:06:33.281 20:47:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:33.281 20:47:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@57 -- # kill -0 3867482 00:06:33.281 20:47:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:33.281 20:47:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:33.281 [2024-11-26 20:47:24.016755] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:06:33.539 20:47:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:33.539 20:47:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3867482 00:06:33.539 20:47:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:34.105 20:47:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:34.105 20:47:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3867482 00:06:34.105 20:47:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:34.670 20:47:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:34.670 20:47:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3867482 00:06:34.670 20:47:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:35.240 20:47:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 
00:06:35.240 20:47:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3867482 00:06:35.240 20:47:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:35.806 20:47:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:35.806 20:47:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3867482 00:06:35.806 20:47:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:36.064 20:47:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:36.064 20:47:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3867482 00:06:36.064 20:47:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:36.322 Initializing NVMe Controllers 00:06:36.322 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:36.322 Controller IO queue size 128, less than required. 00:06:36.322 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:36.322 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:36.322 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:36.322 Initialization complete. Launching workers. 
00:06:36.322 ======================================================== 00:06:36.322 Latency(us) 00:06:36.322 Device Information : IOPS MiB/s Average min max 00:06:36.322 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004884.64 1000184.28 1041399.89 00:06:36.322 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004873.41 1000218.48 1041337.75 00:06:36.322 ======================================================== 00:06:36.322 Total : 256.00 0.12 1004879.03 1000184.28 1041399.89 00:06:36.322 00:06:36.581 20:47:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:36.581 20:47:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3867482 00:06:36.581 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3867482) - No such process 00:06:36.581 20:47:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3867482 00:06:36.581 20:47:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:36.581 20:47:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:06:36.581 20:47:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:36.581 20:47:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:06:36.581 20:47:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:36.581 20:47:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:06:36.581 20:47:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:36.581 20:47:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r 
nvme-tcp 00:06:36.581 rmmod nvme_tcp 00:06:36.581 rmmod nvme_fabrics 00:06:36.840 rmmod nvme_keyring 00:06:36.840 20:47:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:36.840 20:47:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:06:36.840 20:47:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:06:36.840 20:47:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3866932 ']' 00:06:36.840 20:47:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3866932 00:06:36.840 20:47:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 3866932 ']' 00:06:36.840 20:47:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 3866932 00:06:36.840 20:47:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:06:36.840 20:47:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:36.840 20:47:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3866932 00:06:36.840 20:47:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:36.840 20:47:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:36.840 20:47:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3866932' 00:06:36.840 killing process with pid 3866932 00:06:36.840 20:47:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 3866932 00:06:36.840 20:47:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 
3866932 00:06:37.100 20:47:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:37.100 20:47:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:37.100 20:47:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:37.100 20:47:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:06:37.100 20:47:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:06:37.100 20:47:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:37.100 20:47:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:06:37.100 20:47:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:37.100 20:47:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:37.100 20:47:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:37.100 20:47:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:37.100 20:47:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:39.006 20:47:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:39.006 00:06:39.006 real 0m12.323s 00:06:39.007 user 0m27.810s 00:06:39.007 sys 0m2.904s 00:06:39.007 20:47:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:39.007 20:47:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:39.007 ************************************ 00:06:39.007 END TEST 
nvmf_delete_subsystem 00:06:39.007 ************************************ 00:06:39.007 20:47:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:39.007 20:47:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:39.007 20:47:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:39.007 20:47:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:39.007 ************************************ 00:06:39.007 START TEST nvmf_host_management 00:06:39.007 ************************************ 00:06:39.007 20:47:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:39.267 * Looking for test storage... 00:06:39.267 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:39.267 20:47:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:39.267 20:47:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:06:39.267 20:47:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:39.267 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:39.267 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:39.267 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:39.267 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:39.267 20:47:30 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:39.267 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:39.267 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:39.267 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:39.267 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:39.267 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:39.267 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:39.267 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:39.267 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:39.267 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:39.267 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:39.267 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:39.267 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:39.267 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:39.267 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:39.267 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:39.267 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:39.267 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:39.267 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:39.267 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:39.267 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:39.267 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:39.267 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:39.267 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:39.267 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:39.267 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:39.267 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:39.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.267 --rc genhtml_branch_coverage=1 00:06:39.267 --rc genhtml_function_coverage=1 00:06:39.267 --rc genhtml_legend=1 00:06:39.267 --rc 
geninfo_all_blocks=1 00:06:39.267 --rc geninfo_unexecuted_blocks=1 00:06:39.267 00:06:39.267 ' 00:06:39.267 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:39.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.267 --rc genhtml_branch_coverage=1 00:06:39.267 --rc genhtml_function_coverage=1 00:06:39.267 --rc genhtml_legend=1 00:06:39.267 --rc geninfo_all_blocks=1 00:06:39.267 --rc geninfo_unexecuted_blocks=1 00:06:39.267 00:06:39.267 ' 00:06:39.267 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:39.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.267 --rc genhtml_branch_coverage=1 00:06:39.267 --rc genhtml_function_coverage=1 00:06:39.267 --rc genhtml_legend=1 00:06:39.267 --rc geninfo_all_blocks=1 00:06:39.267 --rc geninfo_unexecuted_blocks=1 00:06:39.267 00:06:39.267 ' 00:06:39.267 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:39.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.267 --rc genhtml_branch_coverage=1 00:06:39.267 --rc genhtml_function_coverage=1 00:06:39.267 --rc genhtml_legend=1 00:06:39.267 --rc geninfo_all_blocks=1 00:06:39.267 --rc geninfo_unexecuted_blocks=1 00:06:39.267 00:06:39.267 ' 00:06:39.267 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:39.267 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:39.267 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:39.267 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:39.267 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:06:39.267 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:39.267 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:39.267 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:39.267 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:39.267 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:39.267 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:39.267 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:39.267 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:39.267 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:39.267 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:39.267 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:39.267 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:39.268 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:39.268 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:39.268 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:39.268 
20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:39.268 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:39.268 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:39.268 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:39.268 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:39.268 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:39.268 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:39.268 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:39.268 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:39.268 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:39.268 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:39.268 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:39.268 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:06:39.268 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:39.268 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:39.268 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:39.268 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:39.268 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:39.268 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:39.268 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:39.268 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:39.268 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:39.268 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:39.268 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:39.268 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:39.268 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:39.268 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:39.268 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:39.268 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:39.268 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:39.268 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:39.268 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:39.268 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:06:39.268 20:47:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:41.172 20:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:41.172 20:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:06:41.172 20:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:41.172 20:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:41.172 20:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:41.172 20:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:41.172 20:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:41.172 20:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:06:41.172 20:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:41.172 20:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:06:41.172 20:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:06:41.172 20:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:06:41.172 20:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 
00:06:41.172 20:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:06:41.172 20:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:06:41.172 20:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:41.172 20:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:41.172 20:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:41.173 20:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:41.173 20:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:41.173 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:41.173 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:41.173 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:41.173 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:41.173 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:41.173 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:41.173 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:41.173 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:06:41.173 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:41.173 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:41.173 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:41.173 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:41.173 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:41.173 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:41.173 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:41.173 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:41.173 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:41.173 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:41.173 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:41.173 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:41.173 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:41.173 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:41.173 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:41.173 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:41.173 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:41.173 20:47:32 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:41.173 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:41.173 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:41.173 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:41.173 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:41.173 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:41.173 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:41.173 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:41.173 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:41.173 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:41.173 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:41.173 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:41.173 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:41.173 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:41.173 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:41.173 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:41.173 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:06:41.173 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:41.173 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:41.173 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:41.173 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:41.173 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:41.173 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:41.173 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:41.173 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:41.173 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:41.173 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:41.173 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:41.173 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:06:41.173 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:41.173 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:41.173 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:41.173 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:41.173 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:41.173 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:41.173 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:41.173 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:41.173 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:41.173 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:41.173 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:41.173 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:41.173 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:41.173 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:41.173 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:41.173 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:41.173 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:41.173 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:41.173 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:41.173 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:06:41.173 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:41.173 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:41.432 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:41.432 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:41.432 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:41.432 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:41.432 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:41.432 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:06:41.432 00:06:41.432 --- 10.0.0.2 ping statistics --- 00:06:41.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:41.432 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:06:41.432 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:41.432 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:41.432 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:06:41.432 00:06:41.432 --- 10.0.0.1 ping statistics --- 00:06:41.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:41.432 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:06:41.432 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:41.432 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:06:41.432 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:41.432 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:41.432 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:41.432 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:41.432 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:41.432 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:41.432 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:41.432 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:41.432 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:41.432 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:41.432 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:41.432 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:41.432 20:47:32 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:41.432 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3869835 00:06:41.432 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3869835 00:06:41.432 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3869835 ']' 00:06:41.432 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.432 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:41.432 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.432 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:41.432 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:41.432 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:41.432 [2024-11-26 20:47:32.222251] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:06:41.432 [2024-11-26 20:47:32.222343] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:41.432 [2024-11-26 20:47:32.300492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:41.432 [2024-11-26 20:47:32.366104] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:41.432 [2024-11-26 20:47:32.366163] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:41.432 [2024-11-26 20:47:32.366188] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:41.432 [2024-11-26 20:47:32.366202] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:41.432 [2024-11-26 20:47:32.366214] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
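The `nvmf_tcp_init` trace above shows the harness isolating one port of the two-port E810 NIC in a network namespace so the target (10.0.0.2, `cvl_0_0`) and the initiator (10.0.0.1, `cvl_0_1`) can talk over real hardware on a single machine, then starting `nvmf_tgt` inside that namespace. A minimal dry-run sketch of that sequence, with interface names and addresses taken from the log; the script only prints the commands instead of executing them, since the real ones need root and the physical NICs:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace topology built by nvmf_tcp_init in the
# trace above. Interface names (cvl_0_0/cvl_0_1), the namespace name, and
# the 10.0.0.0/24 addresses come from the log; run() prints each command
# rather than executing it.
set -euo pipefail

TARGET_IF=cvl_0_0      # port moved into the namespace (target side)
INITIATOR_IF=cvl_0_1   # port left in the default namespace
NS=cvl_0_0_ns_spdk

run() { printf '%s\n' "$*"; }

run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run ping -c 1 10.0.0.2   # initiator -> target sanity check, as in the log
```

With the target process launched via `ip netns exec cvl_0_0_ns_spdk`, its TCP listener at 10.0.0.2:4420 is reachable only through `cvl_0_1`, which is why the harness also inserts the iptables ACCEPT rule seen earlier in the trace.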
00:06:41.432 [2024-11-26 20:47:32.367955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:41.432 [2024-11-26 20:47:32.368029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:41.432 [2024-11-26 20:47:32.368118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:41.432 [2024-11-26 20:47:32.368121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:41.690 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:41.690 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:41.690 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:41.690 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:41.690 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:41.690 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:41.690 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:41.690 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.690 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:41.690 [2024-11-26 20:47:32.508046] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:41.690 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.690 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:41.690 20:47:32 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:41.690 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:41.690 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:41.690 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:41.690 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:41.690 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.690 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:41.690 Malloc0 00:06:41.690 [2024-11-26 20:47:32.575338] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:41.690 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.690 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:41.690 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:41.690 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:41.690 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3870004 00:06:41.690 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3870004 /var/tmp/bdevperf.sock 00:06:41.690 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3870004 ']' 00:06:41.690 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:41.690 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:41.690 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:41.690 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:41.690 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:41.690 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:41.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:41.690 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:41.690 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:41.690 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:41.690 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:41.690 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:41.690 { 00:06:41.690 "params": { 00:06:41.690 "name": "Nvme$subsystem", 00:06:41.690 "trtype": "$TEST_TRANSPORT", 00:06:41.690 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:41.690 "adrfam": "ipv4", 00:06:41.690 "trsvcid": "$NVMF_PORT", 00:06:41.690 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:41.690 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:41.690 "hdgst": ${hdgst:-false}, 
00:06:41.690 "ddgst": ${ddgst:-false} 00:06:41.690 }, 00:06:41.690 "method": "bdev_nvme_attach_controller" 00:06:41.690 } 00:06:41.690 EOF 00:06:41.690 )") 00:06:41.690 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:41.690 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:41.690 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:41.691 20:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:41.691 "params": { 00:06:41.691 "name": "Nvme0", 00:06:41.691 "trtype": "tcp", 00:06:41.691 "traddr": "10.0.0.2", 00:06:41.691 "adrfam": "ipv4", 00:06:41.691 "trsvcid": "4420", 00:06:41.691 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:41.691 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:41.691 "hdgst": false, 00:06:41.691 "ddgst": false 00:06:41.691 }, 00:06:41.691 "method": "bdev_nvme_attach_controller" 00:06:41.691 }' 00:06:41.956 [2024-11-26 20:47:32.658659] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:06:41.956 [2024-11-26 20:47:32.658766] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3870004 ] 00:06:41.956 [2024-11-26 20:47:32.729549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.956 [2024-11-26 20:47:32.789325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.217 Running I/O for 10 seconds... 
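The `gen_nvmf_target_json` trace above shows how the harness feeds bdevperf its connection config: a heredoc template is expanded once per subsystem index, the fragments are comma-joined, and the result is passed via `--json /dev/fd/63`. A simplified standalone sketch of that pattern, with the values mirroring the expanded output in the log (the real `common.sh` helper also runs the result through `jq` and wraps it differently, so treat this as an approximation):

```shell
#!/usr/bin/env bash
# Sketch of the per-subsystem JSON generation traced above. For each
# index, a heredoc produces one bdev_nvme_attach_controller entry; the
# entries are then comma-joined. Address/port values are the ones the
# log shows after expansion.
set -euo pipefail

gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-0}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,
    printf '%s\n' "${config[*]}"
}

gen_nvmf_target_json 0
```

Passing the generated document on `/dev/fd/63` (process substitution) lets bdevperf read a one-shot config without the harness writing a temp file.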
00:06:42.477 20:47:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:42.477 20:47:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:42.477 20:47:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:42.477 20:47:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.477 20:47:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:42.477 20:47:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.477 20:47:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:42.477 20:47:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:42.477 20:47:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:42.477 20:47:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:42.477 20:47:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:42.477 20:47:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:42.477 20:47:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:42.477 20:47:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:42.477 20:47:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:06:42.477 20:47:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.477 20:47:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:42.477 20:47:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:42.477 20:47:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.477 20:47:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:06:42.477 20:47:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:06:42.477 20:47:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:06:42.738 20:47:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:06:42.738 20:47:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:42.738 20:47:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:42.738 20:47:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:42.738 20:47:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.738 20:47:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:42.738 20:47:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.738 20:47:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=536 00:06:42.738 20:47:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@58 -- # '[' 536 -ge 100 ']'
00:06:42.738 20:47:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0
00:06:42.738 20:47:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break
00:06:42.738 20:47:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0
00:06:42.738 20:47:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:06:42.738 20:47:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:42.738 20:47:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:42.738 [2024-11-26 20:47:33.530476] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbf10 is same with the state(6) to be set
[... the preceding tcp.c:1773 recv-state *ERROR* message repeats verbatim for tqpair=0x10fbf10 (timestamps 20:47:33.530589 through 20:47:33.531120); duplicates omitted ...]
00:06:42.738 20:47:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:42.738 20:47:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:06:42.738 20:47:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:42.738 20:47:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:42.738 20:47:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:42.738 20:47:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:06:42.738 [2024-11-26 20:47:33.548878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:06:42.738 [2024-11-26 20:47:33.548921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical ASYNC EVENT REQUEST / ABORTED - SQ DELETION pairs for qid:0 cid:1 through cid:3 omitted ...]
00:06:42.739 [2024-11-26 20:47:33.549027] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1983a50 is same with the state(6) to be set
00:06:42.739 [2024-11-26 20:47:33.549124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:42.739 [2024-11-26 20:47:33.549146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:42.739 [2024-11-26 20:47:33.549170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:42.739 [2024-11-26 20:47:33.549186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE / ABORTED - SQ DELETION pairs for sqid:1 cid:1 through cid:62 (lba 82048 through 89856, len:128) omitted ...]
00:06:42.740 [2024-11-26 20:47:33.552297] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:06:42.740 task offset: 81792 on job bdev=Nvme0n1 fails
00:06:42.740
00:06:42.740 Latency(us)
00:06:42.740 [2024-11-26T19:47:33.678Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:42.740 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:06:42.740 Job: Nvme0n1 ended in about 0.42 seconds with error
00:06:42.740 Verification LBA range: start 0x0 length 0x400
00:06:42.740 Nvme0n1 : 0.42 1526.41 95.40 152.88 0.00 37063.18 2609.30 34952.53
00:06:42.740 [2024-11-26T19:47:33.678Z] ===================================================================================================================
00:06:42.740 [2024-11-26T19:47:33.678Z] Total : 1526.41 95.40 152.88 0.00 37063.18 2609.30 34952.53
00:06:42.740 [2024-11-26 20:47:33.554191] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:06:42.740 [2024-11-26 20:47:33.554223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1983a50 (9): Bad file descriptor
00:06:42.999 [2024-11-26 20:47:33.686873] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful.
00:06:43.933 20:47:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3870004
00:06:43.933 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3870004) - No such process
00:06:43.933 20:47:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true
00:06:43.933 20:47:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:06:43.933 20:47:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:06:43.933 20:47:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:06:43.933 20:47:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
00:06:43.933 20:47:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
00:06:43.933 20:47:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:06:43.933 20:47:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:06:43.933 {
00:06:43.933 "params": {
00:06:43.933 "name": "Nvme$subsystem",
00:06:43.933 "trtype": "$TEST_TRANSPORT",
00:06:43.933 "traddr": "$NVMF_FIRST_TARGET_IP",
00:06:43.933 "adrfam": "ipv4",
00:06:43.933 "trsvcid": "$NVMF_PORT",
00:06:43.933 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:06:43.933 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:06:43.933 "hdgst": ${hdgst:-false},
00:06:43.933 "ddgst": ${ddgst:-false}
00:06:43.933 },
00:06:43.933 "method": "bdev_nvme_attach_controller"
00:06:43.933 }
00:06:43.933 EOF
00:06:43.933 )")
00:06:43.933 20:47:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat
00:06:43.933 20:47:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq .
00:06:43.933 20:47:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=,
00:06:43.933 20:47:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:06:43.933 "params": {
00:06:43.933 "name": "Nvme0",
00:06:43.933 "trtype": "tcp",
00:06:43.933 "traddr": "10.0.0.2",
00:06:43.933 "adrfam": "ipv4",
00:06:43.933 "trsvcid": "4420",
00:06:43.933 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:06:43.933 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:06:43.933 "hdgst": false,
00:06:43.933 "ddgst": false
00:06:43.933 },
00:06:43.933 "method": "bdev_nvme_attach_controller"
00:06:43.933 }'
00:06:43.933 [2024-11-26 20:47:34.596396] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization...
00:06:43.933 [2024-11-26 20:47:34.596478] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3870164 ]
00:06:43.933 [2024-11-26 20:47:34.665109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:43.933 [2024-11-26 20:47:34.723236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:44.192 Running I/O for 1 seconds...
00:06:45.126 1547.00 IOPS, 96.69 MiB/s 00:06:45.126 Latency(us) 00:06:45.126 [2024-11-26T19:47:36.064Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:45.126 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:45.126 Verification LBA range: start 0x0 length 0x400 00:06:45.126 Nvme0n1 : 1.01 1598.37 99.90 0.00 0.00 39245.62 1808.31 33787.45 00:06:45.126 [2024-11-26T19:47:36.064Z] =================================================================================================================== 00:06:45.126 [2024-11-26T19:47:36.064Z] Total : 1598.37 99.90 0.00 0.00 39245.62 1808.31 33787.45 00:06:45.384 20:47:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:06:45.384 20:47:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:45.384 20:47:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:06:45.384 20:47:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:45.384 20:47:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:06:45.384 20:47:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:45.384 20:47:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:06:45.384 20:47:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:45.384 20:47:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:06:45.384 20:47:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:45.384 20:47:36 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:45.384 rmmod nvme_tcp 00:06:45.384 rmmod nvme_fabrics 00:06:45.384 rmmod nvme_keyring 00:06:45.384 20:47:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:45.384 20:47:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:06:45.384 20:47:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:06:45.384 20:47:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3869835 ']' 00:06:45.384 20:47:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3869835 00:06:45.384 20:47:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 3869835 ']' 00:06:45.384 20:47:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 3869835 00:06:45.384 20:47:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:06:45.384 20:47:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:45.384 20:47:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3869835 00:06:45.384 20:47:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:45.384 20:47:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:45.384 20:47:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3869835' 00:06:45.384 killing process with pid 3869835 00:06:45.384 20:47:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 3869835 00:06:45.384 20:47:36 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 3869835 00:06:45.644 [2024-11-26 20:47:36.472276] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:45.644 20:47:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:45.644 20:47:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:45.644 20:47:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:45.644 20:47:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:06:45.644 20:47:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:06:45.644 20:47:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:45.644 20:47:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:06:45.644 20:47:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:45.644 20:47:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:45.644 20:47:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:45.644 20:47:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:45.644 20:47:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:48.182 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:48.182 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:48.182 00:06:48.182 real 0m8.648s 00:06:48.182 user 0m19.599s 
00:06:48.182 sys 0m2.683s 00:06:48.182 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:48.182 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:48.182 ************************************ 00:06:48.182 END TEST nvmf_host_management 00:06:48.182 ************************************ 00:06:48.182 20:47:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:48.182 20:47:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:48.182 20:47:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.182 20:47:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:48.182 ************************************ 00:06:48.182 START TEST nvmf_lvol 00:06:48.182 ************************************ 00:06:48.182 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:48.182 * Looking for test storage... 
00:06:48.182 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:48.182 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:48.182 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:06:48.182 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:48.182 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:48.182 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:48.182 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:48.182 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:48.182 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:48.182 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:48.182 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:48.182 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:48.182 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:48.182 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:48.182 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:48.182 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:48.182 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:48.182 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:48.182 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:48.182 20:47:38 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:48.182 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:48.182 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:48.182 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:48.182 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:48.182 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:48.182 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:48.182 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:48.182 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:48.182 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:48.182 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:48.182 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:48.182 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:48.182 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:48.182 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:48.182 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:48.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.182 --rc genhtml_branch_coverage=1 00:06:48.182 --rc genhtml_function_coverage=1 00:06:48.182 --rc genhtml_legend=1 00:06:48.182 --rc geninfo_all_blocks=1 00:06:48.182 --rc geninfo_unexecuted_blocks=1 
00:06:48.182 00:06:48.182 ' 00:06:48.182 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:48.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.183 --rc genhtml_branch_coverage=1 00:06:48.183 --rc genhtml_function_coverage=1 00:06:48.183 --rc genhtml_legend=1 00:06:48.183 --rc geninfo_all_blocks=1 00:06:48.183 --rc geninfo_unexecuted_blocks=1 00:06:48.183 00:06:48.183 ' 00:06:48.183 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:48.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.183 --rc genhtml_branch_coverage=1 00:06:48.183 --rc genhtml_function_coverage=1 00:06:48.183 --rc genhtml_legend=1 00:06:48.183 --rc geninfo_all_blocks=1 00:06:48.183 --rc geninfo_unexecuted_blocks=1 00:06:48.183 00:06:48.183 ' 00:06:48.183 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:48.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.183 --rc genhtml_branch_coverage=1 00:06:48.183 --rc genhtml_function_coverage=1 00:06:48.183 --rc genhtml_legend=1 00:06:48.183 --rc geninfo_all_blocks=1 00:06:48.183 --rc geninfo_unexecuted_blocks=1 00:06:48.183 00:06:48.183 ' 00:06:48.183 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:48.183 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:48.183 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:48.183 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:48.183 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:48.183 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:48.183 20:47:38 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:48.183 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:48.183 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:48.183 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:48.183 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:48.183 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:48.183 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:48.183 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:48.183 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:48.183 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:48.183 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:48.183 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:48.183 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:48.183 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:48.183 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:48.183 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:48.183 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:48.183 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.183 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.183 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.183 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:48.183 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.183 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:48.183 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:48.183 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:48.183 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:48.183 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:48.183 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:48.183 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:48.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:48.183 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:48.183 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:48.183 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:48.183 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:48.183 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:48.183 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:06:48.183 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:48.183 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:48.183 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:48.183 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:48.183 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:48.183 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:48.183 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:48.184 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:48.184 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:48.184 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:48.184 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:48.184 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:48.184 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:48.184 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:06:48.184 20:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:50.166 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:50.166 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:06:50.166 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:50.166 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:50.166 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:50.166 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:50.166 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:50.166 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:06:50.166 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:50.166 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:06:50.166 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:06:50.166 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:06:50.166 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:06:50.166 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:06:50.166 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:06:50.166 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:50.166 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:50.167 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:50.167 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:50.167 
20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:50.167 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:50.167 20:47:40 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:50.167 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:50.167 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:50.167 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.301 ms 00:06:50.167 00:06:50.167 --- 10.0.0.2 ping statistics --- 00:06:50.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:50.167 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:06:50.167 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:50.167 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:50.167 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:06:50.167 00:06:50.168 --- 10.0.0.1 ping statistics --- 00:06:50.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:50.168 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:06:50.168 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:50.168 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:06:50.168 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:50.168 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:50.168 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:50.168 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:50.168 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:50.168 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:50.168 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:50.168 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:50.168 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:50.168 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:06:50.168 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:50.168 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3872380 00:06:50.168 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:50.168 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3872380 00:06:50.168 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 3872380 ']' 00:06:50.168 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.168 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:50.168 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.168 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:50.168 20:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:50.168 [2024-11-26 20:47:40.956312] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:06:50.168 [2024-11-26 20:47:40.956409] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:50.168 [2024-11-26 20:47:41.037065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:50.446 [2024-11-26 20:47:41.099854] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:50.446 [2024-11-26 20:47:41.099919] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:50.446 [2024-11-26 20:47:41.099945] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:50.446 [2024-11-26 20:47:41.099959] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:50.446 [2024-11-26 20:47:41.099970] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:50.446 [2024-11-26 20:47:41.101588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.446 [2024-11-26 20:47:41.101661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:50.446 [2024-11-26 20:47:41.101665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.446 20:47:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:50.446 20:47:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:06:50.446 20:47:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:50.446 20:47:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:50.446 20:47:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:50.446 20:47:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:50.446 20:47:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:50.704 [2024-11-26 20:47:41.499188] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:50.704 20:47:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:50.962 20:47:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:50.962 20:47:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:51.220 20:47:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:51.220 20:47:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:51.478 20:47:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:52.043 20:47:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=b6cb2763-69df-4e72-b06d-4586550103d2 00:06:52.043 20:47:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b6cb2763-69df-4e72-b06d-4586550103d2 lvol 20 00:06:52.043 20:47:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=723cc96f-f899-4475-ae19-2e861897dd98 00:06:52.043 20:47:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:52.607 20:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 723cc96f-f899-4475-ae19-2e861897dd98 00:06:52.607 20:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:52.865 [2024-11-26 20:47:43.765234] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:52.865 20:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:53.122 20:47:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3872810 00:06:53.122 20:47:44 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:53.122 20:47:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:54.496 20:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 723cc96f-f899-4475-ae19-2e861897dd98 MY_SNAPSHOT 00:06:54.496 20:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=6cf38cb5-cd12-4938-b332-08e8f395eebf 00:06:54.496 20:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 723cc96f-f899-4475-ae19-2e861897dd98 30 00:06:54.754 20:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 6cf38cb5-cd12-4938-b332-08e8f395eebf MY_CLONE 00:06:55.320 20:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=c12d0b21-3c0c-4c10-8767-5c1a7474f8e9 00:06:55.320 20:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate c12d0b21-3c0c-4c10-8767-5c1a7474f8e9 00:06:55.886 20:47:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3872810 00:07:03.991 Initializing NVMe Controllers 00:07:03.991 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:03.991 Controller IO queue size 128, less than required. 00:07:03.991 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:07:03.991 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:03.991 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:03.991 Initialization complete. Launching workers. 00:07:03.991 ======================================================== 00:07:03.991 Latency(us) 00:07:03.991 Device Information : IOPS MiB/s Average min max 00:07:03.991 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10539.80 41.17 12144.96 2258.04 58803.02 00:07:03.991 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10466.10 40.88 12232.13 2147.16 53306.06 00:07:03.991 ======================================================== 00:07:03.991 Total : 21005.90 82.05 12188.39 2147.16 58803.02 00:07:03.991 00:07:03.991 20:47:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:03.991 20:47:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 723cc96f-f899-4475-ae19-2e861897dd98 00:07:04.249 20:47:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b6cb2763-69df-4e72-b06d-4586550103d2 00:07:04.507 20:47:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:04.507 20:47:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:04.507 20:47:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:04.507 20:47:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:04.507 20:47:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:04.507 20:47:55 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:04.507 20:47:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:04.507 20:47:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:04.507 20:47:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:04.507 rmmod nvme_tcp 00:07:04.507 rmmod nvme_fabrics 00:07:04.507 rmmod nvme_keyring 00:07:04.507 20:47:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:04.507 20:47:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:04.507 20:47:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:04.507 20:47:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3872380 ']' 00:07:04.507 20:47:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3872380 00:07:04.507 20:47:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 3872380 ']' 00:07:04.507 20:47:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 3872380 00:07:04.507 20:47:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:07:04.507 20:47:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:04.507 20:47:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3872380 00:07:04.507 20:47:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:04.507 20:47:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:04.507 20:47:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3872380' 00:07:04.507 killing process with pid 3872380 00:07:04.507 20:47:55 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 3872380 00:07:04.507 20:47:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 3872380 00:07:04.765 20:47:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:04.765 20:47:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:04.765 20:47:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:04.765 20:47:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:04.765 20:47:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:07:04.765 20:47:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:04.766 20:47:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:07:05.025 20:47:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:05.025 20:47:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:05.025 20:47:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:05.025 20:47:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:05.025 20:47:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:06.929 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:06.929 00:07:06.929 real 0m19.128s 00:07:06.929 user 1m5.927s 00:07:06.929 sys 0m5.223s 00:07:06.929 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.929 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:06.929 ************************************ 00:07:06.929 END TEST 
nvmf_lvol 00:07:06.929 ************************************ 00:07:06.929 20:47:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:06.929 20:47:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:06.929 20:47:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.929 20:47:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:06.929 ************************************ 00:07:06.929 START TEST nvmf_lvs_grow 00:07:06.929 ************************************ 00:07:06.929 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:06.929 * Looking for test storage... 00:07:06.929 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:06.929 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:06.929 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:07:06.929 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:07.188 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:07.188 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:07.188 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:07.188 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:07.188 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:07.188 20:47:57 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:07.188 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:07.188 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:07.188 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:07.188 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:07.188 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:07.188 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:07.188 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:07.188 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:07.188 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:07.188 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:07.188 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:07.188 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:07.188 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:07.188 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:07.188 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:07.188 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:07.188 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:07.188 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:07.188 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:07.188 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:07.188 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:07.188 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:07.188 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:07.189 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:07.189 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:07.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.189 --rc genhtml_branch_coverage=1 00:07:07.189 --rc genhtml_function_coverage=1 00:07:07.189 --rc genhtml_legend=1 00:07:07.189 --rc geninfo_all_blocks=1 00:07:07.189 --rc geninfo_unexecuted_blocks=1 00:07:07.189 00:07:07.189 ' 
00:07:07.189 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:07.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.189 --rc genhtml_branch_coverage=1 00:07:07.189 --rc genhtml_function_coverage=1 00:07:07.189 --rc genhtml_legend=1 00:07:07.189 --rc geninfo_all_blocks=1 00:07:07.189 --rc geninfo_unexecuted_blocks=1 00:07:07.189 00:07:07.189 ' 00:07:07.189 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:07.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.189 --rc genhtml_branch_coverage=1 00:07:07.189 --rc genhtml_function_coverage=1 00:07:07.189 --rc genhtml_legend=1 00:07:07.189 --rc geninfo_all_blocks=1 00:07:07.189 --rc geninfo_unexecuted_blocks=1 00:07:07.189 00:07:07.189 ' 00:07:07.189 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:07.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.189 --rc genhtml_branch_coverage=1 00:07:07.189 --rc genhtml_function_coverage=1 00:07:07.189 --rc genhtml_legend=1 00:07:07.189 --rc geninfo_all_blocks=1 00:07:07.189 --rc geninfo_unexecuted_blocks=1 00:07:07.189 00:07:07.189 ' 00:07:07.189 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:07.189 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:07.189 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:07.189 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:07.189 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:07.189 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:07.189 20:47:57 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:07.189 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:07.189 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:07.189 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:07.189 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:07.189 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:07.189 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:07.189 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:07.189 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:07.189 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:07.189 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:07.189 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:07.189 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:07.189 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:07.189 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:07.189 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:07.189 
20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:07.189 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.189 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.189 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.189 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:07.189 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.189 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:07.189 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:07.189 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:07.189 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:07.189 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:07.189 20:47:57 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:07.189 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:07.189 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:07.189 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:07.189 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:07.189 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:07.189 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:07.189 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:07.189 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:07.189 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:07.189 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:07.189 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:07.189 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:07.189 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:07.189 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:07.189 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:07.189 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:07.189 
20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:07.189 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:07.189 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:07:07.189 20:47:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:09.091 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:09.091 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:07:09.091 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:09.091 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:09.091 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:09.091 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:09.091 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:09.091 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:07:09.091 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:09.091 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:07:09.091 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:07:09.091 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:07:09.091 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:07:09.091 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:07:09.091 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:07:09.091 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:09.091 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:09.091 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:09.091 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:09.092 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:09.092 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:09.092 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:09.092 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:09.092 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:09.092 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:09.092 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:09.092 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:09.092 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:09.092 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:09.092 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:09.092 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:09.092 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:09.092 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:09.092 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:09.092 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:09.092 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:09.092 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:09.092 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:09.092 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:09.092 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:09.092 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:09.092 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:09.092 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:09.092 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:09.092 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:09.092 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:09.092 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:09.092 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:09.092 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:09.092 
20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:09.092 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:09.092 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:09.092 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:09.092 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:09.092 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:09.092 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:09.092 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:09.092 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:09.092 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:09.092 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:09.092 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:09.092 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:09.092 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:09.092 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:09.092 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:09.092 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:09.092 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:07:09.092 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:09.092 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:09.092 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:09.092 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:09.092 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:09.092 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:09.092 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:07:09.092 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:09.092 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:09.092 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:09.092 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:09.092 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:09.092 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:09.092 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:09.350 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:09.350 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:09.350 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:09.350 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:09.350 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:09.350 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:09.350 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:09.350 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:09.350 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:09.351 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:09.351 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:09.351 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:09.351 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:09.351 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:09.351 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:09.351 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:09.351 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:09.351 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:09.351 20:48:00 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:09.351 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:09.351 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:07:09.351 00:07:09.351 --- 10.0.0.2 ping statistics --- 00:07:09.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:09.351 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:07:09.351 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:09.351 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:09.351 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:07:09.351 00:07:09.351 --- 10.0.0.1 ping statistics --- 00:07:09.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:09.351 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:07:09.351 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:09.351 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:07:09.351 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:09.351 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:09.351 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:09.351 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:09.351 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:09.351 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:09.351 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:09.351 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:07:09.351 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:09.351 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:09.351 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:09.351 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3876117 00:07:09.351 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:09.351 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3876117 00:07:09.351 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 3876117 ']' 00:07:09.351 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.351 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:09.351 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.351 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:09.351 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:09.351 [2024-11-26 20:48:00.243509] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:07:09.351 [2024-11-26 20:48:00.243592] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:09.609 [2024-11-26 20:48:00.326484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.609 [2024-11-26 20:48:00.388568] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:09.609 [2024-11-26 20:48:00.388632] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:09.609 [2024-11-26 20:48:00.388656] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:09.609 [2024-11-26 20:48:00.388668] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:09.609 [2024-11-26 20:48:00.388678] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:09.609 [2024-11-26 20:48:00.389288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.609 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:09.609 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:07:09.609 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:09.609 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:09.609 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:09.609 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:09.609 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:09.867 [2024-11-26 20:48:00.799601] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:10.125 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:10.125 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:10.125 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:10.125 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:10.125 ************************************ 00:07:10.125 START TEST lvs_grow_clean 00:07:10.125 ************************************ 00:07:10.125 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:07:10.125 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:07:10.125 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:10.125 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:10.125 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:10.125 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:10.125 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:10.125 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:10.125 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:10.125 20:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:10.383 20:48:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:10.383 20:48:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:10.640 20:48:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=780923ff-4289-46a3-9042-ddb8c0e1b074 00:07:10.641 20:48:01 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 780923ff-4289-46a3-9042-ddb8c0e1b074 00:07:10.641 20:48:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:10.898 20:48:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:10.898 20:48:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:10.898 20:48:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 780923ff-4289-46a3-9042-ddb8c0e1b074 lvol 150 00:07:11.156 20:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=403ecf0d-378c-4a3d-a425-86990bc73107 00:07:11.156 20:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:11.156 20:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:11.413 [2024-11-26 20:48:02.267199] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:11.413 [2024-11-26 20:48:02.267292] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:11.413 true 00:07:11.413 20:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 780923ff-4289-46a3-9042-ddb8c0e1b074 00:07:11.413 20:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:11.670 20:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:11.670 20:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:11.928 20:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 403ecf0d-378c-4a3d-a425-86990bc73107 00:07:12.493 20:48:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:12.493 [2024-11-26 20:48:03.374595] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:12.493 20:48:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:12.751 20:48:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3876648 00:07:12.751 20:48:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:12.751 20:48:03 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:12.751 20:48:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3876648 /var/tmp/bdevperf.sock 00:07:12.751 20:48:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 3876648 ']' 00:07:12.751 20:48:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:12.751 20:48:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:12.751 20:48:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:12.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:12.751 20:48:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:12.751 20:48:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:13.009 [2024-11-26 20:48:03.725249] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:07:13.009 [2024-11-26 20:48:03.725324] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3876648 ] 00:07:13.009 [2024-11-26 20:48:03.797500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.009 [2024-11-26 20:48:03.860555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.267 20:48:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:13.267 20:48:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:07:13.267 20:48:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:13.525 Nvme0n1 00:07:13.525 20:48:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:13.782 [ 00:07:13.782 { 00:07:13.782 "name": "Nvme0n1", 00:07:13.782 "aliases": [ 00:07:13.782 "403ecf0d-378c-4a3d-a425-86990bc73107" 00:07:13.782 ], 00:07:13.782 "product_name": "NVMe disk", 00:07:13.782 "block_size": 4096, 00:07:13.782 "num_blocks": 38912, 00:07:13.782 "uuid": "403ecf0d-378c-4a3d-a425-86990bc73107", 00:07:13.782 "numa_id": 0, 00:07:13.782 "assigned_rate_limits": { 00:07:13.782 "rw_ios_per_sec": 0, 00:07:13.782 "rw_mbytes_per_sec": 0, 00:07:13.782 "r_mbytes_per_sec": 0, 00:07:13.782 "w_mbytes_per_sec": 0 00:07:13.782 }, 00:07:13.782 "claimed": false, 00:07:13.782 "zoned": false, 00:07:13.782 "supported_io_types": { 00:07:13.782 "read": true, 
00:07:13.782 "write": true, 00:07:13.782 "unmap": true, 00:07:13.782 "flush": true, 00:07:13.782 "reset": true, 00:07:13.782 "nvme_admin": true, 00:07:13.782 "nvme_io": true, 00:07:13.782 "nvme_io_md": false, 00:07:13.782 "write_zeroes": true, 00:07:13.782 "zcopy": false, 00:07:13.782 "get_zone_info": false, 00:07:13.782 "zone_management": false, 00:07:13.782 "zone_append": false, 00:07:13.782 "compare": true, 00:07:13.782 "compare_and_write": true, 00:07:13.782 "abort": true, 00:07:13.782 "seek_hole": false, 00:07:13.782 "seek_data": false, 00:07:13.782 "copy": true, 00:07:13.782 "nvme_iov_md": false 00:07:13.782 }, 00:07:13.782 "memory_domains": [ 00:07:13.782 { 00:07:13.782 "dma_device_id": "system", 00:07:13.782 "dma_device_type": 1 00:07:13.782 } 00:07:13.782 ], 00:07:13.782 "driver_specific": { 00:07:13.782 "nvme": [ 00:07:13.782 { 00:07:13.782 "trid": { 00:07:13.782 "trtype": "TCP", 00:07:13.782 "adrfam": "IPv4", 00:07:13.782 "traddr": "10.0.0.2", 00:07:13.782 "trsvcid": "4420", 00:07:13.782 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:13.782 }, 00:07:13.782 "ctrlr_data": { 00:07:13.782 "cntlid": 1, 00:07:13.782 "vendor_id": "0x8086", 00:07:13.782 "model_number": "SPDK bdev Controller", 00:07:13.782 "serial_number": "SPDK0", 00:07:13.782 "firmware_revision": "25.01", 00:07:13.782 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:13.782 "oacs": { 00:07:13.782 "security": 0, 00:07:13.782 "format": 0, 00:07:13.782 "firmware": 0, 00:07:13.782 "ns_manage": 0 00:07:13.782 }, 00:07:13.782 "multi_ctrlr": true, 00:07:13.782 "ana_reporting": false 00:07:13.782 }, 00:07:13.782 "vs": { 00:07:13.782 "nvme_version": "1.3" 00:07:13.782 }, 00:07:13.782 "ns_data": { 00:07:13.782 "id": 1, 00:07:13.782 "can_share": true 00:07:13.782 } 00:07:13.782 } 00:07:13.782 ], 00:07:13.782 "mp_policy": "active_passive" 00:07:13.782 } 00:07:13.782 } 00:07:13.782 ] 00:07:13.782 20:48:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=3876778 00:07:13.782 20:48:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:13.782 20:48:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:14.041 Running I/O for 10 seconds... 00:07:14.974 Latency(us) 00:07:14.974 [2024-11-26T19:48:05.912Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:14.974 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:14.974 Nvme0n1 : 1.00 12827.00 50.11 0.00 0.00 0.00 0.00 0.00 00:07:14.974 [2024-11-26T19:48:05.912Z] =================================================================================================================== 00:07:14.974 [2024-11-26T19:48:05.912Z] Total : 12827.00 50.11 0.00 0.00 0.00 0.00 0.00 00:07:14.974 00:07:15.909 20:48:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 780923ff-4289-46a3-9042-ddb8c0e1b074 00:07:15.909 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:15.909 Nvme0n1 : 2.00 12890.50 50.35 0.00 0.00 0.00 0.00 0.00 00:07:15.909 [2024-11-26T19:48:06.847Z] =================================================================================================================== 00:07:15.909 [2024-11-26T19:48:06.848Z] Total : 12890.50 50.35 0.00 0.00 0.00 0.00 0.00 00:07:15.910 00:07:16.168 true 00:07:16.168 20:48:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 780923ff-4289-46a3-9042-ddb8c0e1b074 00:07:16.168 20:48:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:07:16.427 20:48:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:16.427 20:48:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:16.427 20:48:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3876778 00:07:16.996 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:16.996 Nvme0n1 : 3.00 12996.33 50.77 0.00 0.00 0.00 0.00 0.00 00:07:16.996 [2024-11-26T19:48:07.934Z] =================================================================================================================== 00:07:16.996 [2024-11-26T19:48:07.934Z] Total : 12996.33 50.77 0.00 0.00 0.00 0.00 0.00 00:07:16.996 00:07:17.930 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:17.930 Nvme0n1 : 4.00 13065.25 51.04 0.00 0.00 0.00 0.00 0.00 00:07:17.930 [2024-11-26T19:48:08.868Z] =================================================================================================================== 00:07:17.930 [2024-11-26T19:48:08.868Z] Total : 13065.25 51.04 0.00 0.00 0.00 0.00 0.00 00:07:17.930 00:07:19.303 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:19.303 Nvme0n1 : 5.00 13131.80 51.30 0.00 0.00 0.00 0.00 0.00 00:07:19.303 [2024-11-26T19:48:10.241Z] =================================================================================================================== 00:07:19.303 [2024-11-26T19:48:10.241Z] Total : 13131.80 51.30 0.00 0.00 0.00 0.00 0.00 00:07:19.303 00:07:20.236 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:20.236 Nvme0n1 : 6.00 13134.00 51.30 0.00 0.00 0.00 0.00 0.00 00:07:20.236 [2024-11-26T19:48:11.174Z] =================================================================================================================== 00:07:20.236 
[2024-11-26T19:48:11.174Z] Total : 13134.00 51.30 0.00 0.00 0.00 0.00 0.00 00:07:20.236 00:07:21.210 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:21.210 Nvme0n1 : 7.00 13171.71 51.45 0.00 0.00 0.00 0.00 0.00 00:07:21.210 [2024-11-26T19:48:12.148Z] =================================================================================================================== 00:07:21.210 [2024-11-26T19:48:12.148Z] Total : 13171.71 51.45 0.00 0.00 0.00 0.00 0.00 00:07:21.210 00:07:22.144 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:22.144 Nvme0n1 : 8.00 13192.12 51.53 0.00 0.00 0.00 0.00 0.00 00:07:22.144 [2024-11-26T19:48:13.082Z] =================================================================================================================== 00:07:22.144 [2024-11-26T19:48:13.082Z] Total : 13192.12 51.53 0.00 0.00 0.00 0.00 0.00 00:07:22.144 00:07:23.080 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:23.080 Nvme0n1 : 9.00 13208.00 51.59 0.00 0.00 0.00 0.00 0.00 00:07:23.080 [2024-11-26T19:48:14.018Z] =================================================================================================================== 00:07:23.080 [2024-11-26T19:48:14.018Z] Total : 13208.00 51.59 0.00 0.00 0.00 0.00 0.00 00:07:23.080 00:07:24.014 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:24.014 Nvme0n1 : 10.00 13222.40 51.65 0.00 0.00 0.00 0.00 0.00 00:07:24.014 [2024-11-26T19:48:14.952Z] =================================================================================================================== 00:07:24.014 [2024-11-26T19:48:14.952Z] Total : 13222.40 51.65 0.00 0.00 0.00 0.00 0.00 00:07:24.014 00:07:24.014 00:07:24.014 Latency(us) 00:07:24.014 [2024-11-26T19:48:14.952Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:24.014 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:07:24.014 Nvme0n1 : 10.00 13227.48 51.67 0.00 0.00 9671.80 7573.05 19903.53 00:07:24.014 [2024-11-26T19:48:14.952Z] =================================================================================================================== 00:07:24.014 [2024-11-26T19:48:14.952Z] Total : 13227.48 51.67 0.00 0.00 9671.80 7573.05 19903.53 00:07:24.014 { 00:07:24.014 "results": [ 00:07:24.014 { 00:07:24.014 "job": "Nvme0n1", 00:07:24.014 "core_mask": "0x2", 00:07:24.014 "workload": "randwrite", 00:07:24.014 "status": "finished", 00:07:24.014 "queue_depth": 128, 00:07:24.014 "io_size": 4096, 00:07:24.014 "runtime": 10.004549, 00:07:24.014 "iops": 13227.482818066062, 00:07:24.014 "mibps": 51.66985475807056, 00:07:24.014 "io_failed": 0, 00:07:24.014 "io_timeout": 0, 00:07:24.014 "avg_latency_us": 9671.798457797202, 00:07:24.014 "min_latency_us": 7573.0488888888885, 00:07:24.014 "max_latency_us": 19903.525925925926 00:07:24.014 } 00:07:24.014 ], 00:07:24.014 "core_count": 1 00:07:24.014 } 00:07:24.014 20:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3876648 00:07:24.014 20:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 3876648 ']' 00:07:24.014 20:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 3876648 00:07:24.014 20:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:07:24.014 20:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:24.014 20:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3876648 00:07:24.014 20:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:24.014 20:48:14 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:24.014 20:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3876648' 00:07:24.014 killing process with pid 3876648 00:07:24.014 20:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 3876648 00:07:24.014 Received shutdown signal, test time was about 10.000000 seconds 00:07:24.014 00:07:24.014 Latency(us) 00:07:24.014 [2024-11-26T19:48:14.952Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:24.014 [2024-11-26T19:48:14.952Z] =================================================================================================================== 00:07:24.014 [2024-11-26T19:48:14.952Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:24.014 20:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 3876648 00:07:24.273 20:48:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:24.531 20:48:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:24.789 20:48:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:24.789 20:48:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 780923ff-4289-46a3-9042-ddb8c0e1b074 00:07:25.048 20:48:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:07:25.048 20:48:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:25.048 20:48:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:25.307 [2024-11-26 20:48:16.226563] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:25.565 20:48:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 780923ff-4289-46a3-9042-ddb8c0e1b074 00:07:25.565 20:48:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:07:25.565 20:48:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 780923ff-4289-46a3-9042-ddb8c0e1b074 00:07:25.565 20:48:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:25.565 20:48:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:25.565 20:48:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:25.565 20:48:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:25.566 20:48:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:25.566 
20:48:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:25.566 20:48:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:25.566 20:48:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:25.566 20:48:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 780923ff-4289-46a3-9042-ddb8c0e1b074 00:07:25.824 request: 00:07:25.824 { 00:07:25.824 "uuid": "780923ff-4289-46a3-9042-ddb8c0e1b074", 00:07:25.824 "method": "bdev_lvol_get_lvstores", 00:07:25.824 "req_id": 1 00:07:25.824 } 00:07:25.824 Got JSON-RPC error response 00:07:25.824 response: 00:07:25.824 { 00:07:25.824 "code": -19, 00:07:25.824 "message": "No such device" 00:07:25.824 } 00:07:25.824 20:48:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:07:25.824 20:48:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:25.824 20:48:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:25.824 20:48:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:25.824 20:48:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:26.082 aio_bdev 00:07:26.082 20:48:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev 403ecf0d-378c-4a3d-a425-86990bc73107 00:07:26.082 20:48:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=403ecf0d-378c-4a3d-a425-86990bc73107 00:07:26.082 20:48:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:26.082 20:48:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:07:26.082 20:48:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:26.082 20:48:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:26.082 20:48:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:26.341 20:48:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 403ecf0d-378c-4a3d-a425-86990bc73107 -t 2000 00:07:26.599 [ 00:07:26.599 { 00:07:26.599 "name": "403ecf0d-378c-4a3d-a425-86990bc73107", 00:07:26.599 "aliases": [ 00:07:26.599 "lvs/lvol" 00:07:26.599 ], 00:07:26.599 "product_name": "Logical Volume", 00:07:26.599 "block_size": 4096, 00:07:26.600 "num_blocks": 38912, 00:07:26.600 "uuid": "403ecf0d-378c-4a3d-a425-86990bc73107", 00:07:26.600 "assigned_rate_limits": { 00:07:26.600 "rw_ios_per_sec": 0, 00:07:26.600 "rw_mbytes_per_sec": 0, 00:07:26.600 "r_mbytes_per_sec": 0, 00:07:26.600 "w_mbytes_per_sec": 0 00:07:26.600 }, 00:07:26.600 "claimed": false, 00:07:26.600 "zoned": false, 00:07:26.600 "supported_io_types": { 00:07:26.600 "read": true, 00:07:26.600 "write": true, 00:07:26.600 "unmap": true, 00:07:26.600 "flush": false, 00:07:26.600 "reset": true, 00:07:26.600 
"nvme_admin": false, 00:07:26.600 "nvme_io": false, 00:07:26.600 "nvme_io_md": false, 00:07:26.600 "write_zeroes": true, 00:07:26.600 "zcopy": false, 00:07:26.600 "get_zone_info": false, 00:07:26.600 "zone_management": false, 00:07:26.600 "zone_append": false, 00:07:26.600 "compare": false, 00:07:26.600 "compare_and_write": false, 00:07:26.600 "abort": false, 00:07:26.600 "seek_hole": true, 00:07:26.600 "seek_data": true, 00:07:26.600 "copy": false, 00:07:26.600 "nvme_iov_md": false 00:07:26.600 }, 00:07:26.600 "driver_specific": { 00:07:26.600 "lvol": { 00:07:26.600 "lvol_store_uuid": "780923ff-4289-46a3-9042-ddb8c0e1b074", 00:07:26.600 "base_bdev": "aio_bdev", 00:07:26.600 "thin_provision": false, 00:07:26.600 "num_allocated_clusters": 38, 00:07:26.600 "snapshot": false, 00:07:26.600 "clone": false, 00:07:26.600 "esnap_clone": false 00:07:26.600 } 00:07:26.600 } 00:07:26.600 } 00:07:26.600 ] 00:07:26.600 20:48:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:07:26.600 20:48:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 780923ff-4289-46a3-9042-ddb8c0e1b074 00:07:26.600 20:48:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:26.858 20:48:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:26.858 20:48:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 780923ff-4289-46a3-9042-ddb8c0e1b074 00:07:26.858 20:48:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:27.116 20:48:17 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:27.116 20:48:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 403ecf0d-378c-4a3d-a425-86990bc73107 00:07:27.374 20:48:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 780923ff-4289-46a3-9042-ddb8c0e1b074 00:07:27.632 20:48:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:27.890 20:48:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:27.890 00:07:27.890 real 0m17.878s 00:07:27.890 user 0m17.368s 00:07:27.890 sys 0m1.879s 00:07:27.890 20:48:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.890 20:48:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:27.890 ************************************ 00:07:27.890 END TEST lvs_grow_clean 00:07:27.890 ************************************ 00:07:27.890 20:48:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:27.890 20:48:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:27.890 20:48:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.890 20:48:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:27.890 ************************************ 
00:07:27.890 START TEST lvs_grow_dirty 00:07:27.890 ************************************ 00:07:27.890 20:48:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:07:27.890 20:48:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:27.890 20:48:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:27.890 20:48:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:27.890 20:48:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:27.890 20:48:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:27.890 20:48:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:27.890 20:48:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:27.890 20:48:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:27.890 20:48:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:28.148 20:48:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:28.148 20:48:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:28.719 20:48:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=8e79c02b-4851-4ace-806c-a883b17850a3 00:07:28.720 20:48:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e79c02b-4851-4ace-806c-a883b17850a3 00:07:28.720 20:48:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:28.720 20:48:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:28.720 20:48:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:28.720 20:48:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8e79c02b-4851-4ace-806c-a883b17850a3 lvol 150 00:07:29.286 20:48:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=49d1d166-733b-44d1-9370-4c713c386eb0 00:07:29.286 20:48:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:29.286 20:48:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:29.286 [2024-11-26 20:48:20.217604] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:07:29.286 [2024-11-26 20:48:20.217710] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:29.286 true 00:07:29.544 20:48:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e79c02b-4851-4ace-806c-a883b17850a3 00:07:29.544 20:48:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:29.803 20:48:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:29.803 20:48:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:30.062 20:48:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 49d1d166-733b-44d1-9370-4c713c386eb0 00:07:30.319 20:48:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:30.578 [2024-11-26 20:48:21.405309] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:30.578 20:48:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:30.835 20:48:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3879363 00:07:30.835 20:48:21 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:30.835 20:48:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:30.835 20:48:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3879363 /var/tmp/bdevperf.sock 00:07:30.835 20:48:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3879363 ']' 00:07:30.835 20:48:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:30.835 20:48:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:30.835 20:48:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:30.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:30.835 20:48:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:30.835 20:48:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:30.835 [2024-11-26 20:48:21.764772] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:07:30.835 [2024-11-26 20:48:21.764845] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3879363 ] 00:07:31.094 [2024-11-26 20:48:21.836465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.094 [2024-11-26 20:48:21.898629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:31.094 20:48:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:31.094 20:48:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:31.094 20:48:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:31.659 Nvme0n1 00:07:31.659 20:48:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:31.918 [ 00:07:31.918 { 00:07:31.918 "name": "Nvme0n1", 00:07:31.918 "aliases": [ 00:07:31.918 "49d1d166-733b-44d1-9370-4c713c386eb0" 00:07:31.918 ], 00:07:31.918 "product_name": "NVMe disk", 00:07:31.918 "block_size": 4096, 00:07:31.918 "num_blocks": 38912, 00:07:31.918 "uuid": "49d1d166-733b-44d1-9370-4c713c386eb0", 00:07:31.918 "numa_id": 0, 00:07:31.918 "assigned_rate_limits": { 00:07:31.918 "rw_ios_per_sec": 0, 00:07:31.918 "rw_mbytes_per_sec": 0, 00:07:31.918 "r_mbytes_per_sec": 0, 00:07:31.918 "w_mbytes_per_sec": 0 00:07:31.918 }, 00:07:31.918 "claimed": false, 00:07:31.918 "zoned": false, 00:07:31.918 "supported_io_types": { 00:07:31.918 "read": true, 
00:07:31.918 "write": true, 00:07:31.918 "unmap": true, 00:07:31.918 "flush": true, 00:07:31.918 "reset": true, 00:07:31.918 "nvme_admin": true, 00:07:31.918 "nvme_io": true, 00:07:31.918 "nvme_io_md": false, 00:07:31.918 "write_zeroes": true, 00:07:31.918 "zcopy": false, 00:07:31.918 "get_zone_info": false, 00:07:31.918 "zone_management": false, 00:07:31.918 "zone_append": false, 00:07:31.918 "compare": true, 00:07:31.918 "compare_and_write": true, 00:07:31.918 "abort": true, 00:07:31.918 "seek_hole": false, 00:07:31.918 "seek_data": false, 00:07:31.918 "copy": true, 00:07:31.918 "nvme_iov_md": false 00:07:31.918 }, 00:07:31.918 "memory_domains": [ 00:07:31.918 { 00:07:31.918 "dma_device_id": "system", 00:07:31.918 "dma_device_type": 1 00:07:31.918 } 00:07:31.918 ], 00:07:31.918 "driver_specific": { 00:07:31.918 "nvme": [ 00:07:31.918 { 00:07:31.918 "trid": { 00:07:31.918 "trtype": "TCP", 00:07:31.918 "adrfam": "IPv4", 00:07:31.918 "traddr": "10.0.0.2", 00:07:31.918 "trsvcid": "4420", 00:07:31.918 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:31.918 }, 00:07:31.918 "ctrlr_data": { 00:07:31.918 "cntlid": 1, 00:07:31.918 "vendor_id": "0x8086", 00:07:31.918 "model_number": "SPDK bdev Controller", 00:07:31.918 "serial_number": "SPDK0", 00:07:31.918 "firmware_revision": "25.01", 00:07:31.918 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:31.918 "oacs": { 00:07:31.918 "security": 0, 00:07:31.918 "format": 0, 00:07:31.918 "firmware": 0, 00:07:31.918 "ns_manage": 0 00:07:31.918 }, 00:07:31.918 "multi_ctrlr": true, 00:07:31.918 "ana_reporting": false 00:07:31.918 }, 00:07:31.918 "vs": { 00:07:31.918 "nvme_version": "1.3" 00:07:31.918 }, 00:07:31.918 "ns_data": { 00:07:31.918 "id": 1, 00:07:31.918 "can_share": true 00:07:31.918 } 00:07:31.918 } 00:07:31.918 ], 00:07:31.918 "mp_policy": "active_passive" 00:07:31.918 } 00:07:31.918 } 00:07:31.918 ] 00:07:31.918 20:48:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=3879498 00:07:31.918 20:48:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:31.918 20:48:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:32.177 Running I/O for 10 seconds... 00:07:33.112 Latency(us) 00:07:33.112 [2024-11-26T19:48:24.050Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:33.112 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:33.112 Nvme0n1 : 1.00 13971.00 54.57 0.00 0.00 0.00 0.00 0.00 00:07:33.112 [2024-11-26T19:48:24.050Z] =================================================================================================================== 00:07:33.112 [2024-11-26T19:48:24.050Z] Total : 13971.00 54.57 0.00 0.00 0.00 0.00 0.00 00:07:33.112 00:07:34.046 20:48:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8e79c02b-4851-4ace-806c-a883b17850a3 00:07:34.047 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:34.047 Nvme0n1 : 2.00 14224.50 55.56 0.00 0.00 0.00 0.00 0.00 00:07:34.047 [2024-11-26T19:48:24.985Z] =================================================================================================================== 00:07:34.047 [2024-11-26T19:48:24.985Z] Total : 14224.50 55.56 0.00 0.00 0.00 0.00 0.00 00:07:34.047 00:07:34.304 true 00:07:34.304 20:48:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e79c02b-4851-4ace-806c-a883b17850a3 00:07:34.304 20:48:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:07:34.563 20:48:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:34.563 20:48:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:34.563 20:48:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3879498 00:07:35.129 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:35.129 Nvme0n1 : 3.00 14309.00 55.89 0.00 0.00 0.00 0.00 0.00 00:07:35.129 [2024-11-26T19:48:26.067Z] =================================================================================================================== 00:07:35.129 [2024-11-26T19:48:26.067Z] Total : 14309.00 55.89 0.00 0.00 0.00 0.00 0.00 00:07:35.129 00:07:36.064 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:36.064 Nvme0n1 : 4.00 14351.25 56.06 0.00 0.00 0.00 0.00 0.00 00:07:36.064 [2024-11-26T19:48:27.002Z] =================================================================================================================== 00:07:36.064 [2024-11-26T19:48:27.002Z] Total : 14351.25 56.06 0.00 0.00 0.00 0.00 0.00 00:07:36.064 00:07:36.997 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:36.997 Nvme0n1 : 5.00 14415.60 56.31 0.00 0.00 0.00 0.00 0.00 00:07:36.997 [2024-11-26T19:48:27.935Z] =================================================================================================================== 00:07:36.997 [2024-11-26T19:48:27.935Z] Total : 14415.60 56.31 0.00 0.00 0.00 0.00 0.00 00:07:36.997 00:07:38.371 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:38.371 Nvme0n1 : 6.00 14489.50 56.60 0.00 0.00 0.00 0.00 0.00 00:07:38.371 [2024-11-26T19:48:29.309Z] =================================================================================================================== 00:07:38.371 
[2024-11-26T19:48:29.309Z] Total : 14489.50 56.60 0.00 0.00 0.00 0.00 0.00 00:07:38.371 00:07:39.304 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:39.304 Nvme0n1 : 7.00 14506.00 56.66 0.00 0.00 0.00 0.00 0.00 00:07:39.304 [2024-11-26T19:48:30.242Z] =================================================================================================================== 00:07:39.304 [2024-11-26T19:48:30.242Z] Total : 14506.00 56.66 0.00 0.00 0.00 0.00 0.00 00:07:39.304 00:07:40.237 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:40.237 Nvme0n1 : 8.00 14534.62 56.78 0.00 0.00 0.00 0.00 0.00 00:07:40.237 [2024-11-26T19:48:31.175Z] =================================================================================================================== 00:07:40.237 [2024-11-26T19:48:31.175Z] Total : 14534.62 56.78 0.00 0.00 0.00 0.00 0.00 00:07:40.237 00:07:41.169 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:41.169 Nvme0n1 : 9.00 14556.89 56.86 0.00 0.00 0.00 0.00 0.00 00:07:41.169 [2024-11-26T19:48:32.107Z] =================================================================================================================== 00:07:41.169 [2024-11-26T19:48:32.107Z] Total : 14556.89 56.86 0.00 0.00 0.00 0.00 0.00 00:07:41.169 00:07:42.100 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:42.100 Nvme0n1 : 10.00 14581.40 56.96 0.00 0.00 0.00 0.00 0.00 00:07:42.100 [2024-11-26T19:48:33.038Z] =================================================================================================================== 00:07:42.100 [2024-11-26T19:48:33.038Z] Total : 14581.40 56.96 0.00 0.00 0.00 0.00 0.00 00:07:42.100 00:07:42.100 00:07:42.100 Latency(us) 00:07:42.100 [2024-11-26T19:48:33.038Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:42.100 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:07:42.101 Nvme0n1 : 10.00 14581.32 56.96 0.00 0.00 8772.98 2318.03 17185.00 00:07:42.101 [2024-11-26T19:48:33.039Z] =================================================================================================================== 00:07:42.101 [2024-11-26T19:48:33.039Z] Total : 14581.32 56.96 0.00 0.00 8772.98 2318.03 17185.00 00:07:42.101 { 00:07:42.101 "results": [ 00:07:42.101 { 00:07:42.101 "job": "Nvme0n1", 00:07:42.101 "core_mask": "0x2", 00:07:42.101 "workload": "randwrite", 00:07:42.101 "status": "finished", 00:07:42.101 "queue_depth": 128, 00:07:42.101 "io_size": 4096, 00:07:42.101 "runtime": 10.004447, 00:07:42.101 "iops": 14581.31568891314, 00:07:42.101 "mibps": 56.958264409816955, 00:07:42.101 "io_failed": 0, 00:07:42.101 "io_timeout": 0, 00:07:42.101 "avg_latency_us": 8772.98144550012, 00:07:42.101 "min_latency_us": 2318.0325925925927, 00:07:42.101 "max_latency_us": 17184.995555555557 00:07:42.101 } 00:07:42.101 ], 00:07:42.101 "core_count": 1 00:07:42.101 } 00:07:42.101 20:48:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3879363 00:07:42.101 20:48:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 3879363 ']' 00:07:42.101 20:48:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 3879363 00:07:42.101 20:48:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:07:42.101 20:48:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:42.101 20:48:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3879363 00:07:42.101 20:48:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:42.101 20:48:32 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:42.101 20:48:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3879363' 00:07:42.101 killing process with pid 3879363 00:07:42.101 20:48:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 3879363 00:07:42.101 Received shutdown signal, test time was about 10.000000 seconds 00:07:42.101 00:07:42.101 Latency(us) 00:07:42.101 [2024-11-26T19:48:33.039Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:42.101 [2024-11-26T19:48:33.039Z] =================================================================================================================== 00:07:42.101 [2024-11-26T19:48:33.039Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:42.101 20:48:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 3879363 00:07:42.358 20:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:42.614 20:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:42.871 20:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e79c02b-4851-4ace-806c-a883b17850a3 00:07:42.871 20:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:43.128 20:48:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:07:43.128 20:48:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:43.128 20:48:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3876117 00:07:43.128 20:48:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3876117 00:07:43.386 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3876117 Killed "${NVMF_APP[@]}" "$@" 00:07:43.386 20:48:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:43.386 20:48:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:43.386 20:48:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:43.386 20:48:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:43.386 20:48:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:43.386 20:48:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:43.386 20:48:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3880835 00:07:43.386 20:48:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3880835 00:07:43.386 20:48:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3880835 ']' 00:07:43.386 20:48:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.387 20:48:34 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:43.387 20:48:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:43.387 20:48:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:43.387 20:48:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:43.387 [2024-11-26 20:48:34.124401] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:07:43.387 [2024-11-26 20:48:34.124497] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:43.387 [2024-11-26 20:48:34.197808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.387 [2024-11-26 20:48:34.255098] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:43.387 [2024-11-26 20:48:34.255168] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:43.387 [2024-11-26 20:48:34.255181] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:43.387 [2024-11-26 20:48:34.255192] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:43.387 [2024-11-26 20:48:34.255202] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:43.387 [2024-11-26 20:48:34.255807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.644 20:48:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:43.644 20:48:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:43.644 20:48:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:43.644 20:48:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:43.644 20:48:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:43.644 20:48:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:43.644 20:48:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:43.902 [2024-11-26 20:48:34.654244] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:43.902 [2024-11-26 20:48:34.654396] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:43.902 [2024-11-26 20:48:34.654465] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:43.902 20:48:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:43.902 20:48:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 49d1d166-733b-44d1-9370-4c713c386eb0 00:07:43.902 20:48:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=49d1d166-733b-44d1-9370-4c713c386eb0 
00:07:43.902 20:48:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:43.902 20:48:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:43.902 20:48:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:43.902 20:48:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:43.902 20:48:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:44.160 20:48:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 49d1d166-733b-44d1-9370-4c713c386eb0 -t 2000 00:07:44.418 [ 00:07:44.418 { 00:07:44.418 "name": "49d1d166-733b-44d1-9370-4c713c386eb0", 00:07:44.418 "aliases": [ 00:07:44.418 "lvs/lvol" 00:07:44.418 ], 00:07:44.418 "product_name": "Logical Volume", 00:07:44.418 "block_size": 4096, 00:07:44.418 "num_blocks": 38912, 00:07:44.418 "uuid": "49d1d166-733b-44d1-9370-4c713c386eb0", 00:07:44.418 "assigned_rate_limits": { 00:07:44.418 "rw_ios_per_sec": 0, 00:07:44.418 "rw_mbytes_per_sec": 0, 00:07:44.418 "r_mbytes_per_sec": 0, 00:07:44.418 "w_mbytes_per_sec": 0 00:07:44.418 }, 00:07:44.418 "claimed": false, 00:07:44.418 "zoned": false, 00:07:44.418 "supported_io_types": { 00:07:44.418 "read": true, 00:07:44.418 "write": true, 00:07:44.418 "unmap": true, 00:07:44.418 "flush": false, 00:07:44.418 "reset": true, 00:07:44.418 "nvme_admin": false, 00:07:44.418 "nvme_io": false, 00:07:44.418 "nvme_io_md": false, 00:07:44.418 "write_zeroes": true, 00:07:44.418 "zcopy": false, 00:07:44.418 "get_zone_info": false, 00:07:44.418 "zone_management": false, 00:07:44.418 "zone_append": 
false, 00:07:44.418 "compare": false, 00:07:44.418 "compare_and_write": false, 00:07:44.418 "abort": false, 00:07:44.418 "seek_hole": true, 00:07:44.418 "seek_data": true, 00:07:44.418 "copy": false, 00:07:44.418 "nvme_iov_md": false 00:07:44.418 }, 00:07:44.418 "driver_specific": { 00:07:44.418 "lvol": { 00:07:44.418 "lvol_store_uuid": "8e79c02b-4851-4ace-806c-a883b17850a3", 00:07:44.418 "base_bdev": "aio_bdev", 00:07:44.418 "thin_provision": false, 00:07:44.418 "num_allocated_clusters": 38, 00:07:44.418 "snapshot": false, 00:07:44.418 "clone": false, 00:07:44.418 "esnap_clone": false 00:07:44.418 } 00:07:44.418 } 00:07:44.418 } 00:07:44.418 ] 00:07:44.418 20:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:44.418 20:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e79c02b-4851-4ace-806c-a883b17850a3 00:07:44.418 20:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:44.677 20:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:44.677 20:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e79c02b-4851-4ace-806c-a883b17850a3 00:07:44.677 20:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:44.935 20:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:44.935 20:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:07:45.200 [2024-11-26 20:48:36.131911] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:45.458 20:48:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e79c02b-4851-4ace-806c-a883b17850a3 00:07:45.458 20:48:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:07:45.458 20:48:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e79c02b-4851-4ace-806c-a883b17850a3 00:07:45.458 20:48:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:45.458 20:48:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:45.458 20:48:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:45.458 20:48:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:45.458 20:48:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:45.458 20:48:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:45.458 20:48:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:45.458 20:48:36 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:45.458 20:48:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e79c02b-4851-4ace-806c-a883b17850a3 00:07:45.717 request: 00:07:45.717 { 00:07:45.717 "uuid": "8e79c02b-4851-4ace-806c-a883b17850a3", 00:07:45.717 "method": "bdev_lvol_get_lvstores", 00:07:45.717 "req_id": 1 00:07:45.717 } 00:07:45.717 Got JSON-RPC error response 00:07:45.717 response: 00:07:45.717 { 00:07:45.717 "code": -19, 00:07:45.717 "message": "No such device" 00:07:45.717 } 00:07:45.717 20:48:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:07:45.717 20:48:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:45.717 20:48:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:45.717 20:48:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:45.717 20:48:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:45.975 aio_bdev 00:07:45.975 20:48:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 49d1d166-733b-44d1-9370-4c713c386eb0 00:07:45.975 20:48:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=49d1d166-733b-44d1-9370-4c713c386eb0 00:07:45.975 20:48:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:45.975 20:48:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:45.975 20:48:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:45.975 20:48:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:45.975 20:48:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:46.233 20:48:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 49d1d166-733b-44d1-9370-4c713c386eb0 -t 2000 00:07:46.491 [ 00:07:46.491 { 00:07:46.491 "name": "49d1d166-733b-44d1-9370-4c713c386eb0", 00:07:46.491 "aliases": [ 00:07:46.491 "lvs/lvol" 00:07:46.491 ], 00:07:46.491 "product_name": "Logical Volume", 00:07:46.491 "block_size": 4096, 00:07:46.491 "num_blocks": 38912, 00:07:46.491 "uuid": "49d1d166-733b-44d1-9370-4c713c386eb0", 00:07:46.491 "assigned_rate_limits": { 00:07:46.491 "rw_ios_per_sec": 0, 00:07:46.491 "rw_mbytes_per_sec": 0, 00:07:46.491 "r_mbytes_per_sec": 0, 00:07:46.491 "w_mbytes_per_sec": 0 00:07:46.491 }, 00:07:46.491 "claimed": false, 00:07:46.491 "zoned": false, 00:07:46.491 "supported_io_types": { 00:07:46.491 "read": true, 00:07:46.491 "write": true, 00:07:46.491 "unmap": true, 00:07:46.491 "flush": false, 00:07:46.491 "reset": true, 00:07:46.491 "nvme_admin": false, 00:07:46.491 "nvme_io": false, 00:07:46.491 "nvme_io_md": false, 00:07:46.491 "write_zeroes": true, 00:07:46.491 "zcopy": false, 00:07:46.491 "get_zone_info": false, 00:07:46.491 "zone_management": false, 00:07:46.491 "zone_append": false, 00:07:46.491 "compare": false, 00:07:46.491 "compare_and_write": false, 
00:07:46.491 "abort": false, 00:07:46.491 "seek_hole": true, 00:07:46.491 "seek_data": true, 00:07:46.491 "copy": false, 00:07:46.491 "nvme_iov_md": false 00:07:46.491 }, 00:07:46.491 "driver_specific": { 00:07:46.491 "lvol": { 00:07:46.491 "lvol_store_uuid": "8e79c02b-4851-4ace-806c-a883b17850a3", 00:07:46.491 "base_bdev": "aio_bdev", 00:07:46.491 "thin_provision": false, 00:07:46.491 "num_allocated_clusters": 38, 00:07:46.491 "snapshot": false, 00:07:46.491 "clone": false, 00:07:46.491 "esnap_clone": false 00:07:46.491 } 00:07:46.491 } 00:07:46.491 } 00:07:46.491 ] 00:07:46.491 20:48:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:46.491 20:48:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e79c02b-4851-4ace-806c-a883b17850a3 00:07:46.491 20:48:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:46.749 20:48:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:46.749 20:48:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e79c02b-4851-4ace-806c-a883b17850a3 00:07:46.749 20:48:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:47.007 20:48:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:47.007 20:48:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 49d1d166-733b-44d1-9370-4c713c386eb0 00:07:47.265 20:48:38 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8e79c02b-4851-4ace-806c-a883b17850a3 00:07:47.523 20:48:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:47.781 20:48:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:47.781 00:07:47.781 real 0m19.863s 00:07:47.781 user 0m48.695s 00:07:47.781 sys 0m5.143s 00:07:47.781 20:48:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:47.781 20:48:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:47.781 ************************************ 00:07:47.781 END TEST lvs_grow_dirty 00:07:47.781 ************************************ 00:07:47.781 20:48:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:47.781 20:48:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:07:47.781 20:48:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:07:47.781 20:48:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:07:47.781 20:48:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:47.781 20:48:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:07:47.781 20:48:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:07:47.781 20:48:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:07:47.781 20:48:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:47.781 nvmf_trace.0 00:07:47.781 20:48:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:07:47.781 20:48:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:47.781 20:48:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:47.781 20:48:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:07:47.781 20:48:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:47.781 20:48:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:07:47.781 20:48:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:47.781 20:48:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:47.781 rmmod nvme_tcp 00:07:48.040 rmmod nvme_fabrics 00:07:48.040 rmmod nvme_keyring 00:07:48.040 20:48:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:48.040 20:48:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:07:48.040 20:48:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:07:48.040 20:48:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3880835 ']' 00:07:48.040 20:48:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3880835 00:07:48.040 20:48:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 3880835 ']' 00:07:48.040 20:48:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 3880835 
00:07:48.040 20:48:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:07:48.040 20:48:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:48.040 20:48:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3880835 00:07:48.040 20:48:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:48.040 20:48:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:48.040 20:48:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3880835' 00:07:48.040 killing process with pid 3880835 00:07:48.040 20:48:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 3880835 00:07:48.040 20:48:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 3880835 00:07:48.298 20:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:48.298 20:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:48.298 20:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:48.298 20:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:07:48.298 20:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:07:48.298 20:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:48.298 20:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:07:48.298 20:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:48.298 20:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:07:48.298 20:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:48.298 20:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:48.298 20:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:50.202 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:50.202 00:07:50.202 real 0m43.300s 00:07:50.202 user 1m12.245s 00:07:50.202 sys 0m8.992s 00:07:50.202 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:50.202 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:50.202 ************************************ 00:07:50.202 END TEST nvmf_lvs_grow 00:07:50.202 ************************************ 00:07:50.202 20:48:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:50.202 20:48:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:50.202 20:48:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:50.202 20:48:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:50.202 ************************************ 00:07:50.202 START TEST nvmf_bdev_io_wait 00:07:50.202 ************************************ 00:07:50.202 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:50.461 * Looking for test storage... 
00:07:50.461 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:50.461 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:50.461 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:07:50.461 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:50.461 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:50.461 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:50.461 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:50.461 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:50.461 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:07:50.461 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:07:50.461 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:07:50.461 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:07:50.461 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:07:50.461 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:07:50.461 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:07:50.461 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:50.461 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:07:50.461 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:07:50.461 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:50.461 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:50.461 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:07:50.461 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:07:50.461 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:50.461 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:07:50.461 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:07:50.461 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:07:50.461 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:07:50.461 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:50.461 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:07:50.461 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:07:50.461 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:50.461 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:50.461 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:07:50.461 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:50.461 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:50.461 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.461 --rc genhtml_branch_coverage=1 00:07:50.461 --rc genhtml_function_coverage=1 00:07:50.461 --rc genhtml_legend=1 00:07:50.461 --rc geninfo_all_blocks=1 00:07:50.461 --rc geninfo_unexecuted_blocks=1 00:07:50.461 00:07:50.461 ' 00:07:50.461 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:50.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.461 --rc genhtml_branch_coverage=1 00:07:50.461 --rc genhtml_function_coverage=1 00:07:50.461 --rc genhtml_legend=1 00:07:50.461 --rc geninfo_all_blocks=1 00:07:50.461 --rc geninfo_unexecuted_blocks=1 00:07:50.461 00:07:50.461 ' 00:07:50.461 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:50.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.461 --rc genhtml_branch_coverage=1 00:07:50.461 --rc genhtml_function_coverage=1 00:07:50.461 --rc genhtml_legend=1 00:07:50.461 --rc geninfo_all_blocks=1 00:07:50.461 --rc geninfo_unexecuted_blocks=1 00:07:50.461 00:07:50.461 ' 00:07:50.461 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:50.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.461 --rc genhtml_branch_coverage=1 00:07:50.461 --rc genhtml_function_coverage=1 00:07:50.461 --rc genhtml_legend=1 00:07:50.461 --rc geninfo_all_blocks=1 00:07:50.461 --rc geninfo_unexecuted_blocks=1 00:07:50.461 00:07:50.461 ' 00:07:50.461 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:50.461 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:07:50.461 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:50.461 20:48:41 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:50.461 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:50.461 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:50.461 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:50.461 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:50.461 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:50.461 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:50.461 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:50.461 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:50.461 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:50.461 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:50.461 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:50.461 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:50.461 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:50.461 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:50.461 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:50.461 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:07:50.461 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:50.461 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:50.461 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:50.461 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.461 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.461 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.461 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:50.462 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.462 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:07:50.462 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:50.462 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:50.462 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:50.462 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:07:50.462 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:50.462 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:50.462 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:50.462 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:50.462 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:50.462 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:50.462 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:50.462 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:50.462 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:50.462 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:50.462 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:50.462 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:50.462 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:50.462 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:50.462 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:50.462 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:50.462 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:07:50.462 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:50.462 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:50.462 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:07:50.462 20:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:52.363 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:52.363 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:07:52.363 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:52.363 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:52.363 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:52.363 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:52.363 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:52.363 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:07:52.363 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:52.363 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:07:52.363 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:07:52.363 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:07:52.363 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:07:52.363 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:07:52.363 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:07:52.363 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:52.363 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:52.363 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:52.363 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:52.363 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:52.363 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:52.363 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:52.363 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:52.363 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:52.363 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:52.363 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:52.363 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:52.363 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:52.363 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:52.363 20:48:43 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:52.363 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:52.363 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:52.363 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:52.363 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:52.363 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:52.363 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:52.363 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:52.363 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:52.363 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:52.363 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:52.363 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:52.363 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:52.363 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:52.363 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:52.363 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:52.363 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:52.363 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:52.363 20:48:43 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:52.363 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:52.363 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:52.363 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:52.363 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:52.363 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:52.363 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:52.363 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:52.363 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:52.363 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:52.363 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:52.363 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:52.363 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:52.363 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:52.363 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:52.363 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:52.363 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:52.363 
20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:52.363 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:52.363 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:52.363 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:52.363 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:52.363 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:52.363 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:52.622 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:52.622 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:52.622 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:07:52.622 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:52.622 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:52.622 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:52.622 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:52.622 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:52.622 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:52.622 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:52.622 20:48:43 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:52.622 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:52.622 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:52.622 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:52.622 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:52.622 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:52.622 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:52.622 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:52.622 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:52.622 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:52.622 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:52.622 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:52.622 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:52.622 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:52.622 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:52.622 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:07:52.622 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:52.622 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:52.622 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:52.622 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:52.622 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:07:52.622 00:07:52.622 --- 10.0.0.2 ping statistics --- 00:07:52.622 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:52.622 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:07:52.622 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:52.622 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:52.622 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:07:52.622 00:07:52.622 --- 10.0.0.1 ping statistics --- 00:07:52.622 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:52.622 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:07:52.622 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:52.622 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:07:52.622 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:52.622 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:52.622 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:52.622 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:52.622 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:52.622 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:52.622 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:52.622 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:07:52.622 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:52.622 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:52.622 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:52.622 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3883380 00:07:52.622 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@510 -- # waitforlisten 3883380 00:07:52.622 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 3883380 ']' 00:07:52.622 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.622 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:07:52.622 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:52.622 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.622 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:52.622 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:52.622 [2024-11-26 20:48:43.505593] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:07:52.622 [2024-11-26 20:48:43.505697] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:52.881 [2024-11-26 20:48:43.583133] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:52.881 [2024-11-26 20:48:43.647333] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:52.881 [2024-11-26 20:48:43.647397] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:52.881 [2024-11-26 20:48:43.647423] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:52.881 [2024-11-26 20:48:43.647436] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:52.881 [2024-11-26 20:48:43.647448] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:52.881 [2024-11-26 20:48:43.649169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:52.881 [2024-11-26 20:48:43.649238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:52.881 [2024-11-26 20:48:43.649327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:52.881 [2024-11-26 20:48:43.649330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.881 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:52.881 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:07:52.881 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:52.881 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:52.881 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:52.881 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:52.881 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:07:52.881 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.881 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:52.881 20:48:43 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.881 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:07:52.881 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.881 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:53.147 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.147 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:53.147 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.147 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:53.147 [2024-11-26 20:48:43.830665] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:53.147 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.147 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:53.147 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.147 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:53.147 Malloc0 00:07:53.147 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.147 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:53.148 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.148 
20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:53.148 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.148 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:53.148 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.148 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:53.148 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.148 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:53.148 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.148 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:53.148 [2024-11-26 20:48:43.882345] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:53.148 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.148 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3883517 00:07:53.148 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:07:53.148 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3883519 00:07:53.148 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 
00:07:53.148 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:53.148 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:53.148 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:53.148 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:53.148 { 00:07:53.148 "params": { 00:07:53.148 "name": "Nvme$subsystem", 00:07:53.148 "trtype": "$TEST_TRANSPORT", 00:07:53.148 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:53.148 "adrfam": "ipv4", 00:07:53.148 "trsvcid": "$NVMF_PORT", 00:07:53.148 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:53.148 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:53.148 "hdgst": ${hdgst:-false}, 00:07:53.148 "ddgst": ${ddgst:-false} 00:07:53.148 }, 00:07:53.148 "method": "bdev_nvme_attach_controller" 00:07:53.148 } 00:07:53.148 EOF 00:07:53.148 )") 00:07:53.148 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:07:53.148 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:07:53.148 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3883521 00:07:53.148 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:53.148 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:53.148 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:53.148 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:53.148 { 00:07:53.148 "params": { 00:07:53.148 
"name": "Nvme$subsystem", 00:07:53.148 "trtype": "$TEST_TRANSPORT", 00:07:53.148 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:53.148 "adrfam": "ipv4", 00:07:53.148 "trsvcid": "$NVMF_PORT", 00:07:53.148 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:53.148 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:53.148 "hdgst": ${hdgst:-false}, 00:07:53.148 "ddgst": ${ddgst:-false} 00:07:53.148 }, 00:07:53.148 "method": "bdev_nvme_attach_controller" 00:07:53.148 } 00:07:53.148 EOF 00:07:53.148 )") 00:07:53.148 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:07:53.148 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:07:53.148 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3883524 00:07:53.148 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:07:53.148 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:53.148 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:53.148 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:53.148 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:53.148 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:53.148 { 00:07:53.148 "params": { 00:07:53.148 "name": "Nvme$subsystem", 00:07:53.148 "trtype": "$TEST_TRANSPORT", 00:07:53.148 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:53.148 "adrfam": "ipv4", 00:07:53.148 "trsvcid": "$NVMF_PORT", 00:07:53.148 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:53.148 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:07:53.148 "hdgst": ${hdgst:-false}, 00:07:53.148 "ddgst": ${ddgst:-false} 00:07:53.148 }, 00:07:53.148 "method": "bdev_nvme_attach_controller" 00:07:53.148 } 00:07:53.148 EOF 00:07:53.148 )") 00:07:53.148 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:07:53.148 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:07:53.148 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:53.148 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:53.148 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:53.148 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:53.148 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:53.148 { 00:07:53.148 "params": { 00:07:53.148 "name": "Nvme$subsystem", 00:07:53.148 "trtype": "$TEST_TRANSPORT", 00:07:53.148 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:53.148 "adrfam": "ipv4", 00:07:53.148 "trsvcid": "$NVMF_PORT", 00:07:53.148 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:53.148 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:53.148 "hdgst": ${hdgst:-false}, 00:07:53.148 "ddgst": ${ddgst:-false} 00:07:53.148 }, 00:07:53.148 "method": "bdev_nvme_attach_controller" 00:07:53.148 } 00:07:53.148 EOF 00:07:53.148 )") 00:07:53.148 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:53.148 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3883517 00:07:53.148 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@582 -- # cat 00:07:53.148 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:53.148 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:53.148 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:53.148 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:53.148 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:53.148 "params": { 00:07:53.148 "name": "Nvme1", 00:07:53.148 "trtype": "tcp", 00:07:53.148 "traddr": "10.0.0.2", 00:07:53.148 "adrfam": "ipv4", 00:07:53.148 "trsvcid": "4420", 00:07:53.148 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:53.148 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:53.148 "hdgst": false, 00:07:53.148 "ddgst": false 00:07:53.148 }, 00:07:53.148 "method": "bdev_nvme_attach_controller" 00:07:53.148 }' 00:07:53.148 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:07:53.148 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:53.148 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:53.148 "params": { 00:07:53.148 "name": "Nvme1", 00:07:53.148 "trtype": "tcp", 00:07:53.148 "traddr": "10.0.0.2", 00:07:53.148 "adrfam": "ipv4", 00:07:53.148 "trsvcid": "4420", 00:07:53.148 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:53.148 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:53.148 "hdgst": false, 00:07:53.148 "ddgst": false 00:07:53.148 }, 00:07:53.148 "method": "bdev_nvme_attach_controller" 00:07:53.148 }' 00:07:53.148 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:53.148 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:53.148 "params": { 00:07:53.148 "name": "Nvme1", 00:07:53.148 "trtype": "tcp", 00:07:53.148 "traddr": "10.0.0.2", 00:07:53.148 "adrfam": "ipv4", 00:07:53.148 "trsvcid": "4420", 00:07:53.148 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:53.148 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:53.148 "hdgst": false, 00:07:53.148 "ddgst": false 00:07:53.148 }, 00:07:53.148 "method": "bdev_nvme_attach_controller" 00:07:53.148 }' 00:07:53.148 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:53.148 20:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:53.148 "params": { 00:07:53.148 "name": "Nvme1", 00:07:53.148 "trtype": "tcp", 00:07:53.148 "traddr": "10.0.0.2", 00:07:53.148 "adrfam": "ipv4", 00:07:53.148 "trsvcid": "4420", 00:07:53.148 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:53.148 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:53.149 "hdgst": false, 00:07:53.149 "ddgst": false 00:07:53.149 }, 00:07:53.149 "method": "bdev_nvme_attach_controller" 00:07:53.149 }' 00:07:53.149 [2024-11-26 20:48:43.932339] Starting SPDK v25.01-pre git sha1 
e43b3b914 / DPDK 24.03.0 initialization... 00:07:53.149 [2024-11-26 20:48:43.932340] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:07:53.149 [2024-11-26 20:48:43.932339] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:07:53.149 [2024-11-26 20:48:43.932436] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:07:53.149 [2024-11-26 20:48:43.932436] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:07:53.149 [2024-11-26 20:48:43.932436] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:07:53.149 [2024-11-26 20:48:43.932797] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:07:53.149 [2024-11-26 20:48:43.932856] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:07:53.487 [2024-11-26 20:48:44.107167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.487 [2024-11-26 20:48:44.161320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:07:53.487 [2024-11-26 20:48:44.207462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.487 [2024-11-26 20:48:44.260941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:07:53.487 [2024-11-26 20:48:44.306374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.487 [2024-11-26 20:48:44.361476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:07:53.769 [2024-11-26 20:48:44.408225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.769 [2024-11-26 20:48:44.459192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:53.769 Running I/O for 1 seconds... 00:07:53.769 Running I/O for 1 seconds... 00:07:53.769 Running I/O for 1 seconds... 00:07:53.769 Running I/O for 1 seconds... 
00:07:54.706 9553.00 IOPS, 37.32 MiB/s [2024-11-26T19:48:45.644Z] 9334.00 IOPS, 36.46 MiB/s 00:07:54.706 Latency(us) 00:07:54.706 [2024-11-26T19:48:45.644Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:54.706 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:07:54.706 Nvme1n1 : 1.01 9598.68 37.49 0.00 0.00 13273.70 7524.50 19612.25 00:07:54.706 [2024-11-26T19:48:45.644Z] =================================================================================================================== 00:07:54.706 [2024-11-26T19:48:45.644Z] Total : 9598.68 37.49 0.00 0.00 13273.70 7524.50 19612.25 00:07:54.706 00:07:54.706 Latency(us) 00:07:54.706 [2024-11-26T19:48:45.644Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:54.706 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:07:54.706 Nvme1n1 : 1.01 9399.87 36.72 0.00 0.00 13563.17 5582.70 24175.50 00:07:54.706 [2024-11-26T19:48:45.644Z] =================================================================================================================== 00:07:54.706 [2024-11-26T19:48:45.644Z] Total : 9399.87 36.72 0.00 0.00 13563.17 5582.70 24175.50 00:07:54.965 8298.00 IOPS, 32.41 MiB/s 00:07:54.965 Latency(us) 00:07:54.965 [2024-11-26T19:48:45.903Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:54.965 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:07:54.965 Nvme1n1 : 1.01 8372.46 32.70 0.00 0.00 15221.77 6043.88 27185.30 00:07:54.965 [2024-11-26T19:48:45.903Z] =================================================================================================================== 00:07:54.965 [2024-11-26T19:48:45.903Z] Total : 8372.46 32.70 0.00 0.00 15221.77 6043.88 27185.30 00:07:54.965 137992.00 IOPS, 539.03 MiB/s 00:07:54.965 Latency(us) 00:07:54.965 [2024-11-26T19:48:45.903Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:54.965 
Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:07:54.965 Nvme1n1 : 1.00 137727.03 538.00 0.00 0.00 924.01 295.82 1905.40 00:07:54.965 [2024-11-26T19:48:45.903Z] =================================================================================================================== 00:07:54.965 [2024-11-26T19:48:45.903Z] Total : 137727.03 538.00 0.00 0.00 924.01 295.82 1905.40 00:07:54.965 20:48:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3883519 00:07:54.965 20:48:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3883521 00:07:54.965 20:48:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3883524 00:07:54.965 20:48:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:54.965 20:48:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.965 20:48:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:54.965 20:48:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.965 20:48:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:07:54.965 20:48:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:07:54.965 20:48:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:54.965 20:48:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:07:54.965 20:48:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:54.965 20:48:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:07:54.965 20:48:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i 
in {1..20} 00:07:54.965 20:48:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:55.224 rmmod nvme_tcp 00:07:55.224 rmmod nvme_fabrics 00:07:55.224 rmmod nvme_keyring 00:07:55.224 20:48:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:55.224 20:48:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:07:55.224 20:48:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:07:55.224 20:48:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3883380 ']' 00:07:55.224 20:48:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3883380 00:07:55.224 20:48:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 3883380 ']' 00:07:55.224 20:48:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 3883380 00:07:55.224 20:48:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:07:55.224 20:48:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:55.224 20:48:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3883380 00:07:55.224 20:48:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:55.224 20:48:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:55.224 20:48:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3883380' 00:07:55.224 killing process with pid 3883380 00:07:55.224 20:48:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 3883380 00:07:55.224 20:48:45 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 3883380 00:07:55.483 20:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:55.483 20:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:55.483 20:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:55.483 20:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:07:55.483 20:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:07:55.483 20:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:55.483 20:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:07:55.483 20:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:55.483 20:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:55.483 20:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:55.483 20:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:55.483 20:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:57.387 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:57.387 00:07:57.387 real 0m7.120s 00:07:57.387 user 0m15.005s 00:07:57.387 sys 0m3.886s 00:07:57.387 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:57.387 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:57.387 ************************************ 
00:07:57.387 END TEST nvmf_bdev_io_wait 00:07:57.387 ************************************ 00:07:57.387 20:48:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:57.387 20:48:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:57.387 20:48:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:57.387 20:48:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:57.387 ************************************ 00:07:57.387 START TEST nvmf_queue_depth 00:07:57.387 ************************************ 00:07:57.387 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:57.646 * Looking for test storage... 00:07:57.646 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:57.646 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:57.646 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:07:57.646 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:57.646 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:57.646 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:57.646 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:57.646 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:57.646 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # 
IFS=.-: 00:07:57.646 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:07:57.646 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:07:57.646 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:07:57.646 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:07:57.646 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:07:57.646 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:07:57.646 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:57.646 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:07:57.646 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:07:57.646 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:57.646 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:57.646 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:07:57.646 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:07:57.646 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:57.646 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:07:57.646 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:07:57.646 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:07:57.646 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:07:57.646 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:57.646 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:07:57.646 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:07:57.646 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:57.646 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:57.646 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:07:57.646 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:57.646 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:57.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.646 --rc genhtml_branch_coverage=1 00:07:57.646 --rc genhtml_function_coverage=1 00:07:57.646 --rc genhtml_legend=1 00:07:57.646 --rc geninfo_all_blocks=1 00:07:57.646 --rc 
geninfo_unexecuted_blocks=1 00:07:57.646 00:07:57.646 ' 00:07:57.646 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:57.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.646 --rc genhtml_branch_coverage=1 00:07:57.646 --rc genhtml_function_coverage=1 00:07:57.646 --rc genhtml_legend=1 00:07:57.646 --rc geninfo_all_blocks=1 00:07:57.646 --rc geninfo_unexecuted_blocks=1 00:07:57.646 00:07:57.646 ' 00:07:57.646 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:57.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.646 --rc genhtml_branch_coverage=1 00:07:57.646 --rc genhtml_function_coverage=1 00:07:57.646 --rc genhtml_legend=1 00:07:57.646 --rc geninfo_all_blocks=1 00:07:57.646 --rc geninfo_unexecuted_blocks=1 00:07:57.646 00:07:57.646 ' 00:07:57.646 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:57.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.646 --rc genhtml_branch_coverage=1 00:07:57.646 --rc genhtml_function_coverage=1 00:07:57.646 --rc genhtml_legend=1 00:07:57.646 --rc geninfo_all_blocks=1 00:07:57.646 --rc geninfo_unexecuted_blocks=1 00:07:57.646 00:07:57.646 ' 00:07:57.646 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:57.646 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:07:57.647 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:57.647 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:57.647 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:57.647 20:48:48 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:57.647 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:57.647 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:57.647 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:57.647 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:57.647 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:57.647 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:57.647 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:57.647 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:57.647 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:57.647 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:57.647 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:57.647 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:57.647 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:57.647 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:07:57.647 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:07:57.647 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:57.647 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:57.647 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.647 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.647 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.647 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:07:57.647 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.647 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:07:57.647 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:57.647 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:57.647 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:57.647 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:57.647 20:48:48 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:57.647 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:57.647 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:57.647 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:57.647 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:57.647 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:57.647 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:07:57.647 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:07:57.647 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:57.647 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:07:57.647 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:57.647 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:57.647 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:57.647 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:57.647 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:57.647 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:57.647 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:57.647 20:48:48 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:57.647 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:57.647 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:57.647 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:07:57.647 20:48:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:00.177 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:00.177 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:08:00.177 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:00.177 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:00.177 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:00.177 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:00.177 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:00.177 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:08:00.177 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:00.177 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:08:00.177 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:08:00.177 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:08:00.177 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:08:00.177 20:48:50 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:08:00.177 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:08:00.177 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:00.177 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:00.177 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:00.177 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:00.177 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:00.177 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:00.177 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:00.177 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:00.177 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:00.177 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:00.177 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:00.177 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:00.177 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:00.177 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:00.177 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:00.177 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:00.177 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:00.177 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:00.177 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:00.177 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:00.177 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:00.177 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:00.177 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:00.177 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:00.177 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:00.177 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:00.177 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:00.177 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:00.177 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:00.177 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:00.177 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:00.177 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:08:00.177 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:00.177 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:00.177 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:00.177 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:00.177 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:00.177 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:00.177 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:00.177 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:00.177 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:00.177 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:00.177 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:00.177 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:00.177 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:00.177 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:00.177 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:00.177 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:00.177 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:00.177 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:00.177 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:00.177 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:00.177 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:00.177 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:00.177 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:00.177 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:00.177 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:00.177 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:00.177 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:08:00.177 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:00.177 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:00.177 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:00.177 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:00.178 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:00.178 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:00.178 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:00.178 
20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:00.178 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:00.178 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:00.178 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:00.178 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:00.178 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:00.178 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:00.178 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:00.178 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:00.178 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:00.178 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:00.178 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:00.178 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:00.178 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:00.178 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:00.178 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:08:00.178 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:00.178 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:00.178 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:00.178 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:00.178 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:08:00.178 00:08:00.178 --- 10.0.0.2 ping statistics --- 00:08:00.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:00.178 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:08:00.178 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:00.178 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:00.178 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:08:00.178 00:08:00.178 --- 10.0.0.1 ping statistics --- 00:08:00.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:00.178 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:08:00.178 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:00.178 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:08:00.178 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:00.178 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:00.178 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:00.178 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:00.178 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:00.178 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:00.178 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:00.178 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:00.178 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:00.178 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:00.178 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:00.178 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3885759 00:08:00.178 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:00.178 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 3885759 00:08:00.178 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3885759 ']' 00:08:00.178 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.178 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:00.178 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:00.178 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:00.178 20:48:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:00.178 [2024-11-26 20:48:50.840391] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:08:00.178 [2024-11-26 20:48:50.840476] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:00.178 [2024-11-26 20:48:50.929439] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.178 [2024-11-26 20:48:50.991003] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:00.178 [2024-11-26 20:48:50.991068] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:00.178 [2024-11-26 20:48:50.991084] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:00.178 [2024-11-26 20:48:50.991097] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:00.178 [2024-11-26 20:48:50.991109] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:00.178 [2024-11-26 20:48:50.991767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:00.437 20:48:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:00.437 20:48:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:00.437 20:48:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:00.437 20:48:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:00.437 20:48:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:00.437 20:48:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:00.437 20:48:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:00.437 20:48:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.437 20:48:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:00.437 [2024-11-26 20:48:51.149792] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:00.437 20:48:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.437 20:48:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:08:00.437 20:48:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.437 20:48:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:00.437 Malloc0 00:08:00.437 20:48:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.437 20:48:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:00.437 20:48:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.437 20:48:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:00.437 20:48:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.437 20:48:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:00.437 20:48:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.437 20:48:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:00.437 20:48:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.437 20:48:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:00.437 20:48:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.437 20:48:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:00.437 [2024-11-26 20:48:51.200497] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:00.437 20:48:51 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.437 20:48:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3885783 00:08:00.437 20:48:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:00.437 20:48:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:00.437 20:48:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3885783 /var/tmp/bdevperf.sock 00:08:00.437 20:48:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3885783 ']' 00:08:00.437 20:48:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:00.437 20:48:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:00.437 20:48:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:00.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:00.437 20:48:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:00.437 20:48:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:00.437 [2024-11-26 20:48:51.251018] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:08:00.437 [2024-11-26 20:48:51.251095] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3885783 ] 00:08:00.437 [2024-11-26 20:48:51.321762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.695 [2024-11-26 20:48:51.384575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.695 20:48:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:00.695 20:48:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:00.695 20:48:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:00.695 20:48:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.695 20:48:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:00.953 NVMe0n1 00:08:00.953 20:48:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.953 20:48:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:00.953 Running I/O for 10 seconds... 
00:08:03.269 7713.00 IOPS, 30.13 MiB/s [2024-11-26T19:48:55.141Z] 7858.50 IOPS, 30.70 MiB/s [2024-11-26T19:48:56.076Z] 7880.67 IOPS, 30.78 MiB/s [2024-11-26T19:48:57.008Z] 7976.25 IOPS, 31.16 MiB/s [2024-11-26T19:48:57.984Z] 8008.80 IOPS, 31.28 MiB/s [2024-11-26T19:48:58.913Z] 8035.33 IOPS, 31.39 MiB/s [2024-11-26T19:49:00.285Z] 8065.29 IOPS, 31.51 MiB/s [2024-11-26T19:49:01.217Z] 8120.25 IOPS, 31.72 MiB/s [2024-11-26T19:49:02.150Z] 8167.33 IOPS, 31.90 MiB/s [2024-11-26T19:49:02.150Z] 8234.90 IOPS, 32.17 MiB/s 00:08:11.212 Latency(us) 00:08:11.212 [2024-11-26T19:49:02.150Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:11.212 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:11.212 Verification LBA range: start 0x0 length 0x4000 00:08:11.212 NVMe0n1 : 10.09 8265.83 32.29 0.00 0.00 123281.27 23398.78 77283.93 00:08:11.212 [2024-11-26T19:49:02.150Z] =================================================================================================================== 00:08:11.212 [2024-11-26T19:49:02.150Z] Total : 8265.83 32.29 0.00 0.00 123281.27 23398.78 77283.93 00:08:11.212 { 00:08:11.212 "results": [ 00:08:11.212 { 00:08:11.212 "job": "NVMe0n1", 00:08:11.212 "core_mask": "0x1", 00:08:11.212 "workload": "verify", 00:08:11.212 "status": "finished", 00:08:11.212 "verify_range": { 00:08:11.212 "start": 0, 00:08:11.212 "length": 16384 00:08:11.212 }, 00:08:11.212 "queue_depth": 1024, 00:08:11.212 "io_size": 4096, 00:08:11.212 "runtime": 10.08647, 00:08:11.212 "iops": 8265.825407699622, 00:08:11.212 "mibps": 32.28838049882665, 00:08:11.212 "io_failed": 0, 00:08:11.212 "io_timeout": 0, 00:08:11.212 "avg_latency_us": 123281.27248144549, 00:08:11.212 "min_latency_us": 23398.77925925926, 00:08:11.212 "max_latency_us": 77283.93481481481 00:08:11.212 } 00:08:11.212 ], 00:08:11.212 "core_count": 1 00:08:11.212 } 00:08:11.212 20:49:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # 
killprocess 3885783 00:08:11.212 20:49:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3885783 ']' 00:08:11.212 20:49:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3885783 00:08:11.212 20:49:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:11.212 20:49:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:11.213 20:49:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3885783 00:08:11.213 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:11.213 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:11.213 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3885783' 00:08:11.213 killing process with pid 3885783 00:08:11.213 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3885783 00:08:11.213 Received shutdown signal, test time was about 10.000000 seconds 00:08:11.213 00:08:11.213 Latency(us) 00:08:11.213 [2024-11-26T19:49:02.151Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:11.213 [2024-11-26T19:49:02.151Z] =================================================================================================================== 00:08:11.213 [2024-11-26T19:49:02.151Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:11.213 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3885783 00:08:11.471 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:11.471 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # 
nvmftestfini 00:08:11.471 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:11.471 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:11.471 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:11.471 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:11.471 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:11.471 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:11.471 rmmod nvme_tcp 00:08:11.471 rmmod nvme_fabrics 00:08:11.471 rmmod nvme_keyring 00:08:11.471 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:11.471 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:11.471 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:11.471 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3885759 ']' 00:08:11.471 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3885759 00:08:11.471 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3885759 ']' 00:08:11.471 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3885759 00:08:11.471 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:11.471 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:11.471 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3885759 00:08:11.471 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:08:11.471 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:11.471 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3885759' 00:08:11.471 killing process with pid 3885759 00:08:11.471 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3885759 00:08:11.471 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3885759 00:08:11.730 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:11.730 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:11.730 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:11.730 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:11.730 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:08:11.730 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:11.730 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:08:11.730 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:11.730 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:11.730 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:11.730 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:11.730 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:14.264 20:49:04 
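The two teardown sequences above both follow the same `killprocess` pattern from `autotest_common.sh`: probe the PID with `kill -0`, read the command name so an unexpected process (e.g. `sudo`) is never killed, then kill and wait for it to exit. A minimal standalone sketch of that pattern (function name is illustrative, not the SPDK helper itself):

```shell
# Sketch of the killprocess pattern: check liveness, verify the command
# name, then kill and reap. Assumes the PID is a child of this shell so
# `wait` can reap it.
killprocess_sketch() {
    local pid=$1
    # kill -0 sends no signal; it only tests that the process exists
    kill -0 "$pid" 2>/dev/null || return 0
    # read the command name so we never kill an unexpected process
    local name
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" = sudo ] && return 1
    echo "killing process with pid $pid"
    kill "$pid"
    # wait returns once the child has actually exited and been reaped
    wait "$pid" 2>/dev/null
    return 0
}

sleep 60 &
bgpid=$!
killprocess_sketch "$bgpid"
kill -0 "$bgpid" 2>/dev/null && echo "still alive" || echo "gone"
```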
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:14.264 00:08:14.264 real 0m16.378s 00:08:14.264 user 0m23.080s 00:08:14.264 sys 0m3.020s 00:08:14.264 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:14.264 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:14.264 ************************************ 00:08:14.264 END TEST nvmf_queue_depth 00:08:14.264 ************************************ 00:08:14.264 20:49:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:14.264 20:49:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:14.264 20:49:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:14.264 20:49:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:14.264 ************************************ 00:08:14.264 START TEST nvmf_target_multipath 00:08:14.264 ************************************ 00:08:14.264 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:14.264 * Looking for test storage... 
00:08:14.264 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:14.264 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:14.264 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:08:14.264 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:14.264 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:14.264 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:14.264 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:14.264 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:14.264 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:14.264 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:14.264 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:14.264 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:14.264 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:14.264 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:14.264 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:14.264 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:14.264 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:14.264 20:49:04 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:14.264 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:14.264 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:14.264 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:14.264 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:14.264 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:14.264 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:14.264 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:14.264 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:14.264 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:14.264 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:14.264 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:14.264 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:14.264 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:14.264 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:14.264 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:14.264 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
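The `cmp_versions` walk traced above (`lt 1.15 2`) splits both version strings into fields and compares them numerically, field by field, padding the shorter one with zeros. A hedged sketch of that comparison; `version_lt` is an illustrative name, not the `scripts/common.sh` helper itself:

```shell
# Component-wise version comparison: returns 0 (true) iff $1 < $2.
version_lt() {
    local IFS=.                 # split fields on dots, as the trace does
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local n=${#a[@]} i
    (( ${#b[@]} > n )) && n=${#b[@]}
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}   # missing fields count as 0
        (( x < y )) && return 0           # first differing field decides
        (( x > y )) && return 1
    done
    return 1                              # equal versions are not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"
```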
00:08:14.264 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:14.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.264 --rc genhtml_branch_coverage=1 00:08:14.264 --rc genhtml_function_coverage=1 00:08:14.264 --rc genhtml_legend=1 00:08:14.264 --rc geninfo_all_blocks=1 00:08:14.264 --rc geninfo_unexecuted_blocks=1 00:08:14.264 00:08:14.264 ' 00:08:14.264 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:14.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.264 --rc genhtml_branch_coverage=1 00:08:14.264 --rc genhtml_function_coverage=1 00:08:14.264 --rc genhtml_legend=1 00:08:14.264 --rc geninfo_all_blocks=1 00:08:14.264 --rc geninfo_unexecuted_blocks=1 00:08:14.264 00:08:14.264 ' 00:08:14.264 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:14.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.264 --rc genhtml_branch_coverage=1 00:08:14.264 --rc genhtml_function_coverage=1 00:08:14.264 --rc genhtml_legend=1 00:08:14.264 --rc geninfo_all_blocks=1 00:08:14.264 --rc geninfo_unexecuted_blocks=1 00:08:14.264 00:08:14.264 ' 00:08:14.264 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:14.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.264 --rc genhtml_branch_coverage=1 00:08:14.264 --rc genhtml_function_coverage=1 00:08:14.264 --rc genhtml_legend=1 00:08:14.264 --rc geninfo_all_blocks=1 00:08:14.264 --rc geninfo_unexecuted_blocks=1 00:08:14.264 00:08:14.264 ' 00:08:14.264 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:14.264 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:08:14.264 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:14.264 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:14.264 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:14.264 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:14.264 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:14.264 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:14.264 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:14.264 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:14.264 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:14.264 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:14.264 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:14.264 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:14.265 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:14.265 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:14.265 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:14.265 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:14.265 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:14.265 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:14.265 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:14.265 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:14.265 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:14.265 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.265 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.265 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.265 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:14.265 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.265 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:14.265 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:14.265 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:14.265 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:14.265 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:14.265 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:14.265 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:14.265 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:14.265 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:14.265 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:14.265 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:14.265 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:08:14.265 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:14.265 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:14.265 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:14.265 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:14.265 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:14.265 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:14.265 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:14.265 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:14.265 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:14.265 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:14.265 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:14.265 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:14.265 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:14.265 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:14.265 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:08:14.265 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:08:16.168 20:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:16.168 20:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:08:16.168 20:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:16.168 20:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:16.169 20:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:16.169 20:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:16.169 20:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:16.169 20:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:08:16.169 20:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:16.169 20:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:08:16.169 20:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:08:16.169 20:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:08:16.169 20:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:08:16.169 20:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:08:16.169 20:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:08:16.169 20:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:16.169 20:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:16.169 20:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:16.169 20:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:16.169 20:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:16.169 20:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:16.169 20:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:16.169 20:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:16.169 20:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:16.169 20:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:16.169 20:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:16.169 20:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:16.169 20:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:16.169 20:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:16.169 20:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:16.169 20:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:16.169 20:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:16.169 20:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:16.169 20:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:16.169 20:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:16.169 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:16.169 20:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:16.169 20:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:16.169 20:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:16.169 20:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:16.169 20:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:16.169 20:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:16.169 20:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:16.169 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:16.169 20:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:16.169 20:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:16.169 20:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:16.169 20:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:16.169 20:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:08:16.169 20:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:16.169 20:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:16.169 20:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:16.169 20:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:16.169 20:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:16.169 20:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:16.169 20:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:16.169 20:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:16.169 20:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:16.169 20:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:16.169 20:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:16.169 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:16.169 20:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:16.169 20:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:16.169 20:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:16.169 20:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:16.169 20:49:06 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:16.169 20:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:16.169 20:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:16.169 20:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:16.169 20:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:16.169 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:16.169 20:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:16.169 20:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:16.169 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:08:16.169 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:16.169 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:16.169 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:16.169 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:16.169 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:16.169 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:16.169 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:16.169 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:08:16.169 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:16.169 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:16.169 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:16.169 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:16.169 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:16.169 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:16.169 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:16.169 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:16.169 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:16.169 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:16.169 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:16.169 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:16.169 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:16.169 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:16.429 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:08:16.429 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:16.429 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:16.429 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:16.429 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:16.429 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:08:16.429 00:08:16.429 --- 10.0.0.2 ping statistics --- 00:08:16.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:16.429 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:08:16.429 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:16.429 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
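The `ipts` call above tags the rule it adds with an `SPDK_NVMF:` comment, which is what lets the `iptr` teardown seen earlier drop exactly those rules via `iptables-save | grep -v SPDK_NVMF | iptables-restore`. A sketch of that filter step, run against a captured ruleset string instead of live iptables (which needs root); the ruleset contents are illustrative:

```shell
# Drop every rule tagged with the SPDK_NVMF comment marker, keeping the rest.
drop_spdk_rules() {
    grep -v SPDK_NVMF
}

ruleset='-A INPUT -i lo -j ACCEPT
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment "SPDK_NVMF:..."'
printf '%s\n' "$ruleset" | drop_spdk_rules
```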
00:08:16.429 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:08:16.429 00:08:16.429 --- 10.0.0.1 ping statistics --- 00:08:16.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:16.429 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:08:16.429 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:16.429 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:08:16.429 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:16.429 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:16.429 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:16.429 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:16.429 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:16.429 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:16.429 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:16.429 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:08:16.429 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:08:16.429 only one NIC for nvmf test 00:08:16.429 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:08:16.429 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:16.429 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:16.429 20:49:07 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:16.429 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:16.429 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:16.429 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:16.429 rmmod nvme_tcp 00:08:16.429 rmmod nvme_fabrics 00:08:16.429 rmmod nvme_keyring 00:08:16.429 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:16.429 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:16.429 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:16.429 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:16.429 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:16.429 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:16.429 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:16.429 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:16.429 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:16.429 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:16.429 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:16.429 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:16.429 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:08:16.429 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:16.429 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:16.430 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:18.331 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:18.331 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:18.331 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:18.331 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:18.331 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:18.331 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:18.331 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:18.331 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:18.331 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:18.331 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:18.590 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:18.590 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:18.590 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:18.590 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:08:18.590 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:18.590 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:18.590 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:18.590 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:18.590 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:18.590 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:18.590 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:18.590 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:18.590 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:18.590 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:18.590 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:18.590 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:18.590 00:08:18.590 real 0m4.557s 00:08:18.590 user 0m0.926s 00:08:18.590 sys 0m1.638s 00:08:18.590 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:18.590 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:18.590 ************************************ 00:08:18.590 END TEST nvmf_target_multipath 00:08:18.590 ************************************ 00:08:18.590 20:49:09 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:18.590 20:49:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:18.590 20:49:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:18.590 20:49:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:18.590 ************************************ 00:08:18.590 START TEST nvmf_zcopy 00:08:18.590 ************************************ 00:08:18.590 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:18.590 * Looking for test storage... 00:08:18.590 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:18.591 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:18.591 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:08:18.591 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:18.591 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:18.591 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:18.591 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:18.591 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:18.591 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:18.591 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:18.591 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:08:18.591 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:18.591 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:18.591 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:18.591 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:18.591 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:18.591 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:18.591 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:18.591 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:18.591 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:18.591 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:18.591 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:18.591 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:18.591 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:18.591 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:18.591 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:18.591 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:18.591 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:18.591 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:18.591 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:18.591 20:49:09 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:18.591 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:18.591 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:18.591 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:18.591 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:18.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.591 --rc genhtml_branch_coverage=1 00:08:18.591 --rc genhtml_function_coverage=1 00:08:18.591 --rc genhtml_legend=1 00:08:18.591 --rc geninfo_all_blocks=1 00:08:18.591 --rc geninfo_unexecuted_blocks=1 00:08:18.591 00:08:18.591 ' 00:08:18.591 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:18.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.591 --rc genhtml_branch_coverage=1 00:08:18.591 --rc genhtml_function_coverage=1 00:08:18.591 --rc genhtml_legend=1 00:08:18.591 --rc geninfo_all_blocks=1 00:08:18.591 --rc geninfo_unexecuted_blocks=1 00:08:18.591 00:08:18.591 ' 00:08:18.591 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:18.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.591 --rc genhtml_branch_coverage=1 00:08:18.591 --rc genhtml_function_coverage=1 00:08:18.591 --rc genhtml_legend=1 00:08:18.591 --rc geninfo_all_blocks=1 00:08:18.591 --rc geninfo_unexecuted_blocks=1 00:08:18.591 00:08:18.591 ' 00:08:18.591 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:18.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.591 --rc genhtml_branch_coverage=1 00:08:18.591 --rc 
genhtml_function_coverage=1 00:08:18.591 --rc genhtml_legend=1 00:08:18.591 --rc geninfo_all_blocks=1 00:08:18.591 --rc geninfo_unexecuted_blocks=1 00:08:18.591 00:08:18.591 ' 00:08:18.591 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:18.591 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:18.591 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:18.591 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:18.591 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:18.591 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:18.591 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:18.591 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:18.591 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:18.591 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:18.591 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:18.591 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:18.591 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:18.591 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:18.591 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:18.591 20:49:09 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:18.591 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:18.591 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:18.591 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:18.591 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:18.591 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:18.591 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:18.591 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:18.591 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.591 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.591 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.591 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:18.591 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.591 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:18.591 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:18.591 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:18.591 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:18.591 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:18.591 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:18.591 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:18.591 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:18.591 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:18.591 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:18.591 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:18.591 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:18.592 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:18.592 20:49:09 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:18.592 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:18.592 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:18.592 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:18.592 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:18.592 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:18.592 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:18.592 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:18.592 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:18.592 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:08:18.592 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:21.122 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:21.122 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:08:21.122 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:21.122 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:21.122 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:21.122 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:21.122 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:21.122 20:49:11 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:08:21.122 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:21.122 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:08:21.122 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:08:21.122 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:08:21.122 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:08:21.122 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:08:21.122 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:08:21.122 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:21.122 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:21.122 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:21.122 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:21.122 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:21.122 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:21.122 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:21.122 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:21.122 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:21.122 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:21.122 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:21.122 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:21.122 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:21.122 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:21.122 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:21.122 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:21.122 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:21.122 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:21.122 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:21.122 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:21.122 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:21.122 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:21.122 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:21.122 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:21.122 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:21.122 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:21.122 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:21.122 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:21.122 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:21.122 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:21.122 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:21.122 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:21.122 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:21.122 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:21.122 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:21.122 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:21.122 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:21.122 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:21.122 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:21.122 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:21.122 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:21.122 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:21.122 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:21.122 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:21.122 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:21.122 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:21.122 20:49:11 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:21.122 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:21.122 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:21.122 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:21.122 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:21.122 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:21.122 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:21.122 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:21.122 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:21.122 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:21.122 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:21.122 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:21.122 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:08:21.122 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:21.122 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:21.122 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:21.122 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:21.122 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:21.122 20:49:11 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:21.123 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:21.123 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:21.123 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:21.123 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:21.123 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:21.123 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:21.123 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:21.123 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:21.123 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:21.123 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:21.123 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:21.123 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:21.123 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:21.123 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:21.123 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:21.123 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:21.123 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:21.123 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:21.123 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:21.123 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:21.123 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:21.123 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:08:21.123 00:08:21.123 --- 10.0.0.2 ping statistics --- 00:08:21.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:21.123 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:08:21.123 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:21.123 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:21.123 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:08:21.123 00:08:21.123 --- 10.0.0.1 ping statistics --- 00:08:21.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:21.123 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:08:21.123 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:21.123 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:08:21.123 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:21.123 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:21.123 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:21.123 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:21.123 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:21.123 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:21.123 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:21.123 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:21.123 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:21.123 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:21.123 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:21.123 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=3890999 00:08:21.123 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 00:08:21.123 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3890999 00:08:21.123 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 3890999 ']' 00:08:21.123 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.123 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:21.123 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.123 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:21.123 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:21.123 [2024-11-26 20:49:11.837261] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:08:21.123 [2024-11-26 20:49:11.837332] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:21.123 [2024-11-26 20:49:11.913367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.123 [2024-11-26 20:49:11.976672] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:21.123 [2024-11-26 20:49:11.976780] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
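The interface, namespace, and firewall plumbing that nvmf/common.sh performed above (the addr flushes through the two ping checks) condenses into a short sequence. The sketch below is a dry run: `run` only echoes each command, so it executes without root or the cvl_* NICs; interface names, addresses, and the namespace name are copied from the log.

```shell
# Dry-run condensation of the nvmf/common.sh network setup traced above.
# "run" echoes instead of executing, so no root or cvl_* hardware is needed.
run() { printf '%s\n' "$*"; }

setup_nvmf_net() {
    local tgt_if=cvl_0_0 ini_if=cvl_0_1 ns=cvl_0_0_ns_spdk
    run ip -4 addr flush "$tgt_if"
    run ip -4 addr flush "$ini_if"
    run ip netns add "$ns"
    run ip link set "$tgt_if" netns "$ns"              # target NIC lives in the namespace
    run ip addr add 10.0.0.1/24 dev "$ini_if"          # initiator keeps the root namespace
    run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
    run ip link set "$ini_if" up
    run ip netns exec "$ns" ip link set "$tgt_if" up
    run ip netns exec "$ns" ip link set lo up
    run iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
    run ping -c 1 10.0.0.2                             # root ns reaches the target
    run ip netns exec "$ns" ping -c 1 10.0.0.1         # namespace reaches the initiator
}

setup_nvmf_net
```

Moving cvl_0_0 into its own namespace is what lets target and initiator share one host while still crossing a real TCP path on port 4420.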
00:08:21.123 [2024-11-26 20:49:11.976795] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:21.123 [2024-11-26 20:49:11.976806] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:21.123 [2024-11-26 20:49:11.976815] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:21.123 [2024-11-26 20:49:11.977472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:21.381 20:49:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:21.381 20:49:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:08:21.381 20:49:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:21.381 20:49:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:21.381 20:49:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:21.381 20:49:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:21.381 20:49:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:21.381 20:49:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:21.381 20:49:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.381 20:49:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:21.381 [2024-11-26 20:49:12.143053] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:21.381 20:49:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.381 20:49:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:21.381 20:49:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.381 20:49:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:21.381 20:49:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.381 20:49:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:21.381 20:49:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.381 20:49:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:21.381 [2024-11-26 20:49:12.159327] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:21.381 20:49:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.381 20:49:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:21.381 20:49:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.381 20:49:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:21.381 20:49:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.381 20:49:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:21.381 20:49:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.381 20:49:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:21.381 malloc0 00:08:21.381 20:49:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:08:21.381 20:49:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:21.381 20:49:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.381 20:49:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:21.381 20:49:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.381 20:49:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:21.381 20:49:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:21.381 20:49:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:21.381 20:49:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:21.381 20:49:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:21.381 20:49:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:21.381 { 00:08:21.381 "params": { 00:08:21.381 "name": "Nvme$subsystem", 00:08:21.381 "trtype": "$TEST_TRANSPORT", 00:08:21.381 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:21.381 "adrfam": "ipv4", 00:08:21.381 "trsvcid": "$NVMF_PORT", 00:08:21.381 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:21.381 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:21.381 "hdgst": ${hdgst:-false}, 00:08:21.381 "ddgst": ${ddgst:-false} 00:08:21.381 }, 00:08:21.381 "method": "bdev_nvme_attach_controller" 00:08:21.381 } 00:08:21.381 EOF 00:08:21.381 )") 00:08:21.381 20:49:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:21.381 20:49:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
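Stripped of the xtrace noise, the target-side RPC sequence that target/zcopy.sh issued above is short. The sketch below replays it in dry-run form: this `rpc_cmd` only echoes, whereas in the harness it wraps SPDK's scripts/rpc.py against /var/tmp/spdk.sock.

```shell
# Dry-run replay of the RPC sequence from target/zcopy.sh traced above.
# Here rpc_cmd just echoes; the real harness routes it through scripts/rpc.py.
rpc_cmd() { printf 'rpc.py %s\n' "$*"; }

rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy    # TCP transport with zero-copy enabled
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc_cmd bdev_malloc_create 32 4096 -b malloc0           # RAM-backed bdev, 4096-byte blocks
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
```

The `--zcopy` flag on the transport is the point of this test; everything after it is the standard subsystem/listener/namespace setup.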
00:08:21.381 20:49:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:21.381 20:49:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:21.381 "params": { 00:08:21.382 "name": "Nvme1", 00:08:21.382 "trtype": "tcp", 00:08:21.382 "traddr": "10.0.0.2", 00:08:21.382 "adrfam": "ipv4", 00:08:21.382 "trsvcid": "4420", 00:08:21.382 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:21.382 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:21.382 "hdgst": false, 00:08:21.382 "ddgst": false 00:08:21.382 }, 00:08:21.382 "method": "bdev_nvme_attach_controller" 00:08:21.382 }' 00:08:21.382 [2024-11-26 20:49:12.244571] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:08:21.382 [2024-11-26 20:49:12.244640] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3891035 ] 00:08:21.382 [2024-11-26 20:49:12.317287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.640 [2024-11-26 20:49:12.381935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.897 Running I/O for 10 seconds... 
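As a sanity check on the bdevperf summary that follows: the run uses 8192-byte I/Os (`-o 8192`), so the reported IOPS and MiB/s columns should agree to within rounding. A one-liner confirms the Nvme1n1 row (5432.62 IOPS versus 42.44 MiB/s):

```shell
# Cross-check bdevperf's summary row: throughput = IOPS * io_size.
iops=5432.62
io_size=8192     # from the bdevperf invocation: -o 8192
awk -v iops="$iops" -v sz="$io_size" \
    'BEGIN { printf "%.2f MiB/s\n", iops * sz / (1024 * 1024) }'
# prints: 42.44 MiB/s
```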
00:08:24.203 5355.00 IOPS, 41.84 MiB/s [2024-11-26T19:49:16.073Z]
5393.50 IOPS, 42.14 MiB/s [2024-11-26T19:49:17.003Z]
5441.67 IOPS, 42.51 MiB/s [2024-11-26T19:49:17.935Z]
5445.00 IOPS, 42.54 MiB/s [2024-11-26T19:49:18.886Z]
5446.20 IOPS, 42.55 MiB/s [2024-11-26T19:49:19.844Z]
5440.83 IOPS, 42.51 MiB/s [2024-11-26T19:49:20.776Z]
5429.86 IOPS, 42.42 MiB/s [2024-11-26T19:49:22.147Z]
5428.00 IOPS, 42.41 MiB/s [2024-11-26T19:49:23.080Z]
5426.44 IOPS, 42.39 MiB/s [2024-11-26T19:49:23.080Z]
5431.00 IOPS, 42.43 MiB/s
00:08:32.142 Latency(us)
00:08:32.142 [2024-11-26T19:49:23.080Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:32.142 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:08:32.142 Verification LBA range: start 0x0 length 0x1000
00:08:32.142 Nvme1n1 : 10.02 5432.62 42.44 0.00 0.00 23496.60 1686.95 33399.09
00:08:32.142 [2024-11-26T19:49:23.080Z] ===================================================================================================================
00:08:32.142 [2024-11-26T19:49:23.080Z] Total : 5432.62 42.44 0.00 0.00 23496.60 1686.95 33399.09
00:08:32.142 20:49:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3892341
00:08:32.142 20:49:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:08:32.142 20:49:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:32.142 20:49:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:08:32.142 20:49:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:08:32.142 20:49:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:08:32.142 20:49:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:08:32.142 20:49:23
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:32.142 20:49:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:32.142 { 00:08:32.142 "params": { 00:08:32.142 "name": "Nvme$subsystem", 00:08:32.142 "trtype": "$TEST_TRANSPORT", 00:08:32.142 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:32.142 "adrfam": "ipv4", 00:08:32.142 "trsvcid": "$NVMF_PORT", 00:08:32.142 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:32.142 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:32.142 "hdgst": ${hdgst:-false}, 00:08:32.142 "ddgst": ${ddgst:-false} 00:08:32.142 }, 00:08:32.142 "method": "bdev_nvme_attach_controller" 00:08:32.142 } 00:08:32.142 EOF 00:08:32.142 )") 00:08:32.142 20:49:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:32.142 [2024-11-26 20:49:23.024615] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.142 [2024-11-26 20:49:23.024664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.142 20:49:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:08:32.142 20:49:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:32.142 20:49:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:32.142 "params": { 00:08:32.142 "name": "Nvme1", 00:08:32.142 "trtype": "tcp", 00:08:32.142 "traddr": "10.0.0.2", 00:08:32.142 "adrfam": "ipv4", 00:08:32.142 "trsvcid": "4420", 00:08:32.142 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:32.142 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:32.142 "hdgst": false, 00:08:32.142 "ddgst": false 00:08:32.142 }, 00:08:32.142 "method": "bdev_nvme_attach_controller" 00:08:32.142 }' 00:08:32.142 [2024-11-26 20:49:23.032573] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.142 [2024-11-26 20:49:23.032604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.142 [2024-11-26 20:49:23.040593] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.142 [2024-11-26 20:49:23.040623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.142 [2024-11-26 20:49:23.048615] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.142 [2024-11-26 20:49:23.048646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.143 [2024-11-26 20:49:23.056635] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.143 [2024-11-26 20:49:23.056665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.143 [2024-11-26 20:49:23.067912] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
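The paired *ERROR* lines that dominate the rest of this section (spdk_nvmf_subsystem_add_ns_ext "Requested NSID 1 already in use" followed by nvmf_rpc_ns_paused "Unable to add namespace", repeating every few milliseconds) appear to come from the test deliberately re-issuing nvmf_subsystem_add_ns for a namespace ID that is already attached while I/O is in flight, so each attempt is rejected. That is an exercised negative path, not a target crash. A dry-run sketch of such a loop (this `rpc_cmd` only echoes, and the iteration count is illustrative):

```shell
# Illustrative dry-run of the retry loop behind the repeated
# "Requested NSID 1 already in use" / "Unable to add namespace" pairs.
# rpc_cmd only echoes here; against the live target each call would fail
# because NSID 1 is already attached to cnode1.
rpc_cmd() { printf 'rpc.py %s\n' "$*"; }

for _ in 1 2 3; do
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
done
```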
00:08:32.143 [2024-11-26 20:49:23.068001] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3892341 ] 00:08:32.143 [2024-11-26 20:49:23.068673] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.143 [2024-11-26 20:49:23.068714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.143 [2024-11-26 20:49:23.076702] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.143 [2024-11-26 20:49:23.076733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.400 [2024-11-26 20:49:23.084723] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.400 [2024-11-26 20:49:23.084754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.400 [2024-11-26 20:49:23.092746] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.400 [2024-11-26 20:49:23.092776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.400 [2024-11-26 20:49:23.100781] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.400 [2024-11-26 20:49:23.100808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.400 [2024-11-26 20:49:23.108800] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.400 [2024-11-26 20:49:23.108827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.400 [2024-11-26 20:49:23.116812] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.400 [2024-11-26 20:49:23.116838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:08:32.400 [2024-11-26 20:49:23.124833] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.400 [2024-11-26 20:49:23.124863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.400 [2024-11-26 20:49:23.132842] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.400 [2024-11-26 20:49:23.132870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.400 [2024-11-26 20:49:23.140647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.400 [2024-11-26 20:49:23.140865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.400 [2024-11-26 20:49:23.140892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.400 [2024-11-26 20:49:23.148928] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.400 [2024-11-26 20:49:23.148993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.400 [2024-11-26 20:49:23.156950] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.400 [2024-11-26 20:49:23.157011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.400 [2024-11-26 20:49:23.164929] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.400 [2024-11-26 20:49:23.164955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.400 [2024-11-26 20:49:23.172951] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.400 [2024-11-26 20:49:23.172992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.400 [2024-11-26 20:49:23.180983] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.400 [2024-11-26 20:49:23.181007] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.400 [2024-11-26 20:49:23.189005] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.400 [2024-11-26 20:49:23.189045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.400 [2024-11-26 20:49:23.197036] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.400 [2024-11-26 20:49:23.197066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.400 [2024-11-26 20:49:23.205060] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.400 [2024-11-26 20:49:23.205090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.400 [2024-11-26 20:49:23.206610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.400 [2024-11-26 20:49:23.213084] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.401 [2024-11-26 20:49:23.213114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.401 [2024-11-26 20:49:23.221119] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.401 [2024-11-26 20:49:23.221151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.401 [2024-11-26 20:49:23.229158] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.401 [2024-11-26 20:49:23.229207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.401 [2024-11-26 20:49:23.237180] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.401 [2024-11-26 20:49:23.237227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.401 [2024-11-26 20:49:23.245205] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:08:32.401 [2024-11-26 20:49:23.245252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.401 [2024-11-26 20:49:23.253228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.401 [2024-11-26 20:49:23.253276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.401 [2024-11-26 20:49:23.261251] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.401 [2024-11-26 20:49:23.261301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.401 [2024-11-26 20:49:23.269269] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.401 [2024-11-26 20:49:23.269318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.401 [2024-11-26 20:49:23.277257] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.401 [2024-11-26 20:49:23.277287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.401 [2024-11-26 20:49:23.285325] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.401 [2024-11-26 20:49:23.285376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.401 [2024-11-26 20:49:23.293338] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.401 [2024-11-26 20:49:23.293389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.401 [2024-11-26 20:49:23.301356] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.401 [2024-11-26 20:49:23.301402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.401 [2024-11-26 20:49:23.309347] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.401 [2024-11-26 
20:49:23.309378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.401 [2024-11-26 20:49:23.317367] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.401 [2024-11-26 20:49:23.317397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.401 [2024-11-26 20:49:23.325388] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.401 [2024-11-26 20:49:23.325418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.401 [2024-11-26 20:49:23.333414] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.401 [2024-11-26 20:49:23.333445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.658 [2024-11-26 20:49:23.341436] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.658 [2024-11-26 20:49:23.341467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.658 [2024-11-26 20:49:23.349456] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.658 [2024-11-26 20:49:23.349486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.658 [2024-11-26 20:49:23.357478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.658 [2024-11-26 20:49:23.357509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.658 [2024-11-26 20:49:23.365505] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.658 [2024-11-26 20:49:23.365535] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.658 [2024-11-26 20:49:23.373528] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.658 [2024-11-26 20:49:23.373559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:08:32.658 [2024-11-26 20:49:23.381551] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.658 [2024-11-26 20:49:23.381580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.658 [2024-11-26 20:49:23.389573] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.658 [2024-11-26 20:49:23.389603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.658 [2024-11-26 20:49:23.397597] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.658 [2024-11-26 20:49:23.397626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.658 [2024-11-26 20:49:23.405622] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.658 [2024-11-26 20:49:23.405651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.658 [2024-11-26 20:49:23.413647] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.658 [2024-11-26 20:49:23.413677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.658 [2024-11-26 20:49:23.421669] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.658 [2024-11-26 20:49:23.421709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.658 [2024-11-26 20:49:23.429704] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.658 [2024-11-26 20:49:23.429745] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.658 [2024-11-26 20:49:23.437742] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.658 [2024-11-26 20:49:23.437768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.658 
[2024-11-26 20:49:23.445770] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.658 [2024-11-26 20:49:23.445795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.658 [2024-11-26 20:49:23.453782] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.658 [2024-11-26 20:49:23.453808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.658 [2024-11-26 20:49:23.461794] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.658 [2024-11-26 20:49:23.461819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.658 [2024-11-26 20:49:23.469814] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.658 [2024-11-26 20:49:23.469839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.658 [2024-11-26 20:49:23.477824] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.658 [2024-11-26 20:49:23.477849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.658 [2024-11-26 20:49:23.485845] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.658 [2024-11-26 20:49:23.485870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.658 [2024-11-26 20:49:23.493883] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.658 [2024-11-26 20:49:23.493908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.658 [2024-11-26 20:49:23.501896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.658 [2024-11-26 20:49:23.501921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.658 [2024-11-26 20:49:23.509919] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:32.658 [2024-11-26 20:49:23.509945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:32.658 Running I/O for 5 seconds...
00:08:32.658 [... the two *ERROR* messages above repeat from 2024-11-26 20:49:23.517938 through 2024-11-26 20:49:25.519234 ...]
00:08:33.695 10883.00 IOPS, 85.02 MiB/s [2024-11-26T19:49:24.633Z]
add namespace 00:08:34.729 10935.00 IOPS, 85.43 MiB/s [2024-11-26T19:49:25.667Z] [2024-11-26 20:49:25.532761] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.729 [2024-11-26 20:49:25.532789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.729 [2024-11-26 20:49:25.543384] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.729 [2024-11-26 20:49:25.543414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.729 [2024-11-26 20:49:25.554873] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.729 [2024-11-26 20:49:25.554902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.729 [2024-11-26 20:49:25.566479] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.729 [2024-11-26 20:49:25.566510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.729 [2024-11-26 20:49:25.578402] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.729 [2024-11-26 20:49:25.578433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.729 [2024-11-26 20:49:25.589615] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.729 [2024-11-26 20:49:25.589646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.729 [2024-11-26 20:49:25.601656] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.729 [2024-11-26 20:49:25.601695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.729 [2024-11-26 20:49:25.613430] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.729 [2024-11-26 20:49:25.613462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:34.729 [2024-11-26 20:49:25.625069] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.729 [2024-11-26 20:49:25.625100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.729 [2024-11-26 20:49:25.638663] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.729 [2024-11-26 20:49:25.638703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.729 [2024-11-26 20:49:25.649903] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.729 [2024-11-26 20:49:25.649931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.729 [2024-11-26 20:49:25.661265] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.729 [2024-11-26 20:49:25.661295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.987 [2024-11-26 20:49:25.672818] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.987 [2024-11-26 20:49:25.672846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.987 [2024-11-26 20:49:25.684439] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.987 [2024-11-26 20:49:25.684469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.987 [2024-11-26 20:49:25.695972] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.987 [2024-11-26 20:49:25.696017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.987 [2024-11-26 20:49:25.707456] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.987 [2024-11-26 20:49:25.707486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.987 [2024-11-26 20:49:25.719046] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.987 [2024-11-26 20:49:25.719077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.987 [2024-11-26 20:49:25.731007] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.987 [2024-11-26 20:49:25.731050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.987 [2024-11-26 20:49:25.742437] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.987 [2024-11-26 20:49:25.742468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.987 [2024-11-26 20:49:25.753939] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.987 [2024-11-26 20:49:25.753967] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.987 [2024-11-26 20:49:25.765354] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.987 [2024-11-26 20:49:25.765384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.987 [2024-11-26 20:49:25.776882] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.987 [2024-11-26 20:49:25.776910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.987 [2024-11-26 20:49:25.788508] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.987 [2024-11-26 20:49:25.788538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.987 [2024-11-26 20:49:25.800161] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.987 [2024-11-26 20:49:25.800191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.987 [2024-11-26 20:49:25.812009] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:34.987 [2024-11-26 20:49:25.812039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.987 [2024-11-26 20:49:25.825407] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.987 [2024-11-26 20:49:25.825438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.987 [2024-11-26 20:49:25.835435] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.987 [2024-11-26 20:49:25.835465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.987 [2024-11-26 20:49:25.848126] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.987 [2024-11-26 20:49:25.848156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.987 [2024-11-26 20:49:25.859963] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.987 [2024-11-26 20:49:25.859992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.987 [2024-11-26 20:49:25.870790] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.987 [2024-11-26 20:49:25.870820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.987 [2024-11-26 20:49:25.882123] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.987 [2024-11-26 20:49:25.882155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.987 [2024-11-26 20:49:25.895718] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.987 [2024-11-26 20:49:25.895763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.987 [2024-11-26 20:49:25.906671] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.987 
[2024-11-26 20:49:25.906712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.987 [2024-11-26 20:49:25.918500] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.987 [2024-11-26 20:49:25.918531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.246 [2024-11-26 20:49:25.930239] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.246 [2024-11-26 20:49:25.930271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.246 [2024-11-26 20:49:25.942069] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.246 [2024-11-26 20:49:25.942103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.246 [2024-11-26 20:49:25.954227] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.246 [2024-11-26 20:49:25.954259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.246 [2024-11-26 20:49:25.967234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.246 [2024-11-26 20:49:25.967267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.246 [2024-11-26 20:49:25.979731] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.246 [2024-11-26 20:49:25.979760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.246 [2024-11-26 20:49:25.992422] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.246 [2024-11-26 20:49:25.992455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.246 [2024-11-26 20:49:26.005351] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.246 [2024-11-26 20:49:26.005383] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.246 [2024-11-26 20:49:26.017572] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.246 [2024-11-26 20:49:26.017603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.246 [2024-11-26 20:49:26.030310] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.246 [2024-11-26 20:49:26.030342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.246 [2024-11-26 20:49:26.043308] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.246 [2024-11-26 20:49:26.043339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.246 [2024-11-26 20:49:26.055952] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.246 [2024-11-26 20:49:26.055998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.246 [2024-11-26 20:49:26.068317] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.246 [2024-11-26 20:49:26.068349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.246 [2024-11-26 20:49:26.081044] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.246 [2024-11-26 20:49:26.081076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.246 [2024-11-26 20:49:26.093615] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.246 [2024-11-26 20:49:26.093647] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.246 [2024-11-26 20:49:26.106011] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.246 [2024-11-26 20:49:26.106043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:35.246 [2024-11-26 20:49:26.118051] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.246 [2024-11-26 20:49:26.118082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.246 [2024-11-26 20:49:26.130855] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.246 [2024-11-26 20:49:26.130884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.246 [2024-11-26 20:49:26.143156] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.246 [2024-11-26 20:49:26.143190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.246 [2024-11-26 20:49:26.155505] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.246 [2024-11-26 20:49:26.155536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.246 [2024-11-26 20:49:26.167851] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.246 [2024-11-26 20:49:26.167880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.246 [2024-11-26 20:49:26.180413] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.246 [2024-11-26 20:49:26.180447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.505 [2024-11-26 20:49:26.192616] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.505 [2024-11-26 20:49:26.192647] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.505 [2024-11-26 20:49:26.204698] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.505 [2024-11-26 20:49:26.204730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.505 [2024-11-26 20:49:26.216809] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.505 [2024-11-26 20:49:26.216838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.505 [2024-11-26 20:49:26.229087] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.505 [2024-11-26 20:49:26.229120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.505 [2024-11-26 20:49:26.241288] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.505 [2024-11-26 20:49:26.241320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.505 [2024-11-26 20:49:26.253605] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.505 [2024-11-26 20:49:26.253636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.505 [2024-11-26 20:49:26.265783] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.505 [2024-11-26 20:49:26.265813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.505 [2024-11-26 20:49:26.278285] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.505 [2024-11-26 20:49:26.278317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.505 [2024-11-26 20:49:26.290825] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.505 [2024-11-26 20:49:26.290854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.505 [2024-11-26 20:49:26.303264] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.505 [2024-11-26 20:49:26.303295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.505 [2024-11-26 20:49:26.315384] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:35.505 [2024-11-26 20:49:26.315427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.505 [2024-11-26 20:49:26.327331] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.505 [2024-11-26 20:49:26.327362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.505 [2024-11-26 20:49:26.339096] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.505 [2024-11-26 20:49:26.339142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.505 [2024-11-26 20:49:26.351153] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.505 [2024-11-26 20:49:26.351186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.505 [2024-11-26 20:49:26.362880] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.505 [2024-11-26 20:49:26.362909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.505 [2024-11-26 20:49:26.375007] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.505 [2024-11-26 20:49:26.375036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.505 [2024-11-26 20:49:26.387876] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.505 [2024-11-26 20:49:26.387905] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.505 [2024-11-26 20:49:26.399832] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.505 [2024-11-26 20:49:26.399861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.505 [2024-11-26 20:49:26.412124] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.505 
[2024-11-26 20:49:26.412157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.505 [2024-11-26 20:49:26.424311] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.505 [2024-11-26 20:49:26.424344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.505 [2024-11-26 20:49:26.436137] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.505 [2024-11-26 20:49:26.436169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.763 [2024-11-26 20:49:26.448630] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.763 [2024-11-26 20:49:26.448662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.763 [2024-11-26 20:49:26.460669] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.763 [2024-11-26 20:49:26.460709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.763 [2024-11-26 20:49:26.473071] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.763 [2024-11-26 20:49:26.473104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.763 [2024-11-26 20:49:26.485221] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.763 [2024-11-26 20:49:26.485252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.763 [2024-11-26 20:49:26.497553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.763 [2024-11-26 20:49:26.497584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.764 [2024-11-26 20:49:26.509695] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.764 [2024-11-26 20:49:26.509726] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.764 [2024-11-26 20:49:26.522018] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.764 [2024-11-26 20:49:26.522050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.764 10821.67 IOPS, 84.54 MiB/s [2024-11-26T19:49:26.702Z] [2024-11-26 20:49:26.534328] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.764 [2024-11-26 20:49:26.534360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.764 [2024-11-26 20:49:26.546342] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.764 [2024-11-26 20:49:26.546384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.764 [2024-11-26 20:49:26.558563] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.764 [2024-11-26 20:49:26.558594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.764 [2024-11-26 20:49:26.570673] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.764 [2024-11-26 20:49:26.570717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.764 [2024-11-26 20:49:26.583250] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.764 [2024-11-26 20:49:26.583283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.764 [2024-11-26 20:49:26.595658] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.764 [2024-11-26 20:49:26.595701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.764 [2024-11-26 20:49:26.607442] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.764 [2024-11-26 20:49:26.607473] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.764 [2024-11-26 20:49:26.619823] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.764 [2024-11-26 20:49:26.619852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.764 [2024-11-26 20:49:26.632245] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.764 [2024-11-26 20:49:26.632277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.764 [2024-11-26 20:49:26.644334] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.764 [2024-11-26 20:49:26.644367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.764 [2024-11-26 20:49:26.656532] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.764 [2024-11-26 20:49:26.656564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.764 [2024-11-26 20:49:26.668515] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.764 [2024-11-26 20:49:26.668546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.764 [2024-11-26 20:49:26.680243] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.764 [2024-11-26 20:49:26.680274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.764 [2024-11-26 20:49:26.693905] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.764 [2024-11-26 20:49:26.693934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.022 [2024-11-26 20:49:26.704828] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.022 [2024-11-26 20:49:26.704856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:36.022 [2024-11-26 20:49:26.716118] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.022 [2024-11-26 20:49:26.716149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.022 [2024-11-26 20:49:26.727339] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.022 [2024-11-26 20:49:26.727371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.022 [2024-11-26 20:49:26.738916] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.022 [2024-11-26 20:49:26.738944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.022 [2024-11-26 20:49:26.750458] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.022 [2024-11-26 20:49:26.750489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.022 [2024-11-26 20:49:26.763888] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.022 [2024-11-26 20:49:26.763916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.022 [2024-11-26 20:49:26.774598] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.022 [2024-11-26 20:49:26.774636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.022 [2024-11-26 20:49:26.786067] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.022 [2024-11-26 20:49:26.786109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.022 [2024-11-26 20:49:26.799424] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.022 [2024-11-26 20:49:26.799454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.022 [2024-11-26 20:49:26.810044] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.022 [2024-11-26 20:49:26.810076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-message pair (spdk_nvmf_subsystem_add_ns_ext / nvmf_rpc_ns_paused) repeats at roughly 11 ms intervals from 20:49:26.821 through 20:49:27.533 ...]
00:08:36.797 10833.75 IOPS, 84.64 MiB/s [2024-11-26T19:49:27.735Z]
[... the same error pair continues repeating from 20:49:27.544 through 20:49:28.527 ...]
00:08:37.830 10872.40 IOPS, 84.94 MiB/s [2024-11-26T19:49:28.768Z] [2024-11-26 20:49:28.540583] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext:
*ERROR*: Requested NSID 1 already in use 00:08:37.830 [2024-11-26 20:49:28.540614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.830 [2024-11-26 20:49:28.548297] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.830 [2024-11-26 20:49:28.548326] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:37.830 Latency(us)
00:08:37.830 [2024-11-26T19:49:28.768Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:37.830 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:08:37.830 Nvme1n1 : 5.01 10873.93 84.95 0.00 0.00 11755.51 5267.15 21165.70
00:08:37.830 [2024-11-26T19:49:28.768Z] ===================================================================================================================
00:08:37.830 [2024-11-26T19:49:28.768Z] Total : 10873.93 84.95 0.00 0.00 11755.51 5267.15 21165.70
00:08:37.830 [2024-11-26 20:49:28.556318] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.830 [2024-11-26 20:49:28.556348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-message pair repeats at roughly 8 ms intervals from 20:49:28.564 through 20:49:28.712 ...]
00:08:37.830 [2024-11-26 20:49:28.720771] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.830 
[2024-11-26 20:49:28.720792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.830 [2024-11-26 20:49:28.728788] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.830 [2024-11-26 20:49:28.728809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.830 [2024-11-26 20:49:28.736828] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.830 [2024-11-26 20:49:28.736871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.830 [2024-11-26 20:49:28.744866] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.830 [2024-11-26 20:49:28.744917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.830 [2024-11-26 20:49:28.752909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.830 [2024-11-26 20:49:28.752978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.830 [2024-11-26 20:49:28.760851] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.830 [2024-11-26 20:49:28.760872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.089 [2024-11-26 20:49:28.768877] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.089 [2024-11-26 20:49:28.768899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.089 [2024-11-26 20:49:28.776893] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.089 [2024-11-26 20:49:28.776915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3892341) - No such process 00:08:38.089 20:49:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
target/zcopy.sh@49 -- # wait 3892341 00:08:38.089 20:49:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:38.089 20:49:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.089 20:49:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:38.089 20:49:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.089 20:49:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:38.089 20:49:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.089 20:49:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:38.089 delay0 00:08:38.089 20:49:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.089 20:49:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:08:38.089 20:49:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.089 20:49:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:38.089 20:49:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.089 20:49:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:08:38.089 [2024-11-26 20:49:28.904792] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:44.673 Initializing NVMe Controllers 
00:08:44.673 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:44.673 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:44.673 Initialization complete. Launching workers. 00:08:44.673 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 696 00:08:44.673 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 983, failed to submit 33 00:08:44.673 success 822, unsuccessful 161, failed 0 00:08:44.673 20:49:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:08:44.673 20:49:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:08:44.673 20:49:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:44.673 20:49:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:08:44.673 20:49:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:44.673 20:49:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:08:44.673 20:49:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:44.673 20:49:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:44.673 rmmod nvme_tcp 00:08:44.673 rmmod nvme_fabrics 00:08:44.673 rmmod nvme_keyring 00:08:44.673 20:49:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:44.673 20:49:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:08:44.673 20:49:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:08:44.673 20:49:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3890999 ']' 00:08:44.673 20:49:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3890999 00:08:44.673 20:49:35 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 3890999 ']' 00:08:44.673 20:49:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 3890999 00:08:44.673 20:49:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:08:44.673 20:49:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:44.673 20:49:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3890999 00:08:44.673 20:49:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:44.673 20:49:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:44.673 20:49:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3890999' 00:08:44.673 killing process with pid 3890999 00:08:44.673 20:49:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 3890999 00:08:44.673 20:49:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 3890999 00:08:44.673 20:49:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:44.673 20:49:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:44.673 20:49:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:44.673 20:49:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:08:44.673 20:49:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:08:44.673 20:49:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:44.673 20:49:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:08:44.673 20:49:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # 
[[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:44.673 20:49:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:44.673 20:49:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:44.673 20:49:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:44.673 20:49:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:46.578 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:46.578 00:08:46.578 real 0m28.158s 00:08:46.578 user 0m41.696s 00:08:46.578 sys 0m8.240s 00:08:46.578 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:46.578 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:46.578 ************************************ 00:08:46.578 END TEST nvmf_zcopy 00:08:46.578 ************************************ 00:08:46.838 20:49:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:46.838 20:49:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:46.838 20:49:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:46.838 20:49:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:46.838 ************************************ 00:08:46.838 START TEST nvmf_nmic 00:08:46.838 ************************************ 00:08:46.838 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:46.838 * Looking for test storage... 
00:08:46.838 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:46.838 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:46.838 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:08:46.838 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:46.838 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:46.838 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:46.838 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:46.838 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:46.838 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:08:46.838 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:08:46.838 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:08:46.838 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:08:46.838 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:08:46.838 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:08:46.838 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:08:46.838 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:46.838 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:08:46.838 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:08:46.838 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:46.838 20:49:37 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:46.838 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:08:46.838 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:08:46.838 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:46.838 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:08:46.838 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:08:46.838 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:08:46.838 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:08:46.838 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:46.838 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:08:46.838 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:08:46.838 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:46.838 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:46.838 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:08:46.838 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:46.838 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:46.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.838 --rc genhtml_branch_coverage=1 00:08:46.838 --rc genhtml_function_coverage=1 00:08:46.838 --rc genhtml_legend=1 00:08:46.838 --rc geninfo_all_blocks=1 00:08:46.838 --rc geninfo_unexecuted_blocks=1 
00:08:46.838 00:08:46.838 ' 00:08:46.838 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:46.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.838 --rc genhtml_branch_coverage=1 00:08:46.838 --rc genhtml_function_coverage=1 00:08:46.838 --rc genhtml_legend=1 00:08:46.838 --rc geninfo_all_blocks=1 00:08:46.839 --rc geninfo_unexecuted_blocks=1 00:08:46.839 00:08:46.839 ' 00:08:46.839 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:46.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.839 --rc genhtml_branch_coverage=1 00:08:46.839 --rc genhtml_function_coverage=1 00:08:46.839 --rc genhtml_legend=1 00:08:46.839 --rc geninfo_all_blocks=1 00:08:46.839 --rc geninfo_unexecuted_blocks=1 00:08:46.839 00:08:46.839 ' 00:08:46.839 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:46.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.839 --rc genhtml_branch_coverage=1 00:08:46.839 --rc genhtml_function_coverage=1 00:08:46.839 --rc genhtml_legend=1 00:08:46.839 --rc geninfo_all_blocks=1 00:08:46.839 --rc geninfo_unexecuted_blocks=1 00:08:46.839 00:08:46.839 ' 00:08:46.839 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:46.839 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:08:46.839 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:46.839 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:46.839 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:46.839 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:46.839 20:49:37 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:46.839 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:46.839 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:46.839 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:46.839 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:46.839 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:46.839 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:46.839 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:46.839 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:46.839 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:46.839 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:46.839 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:46.839 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:46.839 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:08:46.839 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:46.839 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:46.839 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:46.839 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.839 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.839 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.839 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:08:46.839 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.839 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:08:46.839 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:46.839 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:46.839 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:46.839 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:46.839 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:46.839 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:46.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:46.839 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:46.839 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:46.839 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:46.839 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:46.839 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:46.839 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:08:46.839 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:46.839 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:46.839 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:46.839 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:46.839 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:46.839 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:46.839 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:46.839 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:46.839 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:46.839 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:46.839 
20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:08:46.839 20:49:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:48.742 20:49:39 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:48.742 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:48.742 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:48.742 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:48.742 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:48.742 
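The device-detection loop above leans on two bash idioms that are easy to misread in xtrace output: the backslash-escaped comparisons like `[[ 0x159b == \0\x\1\0\1\7 ]]` (escaping forces a literal match, since `==` inside `[[ ]]` otherwise does glob matching), and `"${pci_net_devs[@]##*/}"` at common.sh@427, which strips the sysfs path prefix to leave bare interface names. A standalone sketch of both (device id and path are taken from this run; nothing here is SPDK-specific):

```shell
#!/usr/bin/env bash
# Sketch of the two idioms used by nvmf/common.sh above (values from this log).

dev_id="0x159b"   # Intel E810 device id seen on 0000:0a:00.0
# Backslash-escaping every character makes == a literal string compare,
# so 0x159b does not glob-match against the Mellanox id 0x1017:
if [[ $dev_id == \0\x\1\0\1\7 ]]; then
    echo "matched mlx 0x1017"
else
    echo "not 0x1017"
fi

# ##*/ removes the longest prefix ending in '/' from every array element,
# turning sysfs net-device paths into interface names:
pci_net_devs=("/sys/bus/pci/devices/0000:0a:00.0/net/cvl_0_0")
pci_net_devs=("${pci_net_devs[@]##*/}")
echo "${pci_net_devs[0]}"   # -> cvl_0_0
```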
20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:48.742 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:49.001 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:49.001 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:49.001 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:49.001 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:49.001 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:49.001 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:49.001 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:49.001 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:49.001 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:49.001 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:08:49.001 00:08:49.001 --- 10.0.0.2 ping statistics --- 00:08:49.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.001 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:08:49.001 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:49.001 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:49.001 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:08:49.001 00:08:49.001 --- 10.0.0.1 ping statistics --- 00:08:49.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.001 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:08:49.001 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:49.001 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:08:49.001 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:49.001 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:49.001 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:49.001 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:49.001 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:49.001 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:49.001 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:49.001 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:08:49.001 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:49.001 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:49.001 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:49.001 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3895736 00:08:49.001 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
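The nvmf_tcp_init sequence above (common.sh@250-291) isolates one E810 port in a network namespace so target and initiator traffic cross real hardware rather than loopback. Condensed as a configuration recipe, with the interface names from this particular machine (cvl_0_0/cvl_0_1) and requiring root, the steps are:

```shell
# Recipe mirroring the namespace topology built in the log (assumes root
# and the cvl_0_0/cvl_0_1 names from this run; not generally portable).
ip netns add cvl_0_0_ns_spdk                       # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator IP, root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
ping -c 1 10.0.0.2                                 # sanity check across the wire
```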
00:08:49.001 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 3895736 00:08:49.001 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 3895736 ']' 00:08:49.001 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.001 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:49.001 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:49.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:49.001 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:49.001 20:49:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:49.001 [2024-11-26 20:49:39.866088] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:08:49.001 [2024-11-26 20:49:39.866171] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:49.260 [2024-11-26 20:49:39.941266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:49.260 [2024-11-26 20:49:40.007012] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:49.260 [2024-11-26 20:49:40.007083] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:49.260 [2024-11-26 20:49:40.007100] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:49.260 [2024-11-26 20:49:40.007113] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:49.260 [2024-11-26 20:49:40.007124] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:49.260 [2024-11-26 20:49:40.008785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:49.260 [2024-11-26 20:49:40.008845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:49.260 [2024-11-26 20:49:40.008906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:49.260 [2024-11-26 20:49:40.008909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.260 20:49:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:49.260 20:49:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:08:49.260 20:49:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:49.260 20:49:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:49.260 20:49:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:49.260 20:49:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:49.260 20:49:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:49.260 20:49:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.260 20:49:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:49.260 [2024-11-26 20:49:40.176864] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:49.260 
20:49:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.260 20:49:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:49.260 20:49:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.260 20:49:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:49.519 Malloc0 00:08:49.519 20:49:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.519 20:49:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:49.519 20:49:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.519 20:49:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:49.519 20:49:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.519 20:49:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:49.519 20:49:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.519 20:49:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:49.519 20:49:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.519 20:49:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:49.519 20:49:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.519 20:49:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:49.519 [2024-11-26 20:49:40.242882] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:49.519 20:49:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.519 20:49:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:08:49.519 test case1: single bdev can't be used in multiple subsystems 00:08:49.519 20:49:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:08:49.519 20:49:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.519 20:49:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:49.519 20:49:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.519 20:49:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:49.519 20:49:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.519 20:49:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:49.519 20:49:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.519 20:49:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:08:49.519 20:49:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:08:49.519 20:49:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.519 20:49:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:49.519 [2024-11-26 20:49:40.266675] bdev.c:8323:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:08:49.519 [2024-11-26 
20:49:40.266713] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:08:49.519 [2024-11-26 20:49:40.266728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.519 request: 00:08:49.519 { 00:08:49.519 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:49.519 "namespace": { 00:08:49.519 "bdev_name": "Malloc0", 00:08:49.519 "no_auto_visible": false 00:08:49.519 }, 00:08:49.519 "method": "nvmf_subsystem_add_ns", 00:08:49.519 "req_id": 1 00:08:49.519 } 00:08:49.519 Got JSON-RPC error response 00:08:49.519 response: 00:08:49.519 { 00:08:49.519 "code": -32602, 00:08:49.519 "message": "Invalid parameters" 00:08:49.519 } 00:08:49.519 20:49:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:49.519 20:49:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:08:49.519 20:49:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:08:49.519 20:49:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:08:49.519 Adding namespace failed - expected result. 
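The JSON-RPC error above is the point of test case1: a malloc bdev is claimed `exclusive_write` by the first subsystem, so adding it to a second subsystem must fail with -32602. A condensed reproduction against a running nvmf_tgt, using the same RPC methods visible in the log (the socket path and a target already listening on it are assumptions; rpc.py ships in SPDK's scripts/ directory):

```shell
# Sketch reproducing test case1 by hand (assumes nvmf_tgt is running and
# rpc.py talks to the default /var/tmp/spdk.sock).
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # succeeds
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # expected to fail:
                                # bdev Malloc0 already claimed by cnode1
```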
00:08:49.519 20:49:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:08:49.519 test case2: host connect to nvmf target in multiple paths 00:08:49.519 20:49:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:08:49.519 20:49:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.519 20:49:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:49.519 [2024-11-26 20:49:40.274835] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:08:49.519 20:49:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.519 20:49:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:50.086 20:49:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:08:50.650 20:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:08:50.650 20:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:08:50.650 20:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:08:50.650 20:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:08:50.650 20:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 
00:08:53.177 20:49:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:08:53.177 20:49:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:08:53.177 20:49:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:08:53.177 20:49:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:08:53.177 20:49:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:08:53.177 20:49:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:08:53.177 20:49:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:53.177 [global] 00:08:53.177 thread=1 00:08:53.177 invalidate=1 00:08:53.177 rw=write 00:08:53.177 time_based=1 00:08:53.177 runtime=1 00:08:53.177 ioengine=libaio 00:08:53.177 direct=1 00:08:53.177 bs=4096 00:08:53.177 iodepth=1 00:08:53.177 norandommap=0 00:08:53.177 numjobs=1 00:08:53.177 00:08:53.177 verify_dump=1 00:08:53.177 verify_backlog=512 00:08:53.177 verify_state_save=0 00:08:53.177 do_verify=1 00:08:53.177 verify=crc32c-intel 00:08:53.177 [job0] 00:08:53.177 filename=/dev/nvme0n1 00:08:53.177 Could not set queue depth (nvme0n1) 00:08:53.177 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:53.177 fio-3.35 00:08:53.177 Starting 1 thread 00:08:54.111 00:08:54.111 job0: (groupid=0, jobs=1): err= 0: pid=3896251: Tue Nov 26 20:49:44 2024 00:08:54.111 read: IOPS=1803, BW=7213KiB/s (7386kB/s)(7220KiB/1001msec) 00:08:54.111 slat (nsec): min=5613, max=47961, avg=12745.02, stdev=5823.22 00:08:54.111 clat (usec): min=238, max=422, avg=289.65, stdev=19.15 00:08:54.111 lat (usec): min=244, max=439, 
avg=302.39, stdev=23.10 00:08:54.111 clat percentiles (usec): 00:08:54.111 | 1.00th=[ 251], 5.00th=[ 260], 10.00th=[ 265], 20.00th=[ 273], 00:08:54.111 | 30.00th=[ 281], 40.00th=[ 285], 50.00th=[ 293], 60.00th=[ 297], 00:08:54.111 | 70.00th=[ 302], 80.00th=[ 306], 90.00th=[ 314], 95.00th=[ 318], 00:08:54.111 | 99.00th=[ 330], 99.50th=[ 334], 99.90th=[ 375], 99.95th=[ 424], 00:08:54.111 | 99.99th=[ 424] 00:08:54.111 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:08:54.111 slat (nsec): min=5643, max=60090, avg=16873.55, stdev=7547.54 00:08:54.111 clat (usec): min=147, max=447, avg=196.41, stdev=28.04 00:08:54.111 lat (usec): min=156, max=489, avg=213.28, stdev=32.19 00:08:54.111 clat percentiles (usec): 00:08:54.111 | 1.00th=[ 155], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 174], 00:08:54.111 | 30.00th=[ 184], 40.00th=[ 192], 50.00th=[ 196], 60.00th=[ 198], 00:08:54.111 | 70.00th=[ 204], 80.00th=[ 210], 90.00th=[ 227], 95.00th=[ 245], 00:08:54.111 | 99.00th=[ 293], 99.50th=[ 322], 99.90th=[ 437], 99.95th=[ 449], 00:08:54.111 | 99.99th=[ 449] 00:08:54.111 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=1 00:08:54.111 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:08:54.111 lat (usec) : 250=51.44%, 500=48.56% 00:08:54.111 cpu : usr=4.80%, sys=7.50%, ctx=3854, majf=0, minf=1 00:08:54.111 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:54.111 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:54.111 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:54.111 issued rwts: total=1805,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:54.111 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:54.111 00:08:54.111 Run status group 0 (all jobs): 00:08:54.111 READ: bw=7213KiB/s (7386kB/s), 7213KiB/s-7213KiB/s (7386kB/s-7386kB/s), io=7220KiB (7393kB), run=1001-1001msec 00:08:54.111 WRITE: bw=8184KiB/s 
(8380kB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=8192KiB (8389kB), run=1001-1001msec 00:08:54.111 00:08:54.111 Disk stats (read/write): 00:08:54.111 nvme0n1: ios=1586/1988, merge=0/0, ticks=443/374, in_queue=817, util=91.88% 00:08:54.111 20:49:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:54.370 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:54.370 20:49:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:54.370 20:49:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:08:54.370 20:49:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:08:54.370 20:49:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:54.370 20:49:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:08:54.370 20:49:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:54.370 20:49:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:08:54.370 20:49:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:08:54.370 20:49:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:08:54.370 20:49:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:54.370 20:49:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:08:54.370 20:49:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:54.370 20:49:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:08:54.370 20:49:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:54.370 20:49:45 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:54.370 rmmod nvme_tcp 00:08:54.370 rmmod nvme_fabrics 00:08:54.370 rmmod nvme_keyring 00:08:54.370 20:49:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:54.370 20:49:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:08:54.370 20:49:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:08:54.370 20:49:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3895736 ']' 00:08:54.370 20:49:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3895736 00:08:54.370 20:49:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 3895736 ']' 00:08:54.370 20:49:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 3895736 00:08:54.370 20:49:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:08:54.370 20:49:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:54.370 20:49:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3895736 00:08:54.370 20:49:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:54.370 20:49:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:54.370 20:49:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3895736' 00:08:54.370 killing process with pid 3895736 00:08:54.370 20:49:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 3895736 00:08:54.370 20:49:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 3895736 00:08:54.628 20:49:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == 
iso ']' 00:08:54.628 20:49:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:54.628 20:49:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:54.628 20:49:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:08:54.628 20:49:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:08:54.628 20:49:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:54.628 20:49:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:08:54.628 20:49:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:54.628 20:49:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:54.628 20:49:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:54.628 20:49:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:54.628 20:49:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:57.161 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:57.161 00:08:57.161 real 0m9.934s 00:08:57.161 user 0m22.635s 00:08:57.161 sys 0m2.422s 00:08:57.161 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:57.161 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:57.161 ************************************ 00:08:57.161 END TEST nvmf_nmic 00:08:57.161 ************************************ 00:08:57.161 20:49:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:57.161 20:49:47 
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:57.161 20:49:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:57.161 20:49:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:57.161 ************************************ 00:08:57.161 START TEST nvmf_fio_target 00:08:57.161 ************************************ 00:08:57.161 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:57.161 * Looking for test storage... 00:08:57.161 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:57.161 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:57.161 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:08:57.161 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:57.161 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:57.161 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:57.161 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:57.161 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:57.161 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:08:57.161 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:08:57.161 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:08:57.161 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:08:57.161 
20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:08:57.161 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:57.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.162 --rc genhtml_branch_coverage=1 00:08:57.162 --rc genhtml_function_coverage=1 00:08:57.162 --rc genhtml_legend=1 00:08:57.162 --rc geninfo_all_blocks=1 00:08:57.162 --rc geninfo_unexecuted_blocks=1 00:08:57.162 00:08:57.162 ' 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:57.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.162 --rc genhtml_branch_coverage=1 00:08:57.162 --rc genhtml_function_coverage=1 00:08:57.162 --rc genhtml_legend=1 00:08:57.162 --rc geninfo_all_blocks=1 00:08:57.162 --rc geninfo_unexecuted_blocks=1 00:08:57.162 00:08:57.162 ' 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:57.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.162 --rc genhtml_branch_coverage=1 00:08:57.162 --rc genhtml_function_coverage=1 00:08:57.162 --rc genhtml_legend=1 00:08:57.162 --rc geninfo_all_blocks=1 00:08:57.162 --rc geninfo_unexecuted_blocks=1 00:08:57.162 00:08:57.162 ' 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:57.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.162 --rc genhtml_branch_coverage=1 00:08:57.162 --rc 
genhtml_function_coverage=1 00:08:57.162 --rc genhtml_legend=1 00:08:57.162 --rc geninfo_all_blocks=1 00:08:57.162 --rc geninfo_unexecuted_blocks=1 00:08:57.162 00:08:57.162 ' 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:57.162 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:08:57.162 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:59.064 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:59.064 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:08:59.064 20:49:49 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:59.064 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:59.064 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:59.064 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:59.064 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:59.064 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:08:59.064 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:59.064 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:08:59.064 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:08:59.064 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:08:59.064 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:08:59.064 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:08:59.064 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:08:59.064 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:59.064 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:59.064 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:59.064 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:59.064 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:59.064 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:59.064 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:59.064 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:59.064 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:59.064 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:59.064 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:59.064 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:59.064 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:59.064 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:59.064 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:59.064 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:59.064 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:59.064 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:59.064 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:59.064 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:59.064 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:59.064 20:49:49 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:59.064 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:59.064 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:59.064 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:59.064 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:59.064 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:59.064 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:59.064 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:59.064 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:59.064 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:59.064 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:59.064 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:59.064 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:59.064 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:59.064 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:59.064 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:59.064 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:59.064 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:59.064 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:59.064 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:59.064 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:59.064 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:59.064 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:59.064 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:59.064 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:59.064 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:59.064 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:59.064 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:59.064 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:59.065 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:59.065 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:59.065 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:59.065 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:59.065 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:59.065 Found net devices under 0000:0a:00.1: cvl_0_1 
00:08:59.065 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:59.065 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:59.065 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:08:59.065 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:59.065 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:59.065 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:59.065 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:59.065 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:59.065 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:59.065 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:59.065 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:59.065 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:59.065 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:59.065 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:59.065 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:59.065 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:59.065 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:08:59.065 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:59.065 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:59.065 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:59.065 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:59.065 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:59.065 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:59.065 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:59.065 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:59.065 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:59.065 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:59.065 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:59.065 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:59.065 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:59.065 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:08:59.065 00:08:59.065 --- 10.0.0.2 ping statistics --- 00:08:59.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:59.065 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:08:59.065 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:59.065 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:59.065 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:08:59.065 00:08:59.065 --- 10.0.0.1 ping statistics --- 00:08:59.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:59.065 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:08:59.065 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:59.065 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:08:59.065 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:59.065 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:59.065 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:59.065 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:59.065 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:59.065 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:59.065 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:59.065 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:08:59.065 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
00:08:59.065 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:59.065 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:59.065 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3898456 00:08:59.065 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:59.065 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3898456 00:08:59.065 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 3898456 ']' 00:08:59.065 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:59.065 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:59.065 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:59.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:59.065 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:59.065 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:59.324 [2024-11-26 20:49:50.041443] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:08:59.324 [2024-11-26 20:49:50.041532] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:59.324 [2024-11-26 20:49:50.120791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:59.324 [2024-11-26 20:49:50.184743] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:59.324 [2024-11-26 20:49:50.184804] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:59.324 [2024-11-26 20:49:50.184821] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:59.324 [2024-11-26 20:49:50.184833] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:59.324 [2024-11-26 20:49:50.184846] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:59.324 [2024-11-26 20:49:50.186489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:59.324 [2024-11-26 20:49:50.186563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:59.324 [2024-11-26 20:49:50.186610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:59.324 [2024-11-26 20:49:50.186613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.581 20:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:59.581 20:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:08:59.581 20:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:59.581 20:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:59.581 20:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:59.582 20:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:59.582 20:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:59.839 [2024-11-26 20:49:50.614154] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:59.839 20:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:00.097 20:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:00.097 20:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:00.355 20:49:51 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:00.355 20:49:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:00.613 20:49:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:00.613 20:49:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:00.870 20:49:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:00.870 20:49:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:01.435 20:49:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:01.435 20:49:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:01.435 20:49:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:01.693 20:49:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:01.693 20:49:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:02.258 20:49:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:02.258 20:49:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:09:02.515 20:49:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:02.773 20:49:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:02.773 20:49:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:03.030 20:49:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:03.030 20:49:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:03.287 20:49:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:03.545 [2024-11-26 20:49:54.237473] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:03.545 20:49:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:03.804 20:49:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:04.090 20:49:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:09:04.680 20:49:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:04.680 20:49:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:09:04.680 20:49:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:04.680 20:49:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:09:04.680 20:49:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:09:04.680 20:49:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:09:06.577 20:49:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:06.577 20:49:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:06.577 20:49:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:06.577 20:49:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:09:06.577 20:49:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:06.577 20:49:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:09:06.577 20:49:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:06.834 [global] 00:09:06.834 thread=1 00:09:06.834 invalidate=1 00:09:06.834 rw=write 00:09:06.834 time_based=1 00:09:06.834 runtime=1 00:09:06.834 ioengine=libaio 00:09:06.834 direct=1 00:09:06.834 bs=4096 00:09:06.834 iodepth=1 00:09:06.834 norandommap=0 00:09:06.834 numjobs=1 00:09:06.834 00:09:06.834 
verify_dump=1 00:09:06.834 verify_backlog=512 00:09:06.834 verify_state_save=0 00:09:06.834 do_verify=1 00:09:06.834 verify=crc32c-intel 00:09:06.834 [job0] 00:09:06.834 filename=/dev/nvme0n1 00:09:06.834 [job1] 00:09:06.834 filename=/dev/nvme0n2 00:09:06.834 [job2] 00:09:06.834 filename=/dev/nvme0n3 00:09:06.834 [job3] 00:09:06.834 filename=/dev/nvme0n4 00:09:06.834 Could not set queue depth (nvme0n1) 00:09:06.834 Could not set queue depth (nvme0n2) 00:09:06.834 Could not set queue depth (nvme0n3) 00:09:06.834 Could not set queue depth (nvme0n4) 00:09:06.834 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:06.834 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:06.834 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:06.834 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:06.834 fio-3.35 00:09:06.834 Starting 4 threads 00:09:08.207 00:09:08.207 job0: (groupid=0, jobs=1): err= 0: pid=3899547: Tue Nov 26 20:49:58 2024 00:09:08.207 read: IOPS=993, BW=3973KiB/s (4068kB/s)(4112KiB/1035msec) 00:09:08.207 slat (nsec): min=5574, max=44289, avg=12160.72, stdev=5632.77 00:09:08.207 clat (usec): min=246, max=41021, avg=627.45, stdev=3570.01 00:09:08.207 lat (usec): min=260, max=41036, avg=639.61, stdev=3571.00 00:09:08.207 clat percentiles (usec): 00:09:08.207 | 1.00th=[ 255], 5.00th=[ 277], 10.00th=[ 285], 20.00th=[ 293], 00:09:08.207 | 30.00th=[ 302], 40.00th=[ 306], 50.00th=[ 310], 60.00th=[ 310], 00:09:08.207 | 70.00th=[ 314], 80.00th=[ 318], 90.00th=[ 326], 95.00th=[ 363], 00:09:08.207 | 99.00th=[ 562], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:08.207 | 99.99th=[41157] 00:09:08.207 write: IOPS=1484, BW=5936KiB/s (6079kB/s)(6144KiB/1035msec); 0 zone resets 00:09:08.207 slat (nsec): min=6952, max=66415, avg=16528.15, 
stdev=7428.71 00:09:08.207 clat (usec): min=152, max=490, avg=222.21, stdev=51.52 00:09:08.207 lat (usec): min=163, max=527, avg=238.74, stdev=53.43 00:09:08.207 clat percentiles (usec): 00:09:08.207 | 1.00th=[ 157], 5.00th=[ 165], 10.00th=[ 172], 20.00th=[ 186], 00:09:08.207 | 30.00th=[ 192], 40.00th=[ 198], 50.00th=[ 206], 60.00th=[ 221], 00:09:08.207 | 70.00th=[ 239], 80.00th=[ 253], 90.00th=[ 289], 95.00th=[ 326], 00:09:08.207 | 99.00th=[ 416], 99.50th=[ 433], 99.90th=[ 482], 99.95th=[ 490], 00:09:08.207 | 99.99th=[ 490] 00:09:08.207 bw ( KiB/s): min= 4096, max= 8192, per=44.53%, avg=6144.00, stdev=2896.31, samples=2 00:09:08.207 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:09:08.207 lat (usec) : 250=47.46%, 500=51.72%, 750=0.51% 00:09:08.207 lat (msec) : 50=0.31% 00:09:08.207 cpu : usr=2.42%, sys=5.22%, ctx=2565, majf=0, minf=2 00:09:08.207 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:08.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:08.207 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:08.207 issued rwts: total=1028,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:08.207 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:08.207 job1: (groupid=0, jobs=1): err= 0: pid=3899548: Tue Nov 26 20:49:58 2024 00:09:08.207 read: IOPS=201, BW=805KiB/s (824kB/s)(836KiB/1039msec) 00:09:08.207 slat (nsec): min=5971, max=35654, avg=9411.54, stdev=6123.77 00:09:08.207 clat (usec): min=232, max=41351, avg=4364.71, stdev=12260.65 00:09:08.207 lat (usec): min=240, max=41358, avg=4374.13, stdev=12265.16 00:09:08.207 clat percentiles (usec): 00:09:08.207 | 1.00th=[ 237], 5.00th=[ 247], 10.00th=[ 251], 20.00th=[ 260], 00:09:08.207 | 30.00th=[ 265], 40.00th=[ 269], 50.00th=[ 281], 60.00th=[ 285], 00:09:08.207 | 70.00th=[ 297], 80.00th=[ 306], 90.00th=[40633], 95.00th=[41157], 00:09:08.207 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 
99.95th=[41157], 00:09:08.207 | 99.99th=[41157] 00:09:08.207 write: IOPS=492, BW=1971KiB/s (2018kB/s)(2048KiB/1039msec); 0 zone resets 00:09:08.207 slat (nsec): min=7939, max=51985, avg=14548.73, stdev=7150.72 00:09:08.207 clat (usec): min=162, max=483, avg=223.17, stdev=42.58 00:09:08.207 lat (usec): min=172, max=505, avg=237.72, stdev=45.18 00:09:08.207 clat percentiles (usec): 00:09:08.207 | 1.00th=[ 169], 5.00th=[ 174], 10.00th=[ 182], 20.00th=[ 194], 00:09:08.207 | 30.00th=[ 200], 40.00th=[ 208], 50.00th=[ 217], 60.00th=[ 223], 00:09:08.207 | 70.00th=[ 233], 80.00th=[ 245], 90.00th=[ 269], 95.00th=[ 318], 00:09:08.207 | 99.00th=[ 375], 99.50th=[ 400], 99.90th=[ 486], 99.95th=[ 486], 00:09:08.207 | 99.99th=[ 486] 00:09:08.207 bw ( KiB/s): min= 4096, max= 4096, per=29.69%, avg=4096.00, stdev= 0.00, samples=1 00:09:08.207 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:08.207 lat (usec) : 250=62.97%, 500=34.12% 00:09:08.207 lat (msec) : 50=2.91% 00:09:08.207 cpu : usr=0.29%, sys=1.45%, ctx=722, majf=0, minf=1 00:09:08.207 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:08.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:08.207 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:08.207 issued rwts: total=209,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:08.207 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:08.207 job2: (groupid=0, jobs=1): err= 0: pid=3899549: Tue Nov 26 20:49:58 2024 00:09:08.207 read: IOPS=948, BW=3792KiB/s (3883kB/s)(3800KiB/1002msec) 00:09:08.207 slat (nsec): min=5781, max=70187, avg=15859.64, stdev=11811.63 00:09:08.207 clat (usec): min=258, max=41597, avg=758.46, stdev=4145.41 00:09:08.207 lat (usec): min=264, max=41630, avg=774.32, stdev=4146.74 00:09:08.207 clat percentiles (usec): 00:09:08.207 | 1.00th=[ 273], 5.00th=[ 281], 10.00th=[ 285], 20.00th=[ 293], 00:09:08.207 | 30.00th=[ 297], 40.00th=[ 302], 
50.00th=[ 306], 60.00th=[ 322], 00:09:08.207 | 70.00th=[ 351], 80.00th=[ 375], 90.00th=[ 404], 95.00th=[ 474], 00:09:08.207 | 99.00th=[40109], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:09:08.207 | 99.99th=[41681] 00:09:08.207 write: IOPS=1021, BW=4088KiB/s (4186kB/s)(4096KiB/1002msec); 0 zone resets 00:09:08.207 slat (nsec): min=7147, max=61922, avg=16604.51, stdev=7469.21 00:09:08.207 clat (usec): min=160, max=455, avg=234.22, stdev=51.82 00:09:08.207 lat (usec): min=168, max=493, avg=250.82, stdev=52.85 00:09:08.207 clat percentiles (usec): 00:09:08.207 | 1.00th=[ 174], 5.00th=[ 184], 10.00th=[ 190], 20.00th=[ 196], 00:09:08.207 | 30.00th=[ 202], 40.00th=[ 210], 50.00th=[ 221], 60.00th=[ 231], 00:09:08.207 | 70.00th=[ 243], 80.00th=[ 265], 90.00th=[ 297], 95.00th=[ 347], 00:09:08.207 | 99.00th=[ 429], 99.50th=[ 441], 99.90th=[ 453], 99.95th=[ 457], 00:09:08.207 | 99.99th=[ 457] 00:09:08.207 bw ( KiB/s): min= 8192, max= 8192, per=59.37%, avg=8192.00, stdev= 0.00, samples=1 00:09:08.207 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:08.207 lat (usec) : 250=38.91%, 500=59.93%, 750=0.66% 00:09:08.207 lat (msec) : 50=0.51% 00:09:08.207 cpu : usr=1.80%, sys=4.20%, ctx=1974, majf=0, minf=2 00:09:08.207 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:08.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:08.207 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:08.207 issued rwts: total=950,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:08.207 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:08.207 job3: (groupid=0, jobs=1): err= 0: pid=3899551: Tue Nov 26 20:49:58 2024 00:09:08.207 read: IOPS=24, BW=99.6KiB/s (102kB/s)(100KiB/1004msec) 00:09:08.207 slat (nsec): min=8158, max=52747, avg=25769.64, stdev=12749.68 00:09:08.207 clat (usec): min=276, max=42415, avg=34761.26, stdev=15323.78 00:09:08.207 lat (usec): min=285, max=42434, 
avg=34787.03, stdev=15330.22 00:09:08.207 clat percentiles (usec): 00:09:08.207 | 1.00th=[ 277], 5.00th=[ 396], 10.00th=[ 408], 20.00th=[40633], 00:09:08.207 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:08.207 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:09:08.207 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:08.207 | 99.99th=[42206] 00:09:08.207 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:09:08.207 slat (nsec): min=7994, max=59820, avg=14353.54, stdev=7441.78 00:09:08.207 clat (usec): min=184, max=3738, avg=243.27, stdev=162.93 00:09:08.207 lat (usec): min=194, max=3773, avg=257.62, stdev=165.07 00:09:08.207 clat percentiles (usec): 00:09:08.207 | 1.00th=[ 190], 5.00th=[ 194], 10.00th=[ 196], 20.00th=[ 200], 00:09:08.207 | 30.00th=[ 204], 40.00th=[ 208], 50.00th=[ 215], 60.00th=[ 229], 00:09:08.207 | 70.00th=[ 251], 80.00th=[ 269], 90.00th=[ 306], 95.00th=[ 351], 00:09:08.207 | 99.00th=[ 445], 99.50th=[ 449], 99.90th=[ 3752], 99.95th=[ 3752], 00:09:08.207 | 99.99th=[ 3752] 00:09:08.207 bw ( KiB/s): min= 4096, max= 4096, per=29.69%, avg=4096.00, stdev= 0.00, samples=1 00:09:08.207 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:08.207 lat (usec) : 250=66.29%, 500=29.61% 00:09:08.207 lat (msec) : 4=0.19%, 50=3.91% 00:09:08.207 cpu : usr=0.40%, sys=1.00%, ctx=538, majf=0, minf=1 00:09:08.207 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:08.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:08.207 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:08.207 issued rwts: total=25,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:08.207 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:08.207 00:09:08.207 Run status group 0 (all jobs): 00:09:08.207 READ: bw=8516KiB/s (8720kB/s), 99.6KiB/s-3973KiB/s (102kB/s-4068kB/s), io=8848KiB 
(9060kB), run=1002-1039msec 00:09:08.207 WRITE: bw=13.5MiB/s (14.1MB/s), 1971KiB/s-5936KiB/s (2018kB/s-6079kB/s), io=14.0MiB (14.7MB), run=1002-1039msec 00:09:08.207 00:09:08.207 Disk stats (read/write): 00:09:08.207 nvme0n1: ios=1074/1423, merge=0/0, ticks=500/307, in_queue=807, util=86.77% 00:09:08.208 nvme0n2: ios=227/512, merge=0/0, ticks=1672/113, in_queue=1785, util=98.17% 00:09:08.208 nvme0n3: ios=945/1024, merge=0/0, ticks=530/219, in_queue=749, util=89.01% 00:09:08.208 nvme0n4: ios=78/512, merge=0/0, ticks=983/124, in_queue=1107, util=98.00% 00:09:08.208 20:49:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:08.208 [global] 00:09:08.208 thread=1 00:09:08.208 invalidate=1 00:09:08.208 rw=randwrite 00:09:08.208 time_based=1 00:09:08.208 runtime=1 00:09:08.208 ioengine=libaio 00:09:08.208 direct=1 00:09:08.208 bs=4096 00:09:08.208 iodepth=1 00:09:08.208 norandommap=0 00:09:08.208 numjobs=1 00:09:08.208 00:09:08.208 verify_dump=1 00:09:08.208 verify_backlog=512 00:09:08.208 verify_state_save=0 00:09:08.208 do_verify=1 00:09:08.208 verify=crc32c-intel 00:09:08.208 [job0] 00:09:08.208 filename=/dev/nvme0n1 00:09:08.208 [job1] 00:09:08.208 filename=/dev/nvme0n2 00:09:08.208 [job2] 00:09:08.208 filename=/dev/nvme0n3 00:09:08.208 [job3] 00:09:08.208 filename=/dev/nvme0n4 00:09:08.208 Could not set queue depth (nvme0n1) 00:09:08.208 Could not set queue depth (nvme0n2) 00:09:08.208 Could not set queue depth (nvme0n3) 00:09:08.208 Could not set queue depth (nvme0n4) 00:09:08.464 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:08.464 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:08.464 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 
00:09:08.464 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:08.464 fio-3.35 00:09:08.464 Starting 4 threads 00:09:09.835 00:09:09.835 job0: (groupid=0, jobs=1): err= 0: pid=3899781: Tue Nov 26 20:50:00 2024 00:09:09.835 read: IOPS=1025, BW=4103KiB/s (4202kB/s)(4128KiB/1006msec) 00:09:09.835 slat (nsec): min=5842, max=58005, avg=15837.33, stdev=7244.38 00:09:09.835 clat (usec): min=218, max=42073, avg=582.47, stdev=3426.38 00:09:09.835 lat (usec): min=228, max=42088, avg=598.31, stdev=3426.24 00:09:09.835 clat percentiles (usec): 00:09:09.835 | 1.00th=[ 229], 5.00th=[ 239], 10.00th=[ 247], 20.00th=[ 265], 00:09:09.835 | 30.00th=[ 277], 40.00th=[ 285], 50.00th=[ 289], 60.00th=[ 293], 00:09:09.835 | 70.00th=[ 302], 80.00th=[ 318], 90.00th=[ 334], 95.00th=[ 367], 00:09:09.835 | 99.00th=[ 824], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:09:09.835 | 99.99th=[42206] 00:09:09.835 write: IOPS=1526, BW=6107KiB/s (6254kB/s)(6144KiB/1006msec); 0 zone resets 00:09:09.835 slat (usec): min=6, max=21109, avg=27.54, stdev=538.30 00:09:09.835 clat (usec): min=154, max=1982, avg=217.29, stdev=81.55 00:09:09.835 lat (usec): min=162, max=21333, avg=244.82, stdev=544.62 00:09:09.835 clat percentiles (usec): 00:09:09.835 | 1.00th=[ 165], 5.00th=[ 176], 10.00th=[ 180], 20.00th=[ 186], 00:09:09.835 | 30.00th=[ 190], 40.00th=[ 194], 50.00th=[ 198], 60.00th=[ 206], 00:09:09.835 | 70.00th=[ 215], 80.00th=[ 225], 90.00th=[ 281], 95.00th=[ 355], 00:09:09.835 | 99.00th=[ 404], 99.50th=[ 429], 99.90th=[ 1860], 99.95th=[ 1991], 00:09:09.835 | 99.99th=[ 1991] 00:09:09.835 bw ( KiB/s): min= 4096, max= 8192, per=25.95%, avg=6144.00, stdev=2896.31, samples=2 00:09:09.835 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:09:09.835 lat (usec) : 250=57.67%, 500=41.67%, 750=0.12%, 1000=0.12% 00:09:09.835 lat (msec) : 2=0.12%, 10=0.04%, 50=0.27% 00:09:09.835 cpu : usr=1.89%, sys=3.98%, ctx=2572, majf=0, minf=1 
00:09:09.835 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:09.835 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:09.835 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:09.835 issued rwts: total=1032,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:09.835 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:09.835 job1: (groupid=0, jobs=1): err= 0: pid=3899782: Tue Nov 26 20:50:00 2024 00:09:09.835 read: IOPS=23, BW=93.7KiB/s (95.9kB/s)(96.0KiB/1025msec) 00:09:09.835 slat (nsec): min=14235, max=41155, avg=25474.83, stdev=9280.81 00:09:09.835 clat (usec): min=364, max=41169, avg=37573.46, stdev=11455.17 00:09:09.835 lat (usec): min=381, max=41190, avg=37598.94, stdev=11457.46 00:09:09.835 clat percentiles (usec): 00:09:09.835 | 1.00th=[ 363], 5.00th=[ 400], 10.00th=[40633], 20.00th=[40633], 00:09:09.835 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:09.835 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:09.835 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:09.836 | 99.99th=[41157] 00:09:09.836 write: IOPS=499, BW=1998KiB/s (2046kB/s)(2048KiB/1025msec); 0 zone resets 00:09:09.836 slat (nsec): min=6978, max=36421, avg=10876.75, stdev=5846.45 00:09:09.836 clat (usec): min=184, max=340, avg=224.59, stdev=15.43 00:09:09.836 lat (usec): min=201, max=371, avg=235.47, stdev=15.55 00:09:09.836 clat percentiles (usec): 00:09:09.836 | 1.00th=[ 194], 5.00th=[ 202], 10.00th=[ 208], 20.00th=[ 215], 00:09:09.836 | 30.00th=[ 217], 40.00th=[ 221], 50.00th=[ 223], 60.00th=[ 227], 00:09:09.836 | 70.00th=[ 231], 80.00th=[ 237], 90.00th=[ 245], 95.00th=[ 251], 00:09:09.836 | 99.00th=[ 265], 99.50th=[ 273], 99.90th=[ 343], 99.95th=[ 343], 00:09:09.836 | 99.99th=[ 343] 00:09:09.836 bw ( KiB/s): min= 4096, max= 4096, per=17.30%, avg=4096.00, stdev= 0.00, samples=1 00:09:09.836 iops : min= 1024, max= 
1024, avg=1024.00, stdev= 0.00, samples=1 00:09:09.836 lat (usec) : 250=90.49%, 500=5.41% 00:09:09.836 lat (msec) : 50=4.10% 00:09:09.836 cpu : usr=0.49%, sys=0.68%, ctx=536, majf=0, minf=1 00:09:09.836 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:09.836 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:09.836 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:09.836 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:09.836 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:09.836 job2: (groupid=0, jobs=1): err= 0: pid=3899783: Tue Nov 26 20:50:00 2024 00:09:09.836 read: IOPS=1556, BW=6226KiB/s (6375kB/s)(6232KiB/1001msec) 00:09:09.836 slat (nsec): min=5153, max=66506, avg=15510.13, stdev=8509.51 00:09:09.836 clat (usec): min=218, max=667, avg=312.20, stdev=51.06 00:09:09.836 lat (usec): min=237, max=685, avg=327.71, stdev=54.88 00:09:09.836 clat percentiles (usec): 00:09:09.836 | 1.00th=[ 247], 5.00th=[ 260], 10.00th=[ 269], 20.00th=[ 277], 00:09:09.836 | 30.00th=[ 285], 40.00th=[ 293], 50.00th=[ 302], 60.00th=[ 310], 00:09:09.836 | 70.00th=[ 318], 80.00th=[ 334], 90.00th=[ 375], 95.00th=[ 412], 00:09:09.836 | 99.00th=[ 498], 99.50th=[ 529], 99.90th=[ 594], 99.95th=[ 668], 00:09:09.836 | 99.99th=[ 668] 00:09:09.836 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:09.836 slat (nsec): min=6133, max=68542, avg=12116.42, stdev=5146.01 00:09:09.836 clat (usec): min=168, max=1396, avg=219.97, stdev=49.13 00:09:09.836 lat (usec): min=176, max=1409, avg=232.09, stdev=49.18 00:09:09.836 clat percentiles (usec): 00:09:09.836 | 1.00th=[ 176], 5.00th=[ 182], 10.00th=[ 188], 20.00th=[ 194], 00:09:09.836 | 30.00th=[ 198], 40.00th=[ 204], 50.00th=[ 210], 60.00th=[ 217], 00:09:09.836 | 70.00th=[ 223], 80.00th=[ 231], 90.00th=[ 262], 95.00th=[ 293], 00:09:09.836 | 99.00th=[ 396], 99.50th=[ 408], 99.90th=[ 529], 99.95th=[ 725], 
00:09:09.836 | 99.99th=[ 1401] 00:09:09.836 bw ( KiB/s): min= 8192, max= 8192, per=34.60%, avg=8192.00, stdev= 0.00, samples=1 00:09:09.836 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:09.836 lat (usec) : 250=50.44%, 500=49.06%, 750=0.47% 00:09:09.836 lat (msec) : 2=0.03% 00:09:09.836 cpu : usr=2.50%, sys=5.70%, ctx=3607, majf=0, minf=1 00:09:09.836 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:09.836 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:09.836 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:09.836 issued rwts: total=1558,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:09.836 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:09.836 job3: (groupid=0, jobs=1): err= 0: pid=3899784: Tue Nov 26 20:50:00 2024 00:09:09.836 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:09:09.836 slat (nsec): min=5455, max=70914, avg=19404.90, stdev=9912.20 00:09:09.836 clat (usec): min=229, max=580, avg=327.59, stdev=47.99 00:09:09.836 lat (usec): min=238, max=614, avg=347.00, stdev=51.32 00:09:09.836 clat percentiles (usec): 00:09:09.836 | 1.00th=[ 243], 5.00th=[ 258], 10.00th=[ 269], 20.00th=[ 293], 00:09:09.836 | 30.00th=[ 306], 40.00th=[ 314], 50.00th=[ 326], 60.00th=[ 334], 00:09:09.836 | 70.00th=[ 343], 80.00th=[ 359], 90.00th=[ 388], 95.00th=[ 404], 00:09:09.836 | 99.00th=[ 490], 99.50th=[ 506], 99.90th=[ 570], 99.95th=[ 578], 00:09:09.836 | 99.99th=[ 578] 00:09:09.836 write: IOPS=1969, BW=7876KiB/s (8065kB/s)(7884KiB/1001msec); 0 zone resets 00:09:09.836 slat (nsec): min=6212, max=69660, avg=14184.71, stdev=6371.06 00:09:09.836 clat (usec): min=163, max=1604, avg=214.50, stdev=53.42 00:09:09.836 lat (usec): min=173, max=1617, avg=228.69, stdev=53.49 00:09:09.836 clat percentiles (usec): 00:09:09.836 | 1.00th=[ 172], 5.00th=[ 176], 10.00th=[ 180], 20.00th=[ 186], 00:09:09.836 | 30.00th=[ 192], 40.00th=[ 200], 50.00th=[ 208], 
60.00th=[ 217], 00:09:09.836 | 70.00th=[ 225], 80.00th=[ 233], 90.00th=[ 247], 95.00th=[ 277], 00:09:09.836 | 99.00th=[ 367], 99.50th=[ 388], 99.90th=[ 1418], 99.95th=[ 1598], 00:09:09.836 | 99.99th=[ 1598] 00:09:09.836 bw ( KiB/s): min= 8192, max= 8192, per=34.60%, avg=8192.00, stdev= 0.00, samples=1 00:09:09.836 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:09.836 lat (usec) : 250=52.15%, 500=47.56%, 750=0.23% 00:09:09.836 lat (msec) : 2=0.06% 00:09:09.836 cpu : usr=2.80%, sys=6.40%, ctx=3507, majf=0, minf=1 00:09:09.836 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:09.836 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:09.836 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:09.836 issued rwts: total=1536,1971,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:09.836 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:09.836 00:09:09.836 Run status group 0 (all jobs): 00:09:09.836 READ: bw=15.8MiB/s (16.6MB/s), 93.7KiB/s-6226KiB/s (95.9kB/s-6375kB/s), io=16.2MiB (17.0MB), run=1001-1025msec 00:09:09.836 WRITE: bw=23.1MiB/s (24.2MB/s), 1998KiB/s-8184KiB/s (2046kB/s-8380kB/s), io=23.7MiB (24.8MB), run=1001-1025msec 00:09:09.836 00:09:09.836 Disk stats (read/write): 00:09:09.836 nvme0n1: ios=1069/1536, merge=0/0, ticks=1082/328, in_queue=1410, util=99.30% 00:09:09.836 nvme0n2: ios=32/512, merge=0/0, ticks=710/111, in_queue=821, util=87.01% 00:09:09.836 nvme0n3: ios=1472/1536, merge=0/0, ticks=440/332, in_queue=772, util=89.06% 00:09:09.836 nvme0n4: ios=1396/1536, merge=0/0, ticks=449/330, in_queue=779, util=89.72% 00:09:09.836 20:50:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:09.836 [global] 00:09:09.836 thread=1 00:09:09.836 invalidate=1 00:09:09.836 rw=write 00:09:09.836 time_based=1 00:09:09.836 runtime=1 
00:09:09.836 ioengine=libaio 00:09:09.836 direct=1 00:09:09.836 bs=4096 00:09:09.836 iodepth=128 00:09:09.836 norandommap=0 00:09:09.836 numjobs=1 00:09:09.836 00:09:09.836 verify_dump=1 00:09:09.836 verify_backlog=512 00:09:09.836 verify_state_save=0 00:09:09.836 do_verify=1 00:09:09.836 verify=crc32c-intel 00:09:09.836 [job0] 00:09:09.836 filename=/dev/nvme0n1 00:09:09.836 [job1] 00:09:09.836 filename=/dev/nvme0n2 00:09:09.836 [job2] 00:09:09.836 filename=/dev/nvme0n3 00:09:09.836 [job3] 00:09:09.836 filename=/dev/nvme0n4 00:09:09.836 Could not set queue depth (nvme0n1) 00:09:09.836 Could not set queue depth (nvme0n2) 00:09:09.836 Could not set queue depth (nvme0n3) 00:09:09.836 Could not set queue depth (nvme0n4) 00:09:09.836 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:09.836 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:09.836 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:09.836 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:09.836 fio-3.35 00:09:09.836 Starting 4 threads 00:09:11.210 00:09:11.210 job0: (groupid=0, jobs=1): err= 0: pid=3900014: Tue Nov 26 20:50:01 2024 00:09:11.210 read: IOPS=2033, BW=8135KiB/s (8330kB/s)(8192KiB/1007msec) 00:09:11.210 slat (usec): min=3, max=23286, avg=241.26, stdev=1336.75 00:09:11.210 clat (msec): min=13, max=105, avg=27.86, stdev=12.98 00:09:11.210 lat (msec): min=13, max=105, avg=28.10, stdev=13.13 00:09:11.210 clat percentiles (msec): 00:09:11.210 | 1.00th=[ 17], 5.00th=[ 20], 10.00th=[ 21], 20.00th=[ 22], 00:09:11.210 | 30.00th=[ 23], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 26], 00:09:11.210 | 70.00th=[ 28], 80.00th=[ 31], 90.00th=[ 34], 95.00th=[ 54], 00:09:11.210 | 99.00th=[ 93], 99.50th=[ 99], 99.90th=[ 102], 99.95th=[ 102], 00:09:11.210 | 99.99th=[ 106] 
00:09:11.210 write: IOPS=2099, BW=8397KiB/s (8599kB/s)(8456KiB/1007msec); 0 zone resets 00:09:11.210 slat (usec): min=4, max=23211, avg=227.65, stdev=1055.47 00:09:11.210 clat (usec): min=5648, max=98326, avg=33207.35, stdev=14647.53 00:09:11.210 lat (usec): min=7074, max=98345, avg=33435.00, stdev=14723.71 00:09:11.210 clat percentiles (usec): 00:09:11.210 | 1.00th=[ 8717], 5.00th=[11600], 10.00th=[13173], 20.00th=[23200], 00:09:11.210 | 30.00th=[25297], 40.00th=[29754], 50.00th=[31851], 60.00th=[33817], 00:09:11.210 | 70.00th=[37487], 80.00th=[42206], 90.00th=[51119], 95.00th=[55313], 00:09:11.210 | 99.00th=[83362], 99.50th=[83362], 99.90th=[98042], 99.95th=[98042], 00:09:11.210 | 99.99th=[98042] 00:09:11.210 bw ( KiB/s): min= 8192, max= 8192, per=13.25%, avg=8192.00, stdev= 0.00, samples=2 00:09:11.210 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:09:11.210 lat (msec) : 10=1.30%, 20=8.24%, 50=82.20%, 100=8.10%, 250=0.17% 00:09:11.210 cpu : usr=4.08%, sys=4.17%, ctx=304, majf=0, minf=2 00:09:11.210 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:09:11.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:11.210 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:11.210 issued rwts: total=2048,2114,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:11.210 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:11.210 job1: (groupid=0, jobs=1): err= 0: pid=3900015: Tue Nov 26 20:50:01 2024 00:09:11.210 read: IOPS=5266, BW=20.6MiB/s (21.6MB/s)(21.5MiB/1044msec) 00:09:11.210 slat (usec): min=2, max=10191, avg=90.73, stdev=617.07 00:09:11.210 clat (usec): min=3883, max=54020, avg=12854.66, stdev=6579.95 00:09:11.210 lat (usec): min=3922, max=58203, avg=12945.39, stdev=6600.04 00:09:11.210 clat percentiles (usec): 00:09:11.210 | 1.00th=[ 7635], 5.00th=[ 8586], 10.00th=[ 9372], 20.00th=[10290], 00:09:11.210 | 30.00th=[10683], 40.00th=[11207], 50.00th=[11338], 
60.00th=[11469], 00:09:11.210 | 70.00th=[12387], 80.00th=[13960], 90.00th=[16319], 95.00th=[19268], 00:09:11.210 | 99.00th=[53740], 99.50th=[53740], 99.90th=[53740], 99.95th=[53740], 00:09:11.210 | 99.99th=[54264] 00:09:11.210 write: IOPS=5394, BW=21.1MiB/s (22.1MB/s)(22.0MiB/1044msec); 0 zone resets 00:09:11.210 slat (usec): min=3, max=9018, avg=76.89, stdev=449.48 00:09:11.210 clat (usec): min=2601, max=21591, avg=10916.34, stdev=1981.56 00:09:11.210 lat (usec): min=2609, max=21599, avg=10993.22, stdev=2024.50 00:09:11.210 clat percentiles (usec): 00:09:11.210 | 1.00th=[ 4686], 5.00th=[ 6718], 10.00th=[ 8225], 20.00th=[10028], 00:09:11.210 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11469], 60.00th=[11731], 00:09:11.210 | 70.00th=[11863], 80.00th=[11994], 90.00th=[12387], 95.00th=[12780], 00:09:11.210 | 99.00th=[15795], 99.50th=[18220], 99.90th=[21365], 99.95th=[21365], 00:09:11.210 | 99.99th=[21627] 00:09:11.210 bw ( KiB/s): min=20776, max=24280, per=36.44%, avg=22528.00, stdev=2477.70, samples=2 00:09:11.210 iops : min= 5194, max= 6070, avg=5632.00, stdev=619.43, samples=2 00:09:11.210 lat (msec) : 4=0.22%, 10=17.30%, 20=80.42%, 50=1.19%, 100=0.86% 00:09:11.210 cpu : usr=8.05%, sys=10.64%, ctx=512, majf=0, minf=1 00:09:11.210 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:09:11.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:11.210 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:11.210 issued rwts: total=5498,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:11.210 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:11.210 job2: (groupid=0, jobs=1): err= 0: pid=3900017: Tue Nov 26 20:50:01 2024 00:09:11.210 read: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec) 00:09:11.210 slat (usec): min=3, max=14706, avg=102.49, stdev=603.46 00:09:11.210 clat (usec): min=5022, max=29452, avg=13686.50, stdev=1899.07 00:09:11.210 lat (usec): min=5038, max=33834, 
avg=13788.99, stdev=1953.86 00:09:11.210 clat percentiles (usec): 00:09:11.210 | 1.00th=[ 9634], 5.00th=[10945], 10.00th=[11863], 20.00th=[12780], 00:09:11.210 | 30.00th=[13173], 40.00th=[13304], 50.00th=[13566], 60.00th=[13698], 00:09:11.210 | 70.00th=[14091], 80.00th=[14746], 90.00th=[15270], 95.00th=[16909], 00:09:11.210 | 99.00th=[18482], 99.50th=[22414], 99.90th=[25560], 99.95th=[25560], 00:09:11.210 | 99.99th=[29492] 00:09:11.210 write: IOPS=4998, BW=19.5MiB/s (20.5MB/s)(19.6MiB/1004msec); 0 zone resets 00:09:11.210 slat (usec): min=3, max=10573, avg=93.95, stdev=473.97 00:09:11.210 clat (usec): min=1435, max=22719, avg=12708.06, stdev=1835.27 00:09:11.211 lat (usec): min=1442, max=22767, avg=12802.01, stdev=1862.87 00:09:11.211 clat percentiles (usec): 00:09:11.211 | 1.00th=[ 4293], 5.00th=[ 9896], 10.00th=[10552], 20.00th=[12125], 00:09:11.211 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13042], 60.00th=[13173], 00:09:11.211 | 70.00th=[13304], 80.00th=[13566], 90.00th=[13960], 95.00th=[15270], 00:09:11.211 | 99.00th=[16909], 99.50th=[18744], 99.90th=[19006], 99.95th=[20317], 00:09:11.211 | 99.99th=[22676] 00:09:11.211 bw ( KiB/s): min=18648, max=20521, per=31.68%, avg=19584.50, stdev=1324.41, samples=2 00:09:11.211 iops : min= 4662, max= 5130, avg=4896.00, stdev=330.93, samples=2 00:09:11.211 lat (msec) : 2=0.06%, 4=0.11%, 10=3.54%, 20=95.78%, 50=0.50% 00:09:11.211 cpu : usr=6.88%, sys=11.76%, ctx=500, majf=0, minf=1 00:09:11.211 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:09:11.211 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:11.211 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:11.211 issued rwts: total=4608,5018,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:11.211 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:11.211 job3: (groupid=0, jobs=1): err= 0: pid=3900018: Tue Nov 26 20:50:01 2024 00:09:11.211 read: IOPS=3050, BW=11.9MiB/s 
(12.5MB/s)(12.0MiB/1007msec) 00:09:11.211 slat (usec): min=3, max=9206, avg=144.91, stdev=793.08 00:09:11.211 clat (usec): min=8483, max=30654, avg=19050.31, stdev=3587.93 00:09:11.211 lat (usec): min=8495, max=30670, avg=19195.22, stdev=3660.86 00:09:11.211 clat percentiles (usec): 00:09:11.211 | 1.00th=[ 8848], 5.00th=[13173], 10.00th=[13698], 20.00th=[16319], 00:09:11.211 | 30.00th=[17695], 40.00th=[18482], 50.00th=[19268], 60.00th=[20579], 00:09:11.211 | 70.00th=[21103], 80.00th=[21890], 90.00th=[23200], 95.00th=[23987], 00:09:11.211 | 99.00th=[27132], 99.50th=[27132], 99.90th=[28705], 99.95th=[29754], 00:09:11.211 | 99.99th=[30540] 00:09:11.211 write: IOPS=3346, BW=13.1MiB/s (13.7MB/s)(13.2MiB/1007msec); 0 zone resets 00:09:11.211 slat (usec): min=3, max=42911, avg=152.92, stdev=1088.15 00:09:11.211 clat (usec): min=5659, max=50418, avg=17921.37, stdev=5024.59 00:09:11.211 lat (usec): min=6955, max=73271, avg=18074.30, stdev=5170.33 00:09:11.211 clat percentiles (usec): 00:09:11.211 | 1.00th=[ 7898], 5.00th=[12387], 10.00th=[12649], 20.00th=[13960], 00:09:11.211 | 30.00th=[14877], 40.00th=[15664], 50.00th=[15926], 60.00th=[17957], 00:09:11.211 | 70.00th=[19268], 80.00th=[23200], 90.00th=[26346], 95.00th=[27395], 00:09:11.211 | 99.00th=[28967], 99.50th=[30540], 99.90th=[33424], 99.95th=[33424], 00:09:11.211 | 99.99th=[50594] 00:09:11.211 bw ( KiB/s): min=12464, max=13480, per=20.98%, avg=12972.00, stdev=718.42, samples=2 00:09:11.211 iops : min= 3116, max= 3370, avg=3243.00, stdev=179.61, samples=2 00:09:11.211 lat (msec) : 10=2.33%, 20=61.70%, 50=35.95%, 100=0.02% 00:09:11.211 cpu : usr=5.37%, sys=7.65%, ctx=275, majf=0, minf=1 00:09:11.211 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:09:11.211 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:11.211 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:11.211 issued rwts: total=3072,3370,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:09:11.211 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:11.211 00:09:11.211 Run status group 0 (all jobs): 00:09:11.211 READ: bw=57.0MiB/s (59.7MB/s), 8135KiB/s-20.6MiB/s (8330kB/s-21.6MB/s), io=59.5MiB (62.4MB), run=1004-1044msec 00:09:11.211 WRITE: bw=60.4MiB/s (63.3MB/s), 8397KiB/s-21.1MiB/s (8599kB/s-22.1MB/s), io=63.0MiB (66.1MB), run=1004-1044msec 00:09:11.211 00:09:11.211 Disk stats (read/write): 00:09:11.211 nvme0n1: ios=1586/1839, merge=0/0, ticks=16334/18626, in_queue=34960, util=87.37% 00:09:11.211 nvme0n2: ios=4658/4829, merge=0/0, ticks=46034/41994, in_queue=88028, util=90.86% 00:09:11.211 nvme0n3: ios=4114/4096, merge=0/0, ticks=27014/22022, in_queue=49036, util=100.00% 00:09:11.211 nvme0n4: ios=2589/3012, merge=0/0, ticks=20630/18657, in_queue=39287, util=100.00% 00:09:11.211 20:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:11.211 [global] 00:09:11.211 thread=1 00:09:11.211 invalidate=1 00:09:11.211 rw=randwrite 00:09:11.211 time_based=1 00:09:11.211 runtime=1 00:09:11.211 ioengine=libaio 00:09:11.211 direct=1 00:09:11.211 bs=4096 00:09:11.211 iodepth=128 00:09:11.211 norandommap=0 00:09:11.211 numjobs=1 00:09:11.211 00:09:11.211 verify_dump=1 00:09:11.211 verify_backlog=512 00:09:11.211 verify_state_save=0 00:09:11.211 do_verify=1 00:09:11.211 verify=crc32c-intel 00:09:11.211 [job0] 00:09:11.211 filename=/dev/nvme0n1 00:09:11.211 [job1] 00:09:11.211 filename=/dev/nvme0n2 00:09:11.211 [job2] 00:09:11.211 filename=/dev/nvme0n3 00:09:11.211 [job3] 00:09:11.211 filename=/dev/nvme0n4 00:09:11.211 Could not set queue depth (nvme0n1) 00:09:11.211 Could not set queue depth (nvme0n2) 00:09:11.211 Could not set queue depth (nvme0n3) 00:09:11.211 Could not set queue depth (nvme0n4) 00:09:11.469 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=128 00:09:11.469 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:11.469 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:11.469 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:11.469 fio-3.35 00:09:11.469 Starting 4 threads 00:09:12.842 00:09:12.842 job0: (groupid=0, jobs=1): err= 0: pid=3900260: Tue Nov 26 20:50:03 2024 00:09:12.842 read: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec) 00:09:12.842 slat (usec): min=2, max=3608, avg=88.99, stdev=397.15 00:09:12.842 clat (usec): min=9087, max=16410, avg=12200.39, stdev=1222.34 00:09:12.842 lat (usec): min=9249, max=16418, avg=12289.39, stdev=1179.64 00:09:12.842 clat percentiles (usec): 00:09:12.842 | 1.00th=[ 9634], 5.00th=[10290], 10.00th=[10814], 20.00th=[11469], 00:09:12.842 | 30.00th=[11600], 40.00th=[11863], 50.00th=[11994], 60.00th=[12256], 00:09:12.842 | 70.00th=[12518], 80.00th=[13042], 90.00th=[13960], 95.00th=[14615], 00:09:12.842 | 99.00th=[15401], 99.50th=[16057], 99.90th=[16450], 99.95th=[16450], 00:09:12.842 | 99.99th=[16450] 00:09:12.842 write: IOPS=5248, BW=20.5MiB/s (21.5MB/s)(20.6MiB/1003msec); 0 zone resets 00:09:12.842 slat (usec): min=3, max=24106, avg=92.02, stdev=533.95 00:09:12.842 clat (usec): min=2516, max=34290, avg=12061.28, stdev=3065.19 00:09:12.842 lat (usec): min=3365, max=34301, avg=12153.30, stdev=3062.33 00:09:12.842 clat percentiles (usec): 00:09:12.842 | 1.00th=[ 8356], 5.00th=[ 9634], 10.00th=[ 9896], 20.00th=[10290], 00:09:12.842 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11731], 60.00th=[11994], 00:09:12.842 | 70.00th=[12256], 80.00th=[12780], 90.00th=[14353], 95.00th=[15008], 00:09:12.842 | 99.00th=[31589], 99.50th=[33424], 99.90th=[34341], 99.95th=[34341], 00:09:12.842 | 99.99th=[34341] 00:09:12.842 bw ( KiB/s): min=20208, max=20888, per=30.28%, avg=20548.00, 
stdev=480.83, samples=2 00:09:12.842 iops : min= 5052, max= 5222, avg=5137.00, stdev=120.21, samples=2 00:09:12.842 lat (msec) : 4=0.09%, 10=7.90%, 20=91.21%, 50=0.81% 00:09:12.842 cpu : usr=8.68%, sys=9.88%, ctx=493, majf=0, minf=1 00:09:12.842 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:12.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:12.842 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:12.842 issued rwts: total=5120,5264,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:12.842 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:12.842 job1: (groupid=0, jobs=1): err= 0: pid=3900273: Tue Nov 26 20:50:03 2024 00:09:12.842 read: IOPS=5063, BW=19.8MiB/s (20.7MB/s)(19.9MiB/1006msec) 00:09:12.842 slat (usec): min=2, max=13014, avg=100.90, stdev=730.84 00:09:12.842 clat (usec): min=2617, max=35375, avg=13062.32, stdev=4619.92 00:09:12.842 lat (usec): min=4652, max=35387, avg=13163.22, stdev=4669.08 00:09:12.842 clat percentiles (usec): 00:09:12.842 | 1.00th=[ 6718], 5.00th=[ 8586], 10.00th=[ 8848], 20.00th=[ 9896], 00:09:12.842 | 30.00th=[10552], 40.00th=[10945], 50.00th=[11731], 60.00th=[12125], 00:09:12.842 | 70.00th=[13566], 80.00th=[16188], 90.00th=[18482], 95.00th=[22676], 00:09:12.842 | 99.00th=[33817], 99.50th=[34341], 99.90th=[34341], 99.95th=[34341], 00:09:12.842 | 99.99th=[35390] 00:09:12.842 write: IOPS=5089, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1006msec); 0 zone resets 00:09:12.842 slat (usec): min=3, max=16330, avg=84.60, stdev=550.07 00:09:12.842 clat (usec): min=1358, max=37900, avg=11936.81, stdev=4330.95 00:09:12.842 lat (usec): min=1373, max=37917, avg=12021.41, stdev=4376.04 00:09:12.842 clat percentiles (usec): 00:09:12.842 | 1.00th=[ 4555], 5.00th=[ 6063], 10.00th=[ 8455], 20.00th=[10028], 00:09:12.842 | 30.00th=[10421], 40.00th=[11207], 50.00th=[11338], 60.00th=[11469], 00:09:12.842 | 70.00th=[11731], 80.00th=[12387], 90.00th=[18220], 
95.00th=[22152], 00:09:12.842 | 99.00th=[27657], 99.50th=[28181], 99.90th=[32375], 99.95th=[32375], 00:09:12.842 | 99.99th=[38011] 00:09:12.842 bw ( KiB/s): min=18768, max=22236, per=30.21%, avg=20502.00, stdev=2452.25, samples=2 00:09:12.842 iops : min= 4692, max= 5559, avg=5125.50, stdev=613.06, samples=2 00:09:12.842 lat (msec) : 2=0.05%, 4=0.15%, 10=19.23%, 20=72.58%, 50=8.00% 00:09:12.842 cpu : usr=6.37%, sys=10.85%, ctx=538, majf=0, minf=2 00:09:12.842 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:12.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:12.842 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:12.842 issued rwts: total=5094,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:12.842 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:12.842 job2: (groupid=0, jobs=1): err= 0: pid=3900311: Tue Nov 26 20:50:03 2024 00:09:12.842 read: IOPS=3731, BW=14.6MiB/s (15.3MB/s)(14.6MiB/1004msec) 00:09:12.842 slat (usec): min=2, max=12090, avg=117.45, stdev=730.94 00:09:12.842 clat (usec): min=3116, max=36744, avg=15321.18, stdev=4185.79 00:09:12.842 lat (usec): min=3121, max=36754, avg=15438.63, stdev=4224.23 00:09:12.842 clat percentiles (usec): 00:09:12.843 | 1.00th=[ 5866], 5.00th=[10421], 10.00th=[12125], 20.00th=[12911], 00:09:12.843 | 30.00th=[13566], 40.00th=[13829], 50.00th=[14615], 60.00th=[15139], 00:09:12.843 | 70.00th=[16057], 80.00th=[17171], 90.00th=[19530], 95.00th=[22938], 00:09:12.843 | 99.00th=[32900], 99.50th=[34341], 99.90th=[36963], 99.95th=[36963], 00:09:12.843 | 99.99th=[36963] 00:09:12.843 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:09:12.843 slat (usec): min=3, max=14812, avg=120.73, stdev=732.67 00:09:12.843 clat (usec): min=4412, max=37042, avg=17034.13, stdev=6499.01 00:09:12.843 lat (usec): min=4432, max=37057, avg=17154.86, stdev=6552.68 00:09:12.843 clat percentiles (usec): 00:09:12.843 | 1.00th=[ 
6063], 5.00th=[ 9241], 10.00th=[10945], 20.00th=[11994], 00:09:12.843 | 30.00th=[12911], 40.00th=[14222], 50.00th=[15401], 60.00th=[15926], 00:09:12.843 | 70.00th=[18744], 80.00th=[23725], 90.00th=[25035], 95.00th=[30278], 00:09:12.843 | 99.00th=[36963], 99.50th=[36963], 99.90th=[36963], 99.95th=[36963], 00:09:12.843 | 99.99th=[36963] 00:09:12.843 bw ( KiB/s): min=15152, max=17616, per=24.14%, avg=16384.00, stdev=1742.31, samples=2 00:09:12.843 iops : min= 3788, max= 4404, avg=4096.00, stdev=435.58, samples=2 00:09:12.843 lat (msec) : 4=0.28%, 10=5.15%, 20=76.86%, 50=17.71% 00:09:12.843 cpu : usr=3.59%, sys=6.28%, ctx=406, majf=0, minf=1 00:09:12.843 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:12.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:12.843 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:12.843 issued rwts: total=3746,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:12.843 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:12.843 job3: (groupid=0, jobs=1): err= 0: pid=3900317: Tue Nov 26 20:50:03 2024 00:09:12.843 read: IOPS=2554, BW=9.98MiB/s (10.5MB/s)(10.0MiB/1002msec) 00:09:12.843 slat (usec): min=2, max=22663, avg=174.73, stdev=1120.93 00:09:12.843 clat (usec): min=4273, max=72992, avg=24656.18, stdev=14333.08 00:09:12.843 lat (usec): min=4296, max=73086, avg=24830.91, stdev=14382.56 00:09:12.843 clat percentiles (usec): 00:09:12.843 | 1.00th=[ 4686], 5.00th=[11076], 10.00th=[12911], 20.00th=[14484], 00:09:12.843 | 30.00th=[15008], 40.00th=[17957], 50.00th=[20317], 60.00th=[24773], 00:09:12.843 | 70.00th=[25297], 80.00th=[27919], 90.00th=[51643], 95.00th=[60556], 00:09:12.843 | 99.00th=[65274], 99.50th=[72877], 99.90th=[72877], 99.95th=[72877], 00:09:12.843 | 99.99th=[72877] 00:09:12.843 write: IOPS=2580, BW=10.1MiB/s (10.6MB/s)(10.1MiB/1002msec); 0 zone resets 00:09:12.843 slat (usec): min=3, max=31939, avg=195.52, stdev=1439.52 
00:09:12.843 clat (usec): min=1247, max=103212, avg=24753.20, stdev=19729.61 00:09:12.843 lat (usec): min=1897, max=103224, avg=24948.71, stdev=19828.21 00:09:12.843 clat percentiles (msec): 00:09:12.843 | 1.00th=[ 4], 5.00th=[ 11], 10.00th=[ 12], 20.00th=[ 14], 00:09:12.843 | 30.00th=[ 15], 40.00th=[ 16], 50.00th=[ 19], 60.00th=[ 21], 00:09:12.843 | 70.00th=[ 23], 80.00th=[ 29], 90.00th=[ 56], 95.00th=[ 71], 00:09:12.843 | 99.00th=[ 104], 99.50th=[ 104], 99.90th=[ 104], 99.95th=[ 104], 00:09:12.843 | 99.99th=[ 104] 00:09:12.843 bw ( KiB/s): min= 8376, max=12120, per=15.10%, avg=10248.00, stdev=2647.41, samples=2 00:09:12.843 iops : min= 2094, max= 3030, avg=2562.00, stdev=661.85, samples=2 00:09:12.843 lat (msec) : 2=0.14%, 4=1.85%, 10=2.20%, 20=49.30%, 50=35.89% 00:09:12.843 lat (msec) : 100=9.72%, 250=0.91% 00:09:12.843 cpu : usr=3.00%, sys=4.70%, ctx=230, majf=0, minf=1 00:09:12.843 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:09:12.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:12.843 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:12.843 issued rwts: total=2560,2586,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:12.843 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:12.843 00:09:12.843 Run status group 0 (all jobs): 00:09:12.843 READ: bw=64.1MiB/s (67.3MB/s), 9.98MiB/s-19.9MiB/s (10.5MB/s-20.9MB/s), io=64.5MiB (67.7MB), run=1002-1006msec 00:09:12.843 WRITE: bw=66.3MiB/s (69.5MB/s), 10.1MiB/s-20.5MiB/s (10.6MB/s-21.5MB/s), io=66.7MiB (69.9MB), run=1002-1006msec 00:09:12.843 00:09:12.843 Disk stats (read/write): 00:09:12.843 nvme0n1: ios=4404/4608, merge=0/0, ticks=12565/12508, in_queue=25073, util=86.97% 00:09:12.843 nvme0n2: ios=4137/4431, merge=0/0, ticks=51284/51544, in_queue=102828, util=89.53% 00:09:12.843 nvme0n3: ios=3090/3447, merge=0/0, ticks=36191/49234, in_queue=85425, util=97.91% 00:09:12.843 nvme0n4: ios=1792/2048, merge=0/0, 
ticks=21060/22893, in_queue=43953, util=89.67% 00:09:12.843 20:50:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:12.843 20:50:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3900455 00:09:12.843 20:50:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:12.843 20:50:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:12.843 [global] 00:09:12.843 thread=1 00:09:12.843 invalidate=1 00:09:12.843 rw=read 00:09:12.843 time_based=1 00:09:12.843 runtime=10 00:09:12.843 ioengine=libaio 00:09:12.843 direct=1 00:09:12.843 bs=4096 00:09:12.843 iodepth=1 00:09:12.843 norandommap=1 00:09:12.843 numjobs=1 00:09:12.843 00:09:12.843 [job0] 00:09:12.843 filename=/dev/nvme0n1 00:09:12.843 [job1] 00:09:12.843 filename=/dev/nvme0n2 00:09:12.843 [job2] 00:09:12.843 filename=/dev/nvme0n3 00:09:12.843 [job3] 00:09:12.843 filename=/dev/nvme0n4 00:09:12.843 Could not set queue depth (nvme0n1) 00:09:12.843 Could not set queue depth (nvme0n2) 00:09:12.843 Could not set queue depth (nvme0n3) 00:09:12.843 Could not set queue depth (nvme0n4) 00:09:12.843 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:12.843 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:12.843 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:12.843 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:12.843 fio-3.35 00:09:12.843 Starting 4 threads 00:09:16.119 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:16.119 20:50:06 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:16.119 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=40820736, buflen=4096 00:09:16.119 fio: pid=3900603, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:16.119 20:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:16.119 20:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:16.119 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=31244288, buflen=4096 00:09:16.119 fio: pid=3900602, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:16.686 20:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:16.686 20:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:16.686 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=348160, buflen=4096 00:09:16.686 fio: pid=3900600, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:16.686 20:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:16.686 20:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:16.686 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=405504, buflen=4096 00:09:16.686 fio: pid=3900601, err=95/file:io_u.c:1889, func=io_u 
error, error=Operation not supported 00:09:16.944 00:09:16.944 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3900600: Tue Nov 26 20:50:07 2024 00:09:16.944 read: IOPS=24, BW=95.9KiB/s (98.2kB/s)(340KiB/3546msec) 00:09:16.944 slat (usec): min=12, max=10922, avg=274.18, stdev=1651.60 00:09:16.944 clat (usec): min=576, max=42128, avg=41153.89, stdev=4478.61 00:09:16.944 lat (usec): min=600, max=53036, avg=41430.88, stdev=4800.41 00:09:16.944 clat percentiles (usec): 00:09:16.944 | 1.00th=[ 578], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:09:16.944 | 30.00th=[41157], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:09:16.944 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:16.944 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:16.944 | 99.99th=[42206] 00:09:16.944 bw ( KiB/s): min= 88, max= 104, per=0.52%, avg=97.33, stdev= 6.02, samples=6 00:09:16.944 iops : min= 22, max= 26, avg=24.33, stdev= 1.51, samples=6 00:09:16.944 lat (usec) : 750=1.16% 00:09:16.944 lat (msec) : 50=97.67% 00:09:16.944 cpu : usr=0.00%, sys=0.06%, ctx=91, majf=0, minf=1 00:09:16.944 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:16.944 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:16.944 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:16.944 issued rwts: total=86,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:16.944 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:16.944 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3900601: Tue Nov 26 20:50:07 2024 00:09:16.945 read: IOPS=26, BW=103KiB/s (106kB/s)(396KiB/3833msec) 00:09:16.945 slat (usec): min=12, max=21905, avg=368.15, stdev=2364.70 00:09:16.945 clat (usec): min=383, max=41198, avg=38098.76, stdev=10418.98 00:09:16.945 lat (usec): min=419, max=63024, 
avg=38470.47, stdev=10783.96 00:09:16.945 clat percentiles (usec): 00:09:16.945 | 1.00th=[ 383], 5.00th=[ 578], 10.00th=[40633], 20.00th=[41157], 00:09:16.945 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:16.945 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:16.945 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:16.945 | 99.99th=[41157] 00:09:16.945 bw ( KiB/s): min= 92, max= 120, per=0.56%, avg=104.57, stdev= 9.36, samples=7 00:09:16.945 iops : min= 23, max= 30, avg=26.14, stdev= 2.34, samples=7 00:09:16.945 lat (usec) : 500=3.00%, 750=4.00% 00:09:16.945 lat (msec) : 50=92.00% 00:09:16.945 cpu : usr=0.00%, sys=0.10%, ctx=104, majf=0, minf=2 00:09:16.945 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:16.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:16.945 complete : 0=1.0%, 4=99.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:16.945 issued rwts: total=100,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:16.945 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:16.945 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3900602: Tue Nov 26 20:50:07 2024 00:09:16.945 read: IOPS=2344, BW=9377KiB/s (9602kB/s)(29.8MiB/3254msec) 00:09:16.945 slat (nsec): min=4222, max=67525, avg=14907.36, stdev=8814.06 00:09:16.945 clat (usec): min=233, max=41545, avg=405.11, stdev=2084.39 00:09:16.945 lat (usec): min=238, max=41564, avg=420.02, stdev=2085.07 00:09:16.945 clat percentiles (usec): 00:09:16.945 | 1.00th=[ 245], 5.00th=[ 251], 10.00th=[ 258], 20.00th=[ 269], 00:09:16.945 | 30.00th=[ 277], 40.00th=[ 285], 50.00th=[ 293], 60.00th=[ 302], 00:09:16.945 | 70.00th=[ 310], 80.00th=[ 322], 90.00th=[ 334], 95.00th=[ 347], 00:09:16.945 | 99.00th=[ 449], 99.50th=[ 553], 99.90th=[41157], 99.95th=[41157], 00:09:16.945 | 99.99th=[41681] 00:09:16.945 bw ( KiB/s): min= 
104, max=12976, per=48.94%, avg=9080.00, stdev=4905.89, samples=6 00:09:16.945 iops : min= 26, max= 3244, avg=2270.00, stdev=1226.47, samples=6 00:09:16.945 lat (usec) : 250=4.22%, 500=95.06%, 750=0.41%, 1000=0.01% 00:09:16.945 lat (msec) : 10=0.03%, 50=0.26% 00:09:16.945 cpu : usr=1.57%, sys=4.49%, ctx=7629, majf=0, minf=2 00:09:16.945 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:16.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:16.945 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:16.945 issued rwts: total=7629,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:16.945 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:16.945 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3900603: Tue Nov 26 20:50:07 2024 00:09:16.945 read: IOPS=3368, BW=13.2MiB/s (13.8MB/s)(38.9MiB/2959msec) 00:09:16.945 slat (nsec): min=5442, max=69087, avg=12716.41, stdev=5779.52 00:09:16.945 clat (usec): min=223, max=1343, avg=277.89, stdev=29.28 00:09:16.945 lat (usec): min=230, max=1360, avg=290.61, stdev=32.24 00:09:16.945 clat percentiles (usec): 00:09:16.945 | 1.00th=[ 237], 5.00th=[ 245], 10.00th=[ 249], 20.00th=[ 255], 00:09:16.945 | 30.00th=[ 265], 40.00th=[ 273], 50.00th=[ 281], 60.00th=[ 285], 00:09:16.945 | 70.00th=[ 289], 80.00th=[ 293], 90.00th=[ 302], 95.00th=[ 310], 00:09:16.945 | 99.00th=[ 334], 99.50th=[ 404], 99.90th=[ 594], 99.95th=[ 627], 00:09:16.945 | 99.99th=[ 1352] 00:09:16.945 bw ( KiB/s): min=12776, max=14608, per=72.68%, avg=13484.80, stdev=762.49, samples=5 00:09:16.945 iops : min= 3194, max= 3652, avg=3371.20, stdev=190.62, samples=5 00:09:16.945 lat (usec) : 250=11.92%, 500=87.82%, 750=0.23%, 1000=0.01% 00:09:16.945 lat (msec) : 2=0.01% 00:09:16.945 cpu : usr=2.74%, sys=6.80%, ctx=9967, majf=0, minf=1 00:09:16.945 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:16.945 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:16.945 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:16.945 issued rwts: total=9967,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:16.945 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:16.945 00:09:16.945 Run status group 0 (all jobs): 00:09:16.945 READ: bw=18.1MiB/s (19.0MB/s), 95.9KiB/s-13.2MiB/s (98.2kB/s-13.8MB/s), io=69.4MiB (72.8MB), run=2959-3833msec 00:09:16.945 00:09:16.945 Disk stats (read/write): 00:09:16.945 nvme0n1: ios=98/0, merge=0/0, ticks=3590/0, in_queue=3590, util=99.49% 00:09:16.945 nvme0n2: ios=137/0, merge=0/0, ticks=4648/0, in_queue=4648, util=99.12% 00:09:16.945 nvme0n3: ios=7181/0, merge=0/0, ticks=2924/0, in_queue=2924, util=96.76% 00:09:16.945 nvme0n4: ios=9693/0, merge=0/0, ticks=2546/0, in_queue=2546, util=96.72% 00:09:17.203 20:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:17.203 20:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:17.461 20:50:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:17.461 20:50:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:17.719 20:50:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:17.719 20:50:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:17.976 20:50:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for 
malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:17.976 20:50:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:18.234 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:18.234 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 3900455 00:09:18.234 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:18.234 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:18.234 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:18.234 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:18.234 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:09:18.234 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:18.234 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:18.234 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:18.234 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:18.234 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:09:18.234 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:18.234 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:18.234 nvmf hotplug test: fio failed as expected 00:09:18.234 20:50:09 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:18.798 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:18.798 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:18.798 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:18.798 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:18.798 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:18.798 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:18.798 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:18.798 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:18.798 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:18.798 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:18.798 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:18.798 rmmod nvme_tcp 00:09:18.798 rmmod nvme_fabrics 00:09:18.798 rmmod nvme_keyring 00:09:18.798 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:18.798 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:18.798 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:18.798 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3898456 ']' 00:09:18.798 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@518 -- # killprocess 3898456 00:09:18.798 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 3898456 ']' 00:09:18.798 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 3898456 00:09:18.798 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:09:18.798 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:18.798 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3898456 00:09:18.798 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:18.798 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:18.798 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3898456' 00:09:18.798 killing process with pid 3898456 00:09:18.798 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 3898456 00:09:18.798 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 3898456 00:09:19.058 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:19.058 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:19.058 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:19.058 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:09:19.058 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:09:19.058 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:19.058 20:50:09 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:09:19.058 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:19.058 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:19.058 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.058 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:19.058 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:20.963 20:50:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:20.963 00:09:20.963 real 0m24.326s 00:09:20.963 user 1m25.649s 00:09:20.963 sys 0m7.103s 00:09:20.963 20:50:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:20.963 20:50:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:20.963 ************************************ 00:09:20.963 END TEST nvmf_fio_target 00:09:20.963 ************************************ 00:09:20.963 20:50:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:20.963 20:50:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:20.963 20:50:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:20.963 20:50:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:21.223 ************************************ 00:09:21.223 START TEST nvmf_bdevio 00:09:21.223 ************************************ 00:09:21.223 20:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:21.223 * Looking for test storage... 00:09:21.223 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:21.223 20:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:21.223 20:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:09:21.223 20:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:21.223 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:21.223 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:21.223 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:21.223 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:21.223 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:21.223 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:21.223 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:21.223 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:21.223 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:21.223 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:21.223 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:21.223 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:21.223 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:21.223 
20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:21.223 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:21.223 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:21.223 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:21.223 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:21.223 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:21.223 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:21.223 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:21.223 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:21.223 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:21.223 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:21.223 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:21.223 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:21.223 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:21.223 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:21.223 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:21.223 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:21.223 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:21.223 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:09:21.223 --rc genhtml_branch_coverage=1 00:09:21.223 --rc genhtml_function_coverage=1 00:09:21.223 --rc genhtml_legend=1 00:09:21.223 --rc geninfo_all_blocks=1 00:09:21.223 --rc geninfo_unexecuted_blocks=1 00:09:21.223 00:09:21.223 ' 00:09:21.224 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:21.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.224 --rc genhtml_branch_coverage=1 00:09:21.224 --rc genhtml_function_coverage=1 00:09:21.224 --rc genhtml_legend=1 00:09:21.224 --rc geninfo_all_blocks=1 00:09:21.224 --rc geninfo_unexecuted_blocks=1 00:09:21.224 00:09:21.224 ' 00:09:21.224 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:21.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.224 --rc genhtml_branch_coverage=1 00:09:21.224 --rc genhtml_function_coverage=1 00:09:21.224 --rc genhtml_legend=1 00:09:21.224 --rc geninfo_all_blocks=1 00:09:21.224 --rc geninfo_unexecuted_blocks=1 00:09:21.224 00:09:21.224 ' 00:09:21.224 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:21.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.224 --rc genhtml_branch_coverage=1 00:09:21.224 --rc genhtml_function_coverage=1 00:09:21.224 --rc genhtml_legend=1 00:09:21.224 --rc geninfo_all_blocks=1 00:09:21.224 --rc geninfo_unexecuted_blocks=1 00:09:21.224 00:09:21.224 ' 00:09:21.224 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:21.224 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:21.224 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:21.224 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 
00:09:21.224 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:21.224 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:21.224 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:21.224 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:21.224 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:21.224 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:21.224 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:21.224 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:21.224 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:21.224 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:21.224 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:21.224 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:21.224 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:21.224 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:21.224 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:21.224 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:21.224 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:21.224 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:21.224 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:21.224 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.224 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.224 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.224 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:21.224 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.224 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:21.224 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:21.224 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:21.224 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:21.224 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:21.224 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:21.224 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:21.224 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:21.224 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:21.224 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:21.224 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:21.224 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:21.224 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:21.224 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:09:21.224 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:21.224 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:21.224 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:21.224 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:21.224 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:21.224 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:21.224 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:21.224 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:21.224 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:21.224 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:21.224 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:09:21.224 20:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:23.125 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:23.125 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:09:23.125 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:23.125 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:23.125 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:23.125 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:23.125 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:23.125 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:09:23.125 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:23.125 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:09:23.125 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:09:23.125 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:09:23.125 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:09:23.125 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:09:23.125 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:09:23.125 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:23.125 20:50:14 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:23.125 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:23.125 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:23.125 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:23.125 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:23.125 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:23.125 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:23.125 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:23.125 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:23.125 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:23.125 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:23.125 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:23.125 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:23.125 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:23.125 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:23.125 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:23.125 20:50:14 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:23.125 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:23.125 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:23.125 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:23.125 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:23.125 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:23.125 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:23.125 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:23.125 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:23.125 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:23.125 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:23.125 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:23.125 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:23.125 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:23.125 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:23.125 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:23.125 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:23.125 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:23.125 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:23.126 
20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:23.126 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:23.126 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:23.126 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:23.126 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:23.126 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:23.126 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:23.126 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:23.126 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:23.126 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:23.126 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:23.126 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:23.126 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:23.126 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:23.126 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:23.126 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:23.126 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:23.126 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:23.126 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:23.126 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:23.126 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:23.126 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:23.126 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:09:23.126 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:23.126 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:23.126 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:23.126 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:23.126 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:23.126 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:23.126 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:23.126 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:23.126 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:23.126 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:23.126 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:23.126 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:23.126 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:23.126 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:23.126 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:23.126 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:23.126 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:23.397 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:23.397 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:23.397 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:23.397 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:23.397 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:23.397 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:23.397 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:23.397 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:23.397 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:23.397 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:23.397 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:09:23.397 00:09:23.397 --- 10.0.0.2 ping statistics --- 00:09:23.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:23.397 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:09:23.397 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:23.397 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:23.397 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:09:23.397 00:09:23.397 --- 10.0.0.1 ping statistics --- 00:09:23.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:23.397 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:09:23.397 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:23.397 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:09:23.397 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:23.397 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:23.397 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:23.397 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:23.397 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:23.397 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:23.397 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:23.398 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:23.398 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:23.398 20:50:14 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:23.398 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:23.398 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=3903238 00:09:23.398 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:23.398 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3903238 00:09:23.398 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 3903238 ']' 00:09:23.398 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:23.398 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:23.398 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:23.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:23.398 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:23.398 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:23.398 [2024-11-26 20:50:14.257065] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:09:23.398 [2024-11-26 20:50:14.257146] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:23.656 [2024-11-26 20:50:14.335901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:23.656 [2024-11-26 20:50:14.401502] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:23.656 [2024-11-26 20:50:14.401564] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:23.656 [2024-11-26 20:50:14.401581] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:23.656 [2024-11-26 20:50:14.401594] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:23.656 [2024-11-26 20:50:14.401605] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:23.656 [2024-11-26 20:50:14.403289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:23.656 [2024-11-26 20:50:14.403344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:23.656 [2024-11-26 20:50:14.403399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:23.656 [2024-11-26 20:50:14.403403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:23.656 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:23.656 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:09:23.656 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:23.656 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:23.657 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:23.657 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:23.657 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:23.657 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.657 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:23.657 [2024-11-26 20:50:14.550437] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:23.657 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.657 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:23.657 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.657 20:50:14 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:23.657 Malloc0 00:09:23.657 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.657 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:23.657 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.657 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:23.913 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.913 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:23.913 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.913 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:23.913 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.913 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:23.913 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.913 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:23.913 [2024-11-26 20:50:14.611942] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:23.913 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.913 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:09:23.913 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:23.913 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:09:23.913 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:09:23.913 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:23.913 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:23.913 { 00:09:23.913 "params": { 00:09:23.913 "name": "Nvme$subsystem", 00:09:23.913 "trtype": "$TEST_TRANSPORT", 00:09:23.913 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:23.913 "adrfam": "ipv4", 00:09:23.913 "trsvcid": "$NVMF_PORT", 00:09:23.913 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:23.913 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:23.913 "hdgst": ${hdgst:-false}, 00:09:23.913 "ddgst": ${ddgst:-false} 00:09:23.913 }, 00:09:23.913 "method": "bdev_nvme_attach_controller" 00:09:23.913 } 00:09:23.913 EOF 00:09:23.913 )") 00:09:23.913 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:09:23.913 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:09:23.913 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:09:23.913 20:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:23.913 "params": { 00:09:23.913 "name": "Nvme1", 00:09:23.913 "trtype": "tcp", 00:09:23.913 "traddr": "10.0.0.2", 00:09:23.913 "adrfam": "ipv4", 00:09:23.913 "trsvcid": "4420", 00:09:23.913 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:23.913 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:23.913 "hdgst": false, 00:09:23.913 "ddgst": false 00:09:23.913 }, 00:09:23.913 "method": "bdev_nvme_attach_controller" 00:09:23.913 }' 00:09:23.913 [2024-11-26 20:50:14.661553] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:09:23.913 [2024-11-26 20:50:14.661626] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3903269 ] 00:09:23.913 [2024-11-26 20:50:14.735465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:23.913 [2024-11-26 20:50:14.798364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:23.913 [2024-11-26 20:50:14.798418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:23.913 [2024-11-26 20:50:14.798421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.170 I/O targets: 00:09:24.170 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:24.170 00:09:24.170 00:09:24.170 CUnit - A unit testing framework for C - Version 2.1-3 00:09:24.170 http://cunit.sourceforge.net/ 00:09:24.170 00:09:24.170 00:09:24.170 Suite: bdevio tests on: Nvme1n1 00:09:24.426 Test: blockdev write read block ...passed 00:09:24.426 Test: blockdev write zeroes read block ...passed 00:09:24.426 Test: blockdev write zeroes read no split ...passed 00:09:24.426 Test: blockdev write zeroes read split 
...passed 00:09:24.426 Test: blockdev write zeroes read split partial ...passed 00:09:24.426 Test: blockdev reset ...[2024-11-26 20:50:15.226609] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:09:24.426 [2024-11-26 20:50:15.226733] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8dfcb0 (9): Bad file descriptor 00:09:24.426 [2024-11-26 20:50:15.243885] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:09:24.426 passed 00:09:24.426 Test: blockdev write read 8 blocks ...passed 00:09:24.426 Test: blockdev write read size > 128k ...passed 00:09:24.426 Test: blockdev write read invalid size ...passed 00:09:24.426 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:24.426 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:24.426 Test: blockdev write read max offset ...passed 00:09:24.682 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:24.682 Test: blockdev writev readv 8 blocks ...passed 00:09:24.682 Test: blockdev writev readv 30 x 1block ...passed 00:09:24.682 Test: blockdev writev readv block ...passed 00:09:24.682 Test: blockdev writev readv size > 128k ...passed 00:09:24.682 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:24.682 Test: blockdev comparev and writev ...[2024-11-26 20:50:15.417630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:24.682 [2024-11-26 20:50:15.417669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:24.682 [2024-11-26 20:50:15.417717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:24.682 [2024-11-26 
20:50:15.417746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:24.682 [2024-11-26 20:50:15.418123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:24.682 [2024-11-26 20:50:15.418151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:24.682 [2024-11-26 20:50:15.418187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:24.682 [2024-11-26 20:50:15.418216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:24.682 [2024-11-26 20:50:15.418600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:24.682 [2024-11-26 20:50:15.418628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:24.682 [2024-11-26 20:50:15.418664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:24.682 [2024-11-26 20:50:15.418701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:24.682 [2024-11-26 20:50:15.419089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:24.682 [2024-11-26 20:50:15.419129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:24.682 [2024-11-26 20:50:15.419166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:09:24.682 [2024-11-26 20:50:15.419193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:24.682 passed 00:09:24.682 Test: blockdev nvme passthru rw ...passed 00:09:24.682 Test: blockdev nvme passthru vendor specific ...[2024-11-26 20:50:15.502999] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:24.682 [2024-11-26 20:50:15.503028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:24.682 [2024-11-26 20:50:15.503212] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:24.682 [2024-11-26 20:50:15.503243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:24.682 [2024-11-26 20:50:15.503415] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:24.682 [2024-11-26 20:50:15.503442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:24.682 [2024-11-26 20:50:15.503617] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:24.682 [2024-11-26 20:50:15.503642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:24.682 passed 00:09:24.682 Test: blockdev nvme admin passthru ...passed 00:09:24.682 Test: blockdev copy ...passed 00:09:24.682 00:09:24.682 Run Summary: Type Total Ran Passed Failed Inactive 00:09:24.682 suites 1 1 n/a 0 0 00:09:24.682 tests 23 23 23 0 0 00:09:24.682 asserts 152 152 152 0 n/a 00:09:24.682 00:09:24.682 Elapsed time = 0.985 seconds 
00:09:24.951 20:50:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:24.951 20:50:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.951 20:50:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:24.951 20:50:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.951 20:50:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:24.951 20:50:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:24.951 20:50:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:24.951 20:50:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:09:24.951 20:50:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:24.951 20:50:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:09:24.951 20:50:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:24.951 20:50:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:24.951 rmmod nvme_tcp 00:09:24.951 rmmod nvme_fabrics 00:09:24.951 rmmod nvme_keyring 00:09:24.951 20:50:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:24.951 20:50:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:09:24.951 20:50:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:09:24.951 20:50:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3903238 ']' 00:09:24.951 20:50:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3903238 00:09:24.951 20:50:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 
-- # '[' -z 3903238 ']' 00:09:24.951 20:50:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 3903238 00:09:24.951 20:50:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:09:24.951 20:50:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:24.951 20:50:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3903238 00:09:24.951 20:50:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:09:24.951 20:50:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:09:24.951 20:50:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3903238' 00:09:24.951 killing process with pid 3903238 00:09:24.951 20:50:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 3903238 00:09:24.951 20:50:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 3903238 00:09:25.210 20:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:25.210 20:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:25.210 20:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:25.210 20:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:09:25.210 20:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:09:25.210 20:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:25.210 20:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:09:25.210 20:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k 
]] 00:09:25.210 20:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:25.210 20:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:25.210 20:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:25.210 20:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:27.743 20:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:27.743 00:09:27.743 real 0m6.249s 00:09:27.743 user 0m9.674s 00:09:27.743 sys 0m2.093s 00:09:27.743 20:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:27.743 20:50:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:27.743 ************************************ 00:09:27.743 END TEST nvmf_bdevio 00:09:27.743 ************************************ 00:09:27.743 20:50:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:27.743 00:09:27.743 real 3m55.998s 00:09:27.743 user 10m17.184s 00:09:27.743 sys 1m7.335s 00:09:27.743 20:50:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:27.743 20:50:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:27.743 ************************************ 00:09:27.743 END TEST nvmf_target_core 00:09:27.743 ************************************ 00:09:27.743 20:50:18 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:27.743 20:50:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:27.743 20:50:18 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:27.743 20:50:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
00:09:27.743 ************************************ 00:09:27.743 START TEST nvmf_target_extra 00:09:27.743 ************************************ 00:09:27.743 20:50:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:27.743 * Looking for test storage... 00:09:27.743 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:27.743 20:50:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:27.743 20:50:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:09:27.743 20:50:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:27.743 20:50:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:27.743 20:50:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:27.743 20:50:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:27.743 20:50:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:27.743 20:50:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:09:27.743 20:50:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:09:27.743 20:50:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:09:27.743 20:50:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:09:27.743 20:50:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:09:27.743 20:50:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:09:27.743 20:50:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:09:27.743 20:50:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:27.743 20:50:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:09:27.743 20:50:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:09:27.743 20:50:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:27.743 20:50:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:27.743 20:50:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:09:27.743 20:50:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:09:27.743 20:50:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:27.743 20:50:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:09:27.743 20:50:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:09:27.743 20:50:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:09:27.743 20:50:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:09:27.743 20:50:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:27.743 20:50:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:09:27.743 20:50:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:09:27.743 20:50:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:27.743 20:50:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:27.743 20:50:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:09:27.743 20:50:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:27.743 20:50:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:27.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.743 --rc genhtml_branch_coverage=1 00:09:27.743 --rc genhtml_function_coverage=1 00:09:27.743 --rc genhtml_legend=1 00:09:27.743 --rc geninfo_all_blocks=1 
00:09:27.743 --rc geninfo_unexecuted_blocks=1 00:09:27.743 00:09:27.743 ' 00:09:27.743 20:50:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:27.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.743 --rc genhtml_branch_coverage=1 00:09:27.743 --rc genhtml_function_coverage=1 00:09:27.743 --rc genhtml_legend=1 00:09:27.743 --rc geninfo_all_blocks=1 00:09:27.743 --rc geninfo_unexecuted_blocks=1 00:09:27.743 00:09:27.743 ' 00:09:27.743 20:50:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:27.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.743 --rc genhtml_branch_coverage=1 00:09:27.743 --rc genhtml_function_coverage=1 00:09:27.743 --rc genhtml_legend=1 00:09:27.743 --rc geninfo_all_blocks=1 00:09:27.743 --rc geninfo_unexecuted_blocks=1 00:09:27.743 00:09:27.743 ' 00:09:27.743 20:50:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:27.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.743 --rc genhtml_branch_coverage=1 00:09:27.743 --rc genhtml_function_coverage=1 00:09:27.743 --rc genhtml_legend=1 00:09:27.743 --rc geninfo_all_blocks=1 00:09:27.743 --rc geninfo_unexecuted_blocks=1 00:09:27.743 00:09:27.743 ' 00:09:27.743 20:50:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:27.743 20:50:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:27.743 20:50:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:27.743 20:50:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:27.743 20:50:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:27.743 20:50:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:27.743 20:50:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:27.744 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:27.744 ************************************ 00:09:27.744 START TEST nvmf_example 00:09:27.744 ************************************ 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:27.744 * Looking for test storage... 00:09:27.744 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:09:27.744 
20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:27.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.744 --rc genhtml_branch_coverage=1 00:09:27.744 --rc genhtml_function_coverage=1 00:09:27.744 --rc genhtml_legend=1 00:09:27.744 --rc geninfo_all_blocks=1 00:09:27.744 --rc geninfo_unexecuted_blocks=1 00:09:27.744 00:09:27.744 ' 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:27.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.744 --rc genhtml_branch_coverage=1 00:09:27.744 --rc genhtml_function_coverage=1 00:09:27.744 --rc genhtml_legend=1 00:09:27.744 --rc geninfo_all_blocks=1 00:09:27.744 --rc geninfo_unexecuted_blocks=1 00:09:27.744 00:09:27.744 ' 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:27.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.744 --rc genhtml_branch_coverage=1 00:09:27.744 --rc genhtml_function_coverage=1 00:09:27.744 --rc genhtml_legend=1 00:09:27.744 --rc geninfo_all_blocks=1 00:09:27.744 --rc geninfo_unexecuted_blocks=1 00:09:27.744 00:09:27.744 ' 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:27.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.744 --rc 
genhtml_branch_coverage=1 00:09:27.744 --rc genhtml_function_coverage=1 00:09:27.744 --rc genhtml_legend=1 00:09:27.744 --rc geninfo_all_blocks=1 00:09:27.744 --rc geninfo_unexecuted_blocks=1 00:09:27.744 00:09:27.744 ' 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:27.744 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:27.745 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:27.745 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:27.745 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:27.745 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:27.745 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:27.745 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:27.745 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:27.745 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:27.745 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:27.745 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:27.745 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:27.745 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:27.745 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:27.745 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:27.745 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:09:27.745 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:27.745 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:27.745 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:27.745 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.745 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.745 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.745 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:09:27.745 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.745 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:09:27.745 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:27.745 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:27.745 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:27.745 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:27.745 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:27.745 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:27.745 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:27.745 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:27.745 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:27.745 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:27.745 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:09:27.745 20:50:18 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:09:27.745 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:09:27.745 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:09:27.745 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:09:27.745 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:09:27.745 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:09:27.745 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:09:27.745 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:27.745 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:27.745 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:09:27.745 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:27.745 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:27.745 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:27.745 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:27.745 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:27.745 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:27.745 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:27.745 
20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:27.745 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:27.745 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:27.745 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:09:27.745 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:30.279 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:30.279 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:09:30.279 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:30.279 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:30.279 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:30.279 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:30.279 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:30.279 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:09:30.279 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:30.279 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:09:30.279 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:09:30.279 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:09:30.279 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:09:30.279 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:09:30.279 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:09:30.279 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:30.279 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:30.279 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:30.279 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:30.279 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:30.279 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:30.279 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:30.280 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:30.280 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:30.280 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:30.280 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:30.280 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:30.280 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:30.280 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:30.280 20:50:20 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:30.280 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:30.280 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:30.280 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:30.280 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:30.280 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:30.280 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:30.280 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:30.280 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:30.280 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:30.280 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:30.280 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:30.280 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:30.280 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:30.280 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:30.280 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:30.280 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:30.280 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:30.280 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:09:30.280 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:30.280 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:30.280 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:30.280 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:30.280 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:30.280 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:30.280 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:30.280 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:30.280 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:30.280 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:30.280 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:30.280 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:30.280 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:30.280 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:30.280 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:30.280 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:30.280 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:30.280 20:50:20 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:30.280 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:30.280 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:30.280 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:30.280 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:30.280 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:30.280 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:30.280 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:30.280 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:09:30.280 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:30.280 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:30.280 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:30.280 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:30.280 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:30.280 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:30.280 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:30.280 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:30.280 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:30.280 
20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:30.280 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:30.280 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:30.280 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:30.280 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:30.280 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:30.280 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:30.280 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:30.280 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:30.280 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:30.280 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:30.280 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:30.280 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:30.280 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:30.280 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:30.280 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:30.280 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:30.280 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:30.280 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.317 ms 00:09:30.280 00:09:30.280 --- 10.0.0.2 ping statistics --- 00:09:30.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:30.280 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:09:30.280 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:30.280 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:30.280 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:09:30.280 00:09:30.280 --- 10.0.0.1 ping statistics --- 00:09:30.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:30.280 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:09:30.281 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:30.281 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:09:30.281 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:30.281 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:30.281 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:30.281 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:30.281 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:30.281 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:30.281 20:50:20 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:30.281 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:09:30.281 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:09:30.281 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:30.281 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:30.281 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:09:30.281 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:09:30.281 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3905527 00:09:30.281 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:09:30.281 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:30.281 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3905527 00:09:30.281 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 3905527 ']' 00:09:30.281 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:30.281 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:30.281 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:09:30.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:30.281 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:30.281 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:30.281 20:50:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:30.281 20:50:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:09:30.281 20:50:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:09:30.281 20:50:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:30.281 20:50:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:30.281 20:50:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:30.281 20:50:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.281 20:50:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:30.281 20:50:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.281 20:50:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:09:30.281 20:50:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.281 20:50:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:30.281 20:50:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.281 20:50:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:09:30.281 
20:50:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:30.281 20:50:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.281 20:50:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:30.281 20:50:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.281 20:50:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:09:30.281 20:50:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:30.281 20:50:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.281 20:50:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:30.281 20:50:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.281 20:50:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:30.281 20:50:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.281 20:50:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:30.281 20:50:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.281 20:50:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:09:30.281 20:50:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 
4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:09:42.480 Initializing NVMe Controllers 00:09:42.480 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:42.480 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:42.480 Initialization complete. Launching workers. 00:09:42.480 ======================================================== 00:09:42.480 Latency(us) 00:09:42.480 Device Information : IOPS MiB/s Average min max 00:09:42.480 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14867.95 58.08 4304.16 704.69 15341.62 00:09:42.480 ======================================================== 00:09:42.480 Total : 14867.95 58.08 4304.16 704.69 15341.62 00:09:42.480 00:09:42.480 20:50:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:09:42.480 20:50:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:09:42.480 20:50:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:42.480 20:50:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:09:42.480 20:50:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:42.480 20:50:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:09:42.480 20:50:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:42.480 20:50:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:42.480 rmmod nvme_tcp 00:09:42.480 rmmod nvme_fabrics 00:09:42.480 rmmod nvme_keyring 00:09:42.480 20:50:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:42.480 20:50:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
00:09:42.480 20:50:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:09:42.480 20:50:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 3905527 ']' 00:09:42.480 20:50:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 3905527 00:09:42.480 20:50:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 3905527 ']' 00:09:42.480 20:50:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 3905527 00:09:42.480 20:50:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:09:42.480 20:50:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:42.480 20:50:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3905527 00:09:42.480 20:50:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:09:42.480 20:50:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:09:42.480 20:50:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3905527' 00:09:42.480 killing process with pid 3905527 00:09:42.480 20:50:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 3905527 00:09:42.480 20:50:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 3905527 00:09:42.480 nvmf threads initialize successfully 00:09:42.480 bdev subsystem init successfully 00:09:42.480 created a nvmf target service 00:09:42.480 create targets's poll groups done 00:09:42.480 all subsystems of target started 00:09:42.480 nvmf target is running 00:09:42.480 all subsystems of target stopped 00:09:42.480 destroy targets's poll groups done 00:09:42.480 destroyed the nvmf target service 00:09:42.480 bdev subsystem 
finish successfully 00:09:42.480 nvmf threads destroy successfully 00:09:42.480 20:50:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:42.480 20:50:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:42.480 20:50:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:42.480 20:50:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:09:42.480 20:50:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:09:42.480 20:50:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:42.480 20:50:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:09:42.480 20:50:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:42.480 20:50:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:42.480 20:50:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:42.480 20:50:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:42.480 20:50:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:43.049 20:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:43.049 20:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:09:43.049 20:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:43.049 20:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:43.049 00:09:43.049 real 0m15.437s 00:09:43.049 user 0m42.346s 00:09:43.049 sys 0m3.370s 00:09:43.049 
20:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:43.049 20:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:43.049 ************************************ 00:09:43.049 END TEST nvmf_example 00:09:43.049 ************************************ 00:09:43.049 20:50:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:43.049 20:50:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:43.049 20:50:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:43.049 20:50:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:43.049 ************************************ 00:09:43.049 START TEST nvmf_filesystem 00:09:43.049 ************************************ 00:09:43.049 20:50:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:43.049 * Looking for test storage... 
00:09:43.049 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:43.049 20:50:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:43.049 20:50:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:09:43.049 20:50:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:43.312 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:43.312 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:43.312 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:43.312 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:43.312 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:43.312 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:43.312 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:43.312 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:43.312 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:43.312 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:43.312 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:43.312 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:43.312 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:09:43.312 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:09:43.312 
20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:43.312 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:43.312 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:09:43.312 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:09:43.312 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:43.312 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:09:43.312 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:43.312 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:09:43.312 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:09:43.312 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:43.312 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:09:43.312 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:43.312 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:43.313 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:09:43.313 --rc genhtml_branch_coverage=1 00:09:43.313 --rc genhtml_function_coverage=1 00:09:43.313 --rc genhtml_legend=1 00:09:43.313 --rc geninfo_all_blocks=1 00:09:43.313 --rc geninfo_unexecuted_blocks=1 00:09:43.313 00:09:43.313 ' 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:43.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.313 --rc genhtml_branch_coverage=1 00:09:43.313 --rc genhtml_function_coverage=1 00:09:43.313 --rc genhtml_legend=1 00:09:43.313 --rc geninfo_all_blocks=1 00:09:43.313 --rc geninfo_unexecuted_blocks=1 00:09:43.313 00:09:43.313 ' 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:43.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.313 --rc genhtml_branch_coverage=1 00:09:43.313 --rc genhtml_function_coverage=1 00:09:43.313 --rc genhtml_legend=1 00:09:43.313 --rc geninfo_all_blocks=1 00:09:43.313 --rc geninfo_unexecuted_blocks=1 00:09:43.313 00:09:43.313 ' 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:43.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.313 --rc genhtml_branch_coverage=1 00:09:43.313 --rc genhtml_function_coverage=1 00:09:43.313 --rc genhtml_legend=1 00:09:43.313 --rc geninfo_all_blocks=1 00:09:43.313 --rc geninfo_unexecuted_blocks=1 00:09:43.313 00:09:43.313 ' 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:09:43.313 20:50:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:09:43.313 20:50:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:09:43.313 20:50:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:09:43.313 20:50:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:09:43.313 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:09:43.314 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:09:43.314 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:09:43.314 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:09:43.314 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:09:43.314 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:09:43.314 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:09:43.314 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:09:43.314 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:09:43.314 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:09:43.314 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:09:43.314 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:09:43.314 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:09:43.314 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:09:43.314 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:09:43.314 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:09:43.314 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:43.314 20:50:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:43.314 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:43.314 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:43.314 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:43.314 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:43.314 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:09:43.314 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:43.314 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:09:43.314 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:09:43.314 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:09:43.314 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:09:43.314 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:09:43.314 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:09:43.314 
20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:09:43.314 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:09:43.314 #define SPDK_CONFIG_H 00:09:43.314 #define SPDK_CONFIG_AIO_FSDEV 1 00:09:43.314 #define SPDK_CONFIG_APPS 1 00:09:43.314 #define SPDK_CONFIG_ARCH native 00:09:43.314 #undef SPDK_CONFIG_ASAN 00:09:43.314 #undef SPDK_CONFIG_AVAHI 00:09:43.314 #undef SPDK_CONFIG_CET 00:09:43.314 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:09:43.314 #define SPDK_CONFIG_COVERAGE 1 00:09:43.314 #define SPDK_CONFIG_CROSS_PREFIX 00:09:43.314 #undef SPDK_CONFIG_CRYPTO 00:09:43.314 #undef SPDK_CONFIG_CRYPTO_MLX5 00:09:43.314 #undef SPDK_CONFIG_CUSTOMOCF 00:09:43.314 #undef SPDK_CONFIG_DAOS 00:09:43.314 #define SPDK_CONFIG_DAOS_DIR 00:09:43.314 #define SPDK_CONFIG_DEBUG 1 00:09:43.314 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:09:43.314 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:43.314 #define SPDK_CONFIG_DPDK_INC_DIR 00:09:43.314 #define SPDK_CONFIG_DPDK_LIB_DIR 00:09:43.314 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:09:43.314 #undef SPDK_CONFIG_DPDK_UADK 00:09:43.314 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:43.314 #define SPDK_CONFIG_EXAMPLES 1 00:09:43.314 #undef SPDK_CONFIG_FC 00:09:43.314 #define SPDK_CONFIG_FC_PATH 00:09:43.314 #define SPDK_CONFIG_FIO_PLUGIN 1 00:09:43.314 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:09:43.314 #define SPDK_CONFIG_FSDEV 1 00:09:43.314 #undef SPDK_CONFIG_FUSE 00:09:43.314 #undef SPDK_CONFIG_FUZZER 00:09:43.314 #define SPDK_CONFIG_FUZZER_LIB 00:09:43.314 #undef SPDK_CONFIG_GOLANG 00:09:43.314 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:09:43.314 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:09:43.314 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:09:43.314 #define 
SPDK_CONFIG_HAVE_KEYUTILS 1 00:09:43.314 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:09:43.314 #undef SPDK_CONFIG_HAVE_LIBBSD 00:09:43.314 #undef SPDK_CONFIG_HAVE_LZ4 00:09:43.314 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:09:43.314 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:09:43.314 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:09:43.314 #define SPDK_CONFIG_IDXD 1 00:09:43.314 #define SPDK_CONFIG_IDXD_KERNEL 1 00:09:43.314 #undef SPDK_CONFIG_IPSEC_MB 00:09:43.314 #define SPDK_CONFIG_IPSEC_MB_DIR 00:09:43.314 #define SPDK_CONFIG_ISAL 1 00:09:43.314 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:09:43.314 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:09:43.314 #define SPDK_CONFIG_LIBDIR 00:09:43.314 #undef SPDK_CONFIG_LTO 00:09:43.314 #define SPDK_CONFIG_MAX_LCORES 128 00:09:43.314 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:09:43.314 #define SPDK_CONFIG_NVME_CUSE 1 00:09:43.314 #undef SPDK_CONFIG_OCF 00:09:43.314 #define SPDK_CONFIG_OCF_PATH 00:09:43.314 #define SPDK_CONFIG_OPENSSL_PATH 00:09:43.314 #undef SPDK_CONFIG_PGO_CAPTURE 00:09:43.314 #define SPDK_CONFIG_PGO_DIR 00:09:43.314 #undef SPDK_CONFIG_PGO_USE 00:09:43.314 #define SPDK_CONFIG_PREFIX /usr/local 00:09:43.314 #undef SPDK_CONFIG_RAID5F 00:09:43.314 #undef SPDK_CONFIG_RBD 00:09:43.314 #define SPDK_CONFIG_RDMA 1 00:09:43.314 #define SPDK_CONFIG_RDMA_PROV verbs 00:09:43.314 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:09:43.314 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:09:43.314 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:09:43.314 #define SPDK_CONFIG_SHARED 1 00:09:43.314 #undef SPDK_CONFIG_SMA 00:09:43.314 #define SPDK_CONFIG_TESTS 1 00:09:43.314 #undef SPDK_CONFIG_TSAN 00:09:43.314 #define SPDK_CONFIG_UBLK 1 00:09:43.314 #define SPDK_CONFIG_UBSAN 1 00:09:43.314 #undef SPDK_CONFIG_UNIT_TESTS 00:09:43.314 #undef SPDK_CONFIG_URING 00:09:43.314 #define SPDK_CONFIG_URING_PATH 00:09:43.314 #undef SPDK_CONFIG_URING_ZNS 00:09:43.314 #undef SPDK_CONFIG_USDT 00:09:43.314 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:09:43.314 
#undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:09:43.314 #define SPDK_CONFIG_VFIO_USER 1 00:09:43.314 #define SPDK_CONFIG_VFIO_USER_DIR 00:09:43.314 #define SPDK_CONFIG_VHOST 1 00:09:43.314 #define SPDK_CONFIG_VIRTIO 1 00:09:43.314 #undef SPDK_CONFIG_VTUNE 00:09:43.314 #define SPDK_CONFIG_VTUNE_DIR 00:09:43.314 #define SPDK_CONFIG_WERROR 1 00:09:43.314 #define SPDK_CONFIG_WPDK_DIR 00:09:43.314 #undef SPDK_CONFIG_XNVME 00:09:43.314 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:09:43.314 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:09:43.314 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:43.314 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:43.314 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:43.314 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:43.314 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:43.314 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:09:43.314 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:09:43.315 20:50:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:09:43.315 
20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:09:43.315 20:50:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:09:43.315 
20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:09:43.315 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:09:43.316 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:09:43.316 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:09:43.316 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:09:43.316 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:09:43.316 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:09:43.316 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:09:43.316 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:09:43.316 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:09:43.316 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:09:43.316 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:09:43.316 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:09:43.316 20:50:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:09:43.316 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:09:43.316 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:09:43.316 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:09:43.316 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:09:43.316 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:09:43.316 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:09:43.316 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:09:43.316 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:09:43.316 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:09:43.316 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:09:43.316 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:09:43.316 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:09:43.316 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:09:43.316 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:09:43.316 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:09:43.316 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:09:43.316 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:09:43.316 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:09:43.316 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:09:43.316 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:09:43.316 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:09:43.316 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:09:43.316 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:09:43.316 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:09:43.316 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:09:43.316 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:09:43.316 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:09:43.316 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:09:43.316 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:09:43.316 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:09:43.316 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:09:43.316 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:09:43.316 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:09:43.316 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:09:43.316 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:09:43.316 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:09:43.316 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:43.316 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:43.316 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:43.316 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:43.316 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:43.316 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:43.316 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:43.316 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:43.316 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:09:43.316 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:09:43.316 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:43.316 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:43.316 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:09:43.316 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:09:43.316 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:43.316 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:43.316 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:43.316 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:43.316 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:09:43.316 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:09:43.316 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:09:43.316 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:09:43.316 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:43.316 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:43.316 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j48 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 3907103 ]] 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 3907103 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.DZi3qo 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.DZi3qo/tests/target /tmp/spdk.DZi3qo 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=55193231360 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=61988515840 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6795284480 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:43.317 
20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30982889472 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30994255872 00:09:43.317 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11366400 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12375265280 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12397703168 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=22437888 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30992941056 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30994259968 00:09:43.318 20:50:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=1318912 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6198837248 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6198849536 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:09:43.318 * Looking for test storage... 
00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=55193231360 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=9009876992 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:43.318 20:50:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:43.318 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:09:43.318 20:50:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:43.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.318 --rc genhtml_branch_coverage=1 00:09:43.318 --rc genhtml_function_coverage=1 00:09:43.318 --rc genhtml_legend=1 00:09:43.318 --rc geninfo_all_blocks=1 00:09:43.318 --rc geninfo_unexecuted_blocks=1 00:09:43.318 00:09:43.318 ' 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:43.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.318 --rc genhtml_branch_coverage=1 00:09:43.318 --rc genhtml_function_coverage=1 00:09:43.318 --rc genhtml_legend=1 00:09:43.318 --rc geninfo_all_blocks=1 00:09:43.318 --rc geninfo_unexecuted_blocks=1 00:09:43.318 00:09:43.318 ' 00:09:43.318 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:43.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.319 --rc genhtml_branch_coverage=1 00:09:43.319 --rc genhtml_function_coverage=1 00:09:43.319 --rc genhtml_legend=1 00:09:43.319 --rc geninfo_all_blocks=1 00:09:43.319 --rc geninfo_unexecuted_blocks=1 00:09:43.319 00:09:43.319 ' 00:09:43.319 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:43.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.319 --rc genhtml_branch_coverage=1 00:09:43.319 --rc genhtml_function_coverage=1 00:09:43.319 --rc genhtml_legend=1 00:09:43.319 --rc geninfo_all_blocks=1 00:09:43.319 --rc geninfo_unexecuted_blocks=1 00:09:43.319 00:09:43.319 ' 00:09:43.319 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:43.319 20:50:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:09:43.577 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:43.577 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:43.577 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:43.577 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:43.577 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:43.577 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:43.577 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:43.577 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:43.577 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:43.577 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:43.577 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:43.577 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:43.577 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:43.577 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:43.577 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:43.577 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:43.577 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:43.577 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:43.577 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:43.577 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:43.577 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:43.578 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.578 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.578 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.578 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:43.578 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.578 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:09:43.578 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:43.578 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:43.578 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:43.578 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:43.578 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:43.578 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:43.578 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:43.578 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:43.578 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:43.578 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:43.578 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:09:43.578 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:43.578 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:09:43.578 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:43.578 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:43.578 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:43.578 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:43.578 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:43.578 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:43.578 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:43.578 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:43.578 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:43.578 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:43.578 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:09:43.578 20:50:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:45.481 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:45.481 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:09:45.481 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:09:45.481 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:45.481 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:45.481 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:45.481 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:45.481 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:09:45.481 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:45.481 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:09:45.481 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:09:45.481 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:09:45.481 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:09:45.481 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:09:45.481 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:09:45.481 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:45.481 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:45.481 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:45.481 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:45.481 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:45.481 20:50:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:45.481 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:45.481 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:45.481 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:45.481 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:45.481 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:45.481 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:45.482 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:45.482 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:45.482 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:45.482 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:45.482 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:45.482 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:45.482 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:45.482 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:45.482 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:45.482 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:09:45.482 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:45.482 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:45.482 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:45.482 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:45.482 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:45.482 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:45.482 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:45.482 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:45.482 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:45.482 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:45.482 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:45.482 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:45.482 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:45.482 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:45.482 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:45.482 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:45.482 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:45.482 20:50:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:45.482 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:45.482 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:45.482 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:45.482 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:45.482 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:45.482 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:45.482 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:45.482 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:45.482 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:45.482 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:45.482 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:45.482 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:45.482 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:45.482 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:45.482 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:45.482 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:45.482 20:50:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:45.741 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:45.741 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:09:45.741 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:45.741 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:45.741 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:45.741 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:45.741 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:45.741 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:45.741 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:45.741 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:45.741 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:45.741 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:45.741 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:45.741 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:45.741 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:45.741 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:09:45.741 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:45.741 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:45.741 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:45.741 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:45.741 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:45.741 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:45.741 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:45.741 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:45.741 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:45.741 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:45.741 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:45.741 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:45.741 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:45.741 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:09:45.741 00:09:45.741 --- 10.0.0.2 ping statistics --- 00:09:45.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.741 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:09:45.741 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:45.741 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:45.741 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:09:45.741 00:09:45.741 --- 10.0.0.1 ping statistics --- 00:09:45.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.741 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:09:45.742 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:45.742 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:09:45.742 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:45.742 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:45.742 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:45.742 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:45.742 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:45.742 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:45.742 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:45.742 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:09:45.742 20:50:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:45.742 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:45.742 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:45.742 ************************************ 00:09:45.742 START TEST nvmf_filesystem_no_in_capsule 00:09:45.742 ************************************ 00:09:45.742 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:09:45.742 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:09:45.742 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:45.742 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:45.742 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:45.742 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:45.742 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3908856 00:09:45.742 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:45.742 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3908856 00:09:45.742 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@835 -- # '[' -z 3908856 ']' 00:09:45.742 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:45.742 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:45.742 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:45.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:45.742 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:45.742 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:45.742 [2024-11-26 20:50:36.651975] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:09:45.742 [2024-11-26 20:50:36.652095] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:46.000 [2024-11-26 20:50:36.726985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:46.000 [2024-11-26 20:50:36.786078] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:46.000 [2024-11-26 20:50:36.786136] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:46.000 [2024-11-26 20:50:36.786165] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:46.000 [2024-11-26 20:50:36.786176] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:46.000 [2024-11-26 20:50:36.786186] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:46.000 [2024-11-26 20:50:36.787741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:46.000 [2024-11-26 20:50:36.787800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:46.000 [2024-11-26 20:50:36.787867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:46.000 [2024-11-26 20:50:36.787870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.000 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:46.000 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:09:46.000 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:46.000 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:46.000 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:46.000 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:46.000 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:46.000 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:46.000 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.000 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:46.258 [2024-11-26 20:50:36.939224] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:46.258 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.258 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:46.258 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.258 20:50:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:46.258 Malloc1 00:09:46.258 20:50:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.258 20:50:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:46.258 20:50:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.259 20:50:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:46.259 20:50:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.259 20:50:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:46.259 20:50:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.259 20:50:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:46.259 20:50:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.259 20:50:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:46.259 20:50:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.259 20:50:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:46.259 [2024-11-26 20:50:37.140412] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:46.259 20:50:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.259 20:50:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:46.259 20:50:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:09:46.259 20:50:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:09:46.259 20:50:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:09:46.259 20:50:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:09:46.259 20:50:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:46.259 20:50:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.259 20:50:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:46.259 20:50:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.259 20:50:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:09:46.259 { 00:09:46.259 "name": "Malloc1", 00:09:46.259 "aliases": [ 00:09:46.259 "c1ec8840-8524-4420-966d-cbed9a491d34" 00:09:46.259 ], 00:09:46.259 "product_name": "Malloc disk", 00:09:46.259 "block_size": 512, 00:09:46.259 "num_blocks": 1048576, 00:09:46.259 "uuid": "c1ec8840-8524-4420-966d-cbed9a491d34", 00:09:46.259 "assigned_rate_limits": { 00:09:46.259 "rw_ios_per_sec": 0, 00:09:46.259 "rw_mbytes_per_sec": 0, 00:09:46.259 "r_mbytes_per_sec": 0, 00:09:46.259 "w_mbytes_per_sec": 0 00:09:46.259 }, 00:09:46.259 "claimed": true, 00:09:46.259 "claim_type": "exclusive_write", 00:09:46.259 "zoned": false, 00:09:46.259 "supported_io_types": { 00:09:46.259 "read": true, 00:09:46.259 "write": true, 00:09:46.259 "unmap": true, 00:09:46.259 "flush": true, 00:09:46.259 "reset": true, 00:09:46.259 "nvme_admin": false, 00:09:46.259 "nvme_io": false, 00:09:46.259 "nvme_io_md": false, 00:09:46.259 "write_zeroes": true, 00:09:46.259 "zcopy": true, 00:09:46.259 "get_zone_info": false, 00:09:46.259 "zone_management": false, 00:09:46.259 "zone_append": false, 00:09:46.259 "compare": false, 00:09:46.259 "compare_and_write": 
false, 00:09:46.259 "abort": true, 00:09:46.259 "seek_hole": false, 00:09:46.259 "seek_data": false, 00:09:46.259 "copy": true, 00:09:46.259 "nvme_iov_md": false 00:09:46.259 }, 00:09:46.259 "memory_domains": [ 00:09:46.259 { 00:09:46.259 "dma_device_id": "system", 00:09:46.259 "dma_device_type": 1 00:09:46.259 }, 00:09:46.259 { 00:09:46.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.259 "dma_device_type": 2 00:09:46.259 } 00:09:46.259 ], 00:09:46.259 "driver_specific": {} 00:09:46.259 } 00:09:46.259 ]' 00:09:46.259 20:50:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:09:46.517 20:50:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:09:46.517 20:50:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:09:46.517 20:50:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:09:46.517 20:50:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:09:46.517 20:50:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:09:46.517 20:50:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:46.517 20:50:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:47.083 20:50:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:09:47.083 20:50:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:09:47.083 20:50:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:47.083 20:50:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:47.083 20:50:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:09:49.128 20:50:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:49.128 20:50:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:49.128 20:50:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:49.128 20:50:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:49.128 20:50:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:49.128 20:50:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:09:49.128 20:50:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:49.128 20:50:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:49.128 20:50:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:49.128 20:50:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:49.128 20:50:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:49.128 20:50:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:49.128 20:50:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:49.128 20:50:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:49.128 20:50:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:49.128 20:50:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:49.128 20:50:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:49.386 20:50:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:50.319 20:50:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:09:51.252 20:50:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:09:51.252 20:50:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:51.252 20:50:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:51.252 20:50:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:51.252 20:50:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:51.252 ************************************ 00:09:51.252 START TEST filesystem_ext4 00:09:51.252 ************************************ 00:09:51.252 20:50:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:09:51.252 20:50:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:51.252 20:50:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:51.252 20:50:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:51.252 20:50:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:09:51.252 20:50:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:51.252 20:50:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:09:51.252 20:50:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:09:51.252 20:50:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:09:51.252 20:50:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:09:51.252 20:50:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:51.252 mke2fs 1.47.0 (5-Feb-2023) 00:09:51.252 Discarding device blocks: 0/522240 done 00:09:51.252 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:51.252 Filesystem UUID: 05598378-7e53-4940-b596-de48a3210e74 00:09:51.252 Superblock backups stored on blocks: 00:09:51.252 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:51.252 00:09:51.253 Allocating group tables: 0/64 done 00:09:51.253 Writing inode tables: 0/64 done 00:09:51.253 Creating journal (8192 blocks): done 00:09:51.510 Writing superblocks and filesystem accounting information: 0/64 done 00:09:51.510 00:09:51.510 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:09:51.510 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:58.076 20:50:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:58.076 20:50:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:09:58.076 20:50:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:58.076 20:50:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:09:58.076 20:50:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:09:58.076 20:50:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:58.076 20:50:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3908856 00:09:58.076 20:50:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:58.076 20:50:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:58.076 20:50:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:58.076 20:50:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:58.076 00:09:58.076 real 0m6.464s 00:09:58.076 user 0m0.021s 00:09:58.076 sys 0m0.061s 00:09:58.076 20:50:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:58.076 20:50:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:09:58.076 ************************************ 00:09:58.076 END TEST filesystem_ext4 00:09:58.076 ************************************ 00:09:58.076 20:50:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:58.076 
20:50:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:58.076 20:50:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:58.076 20:50:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:58.076 ************************************ 00:09:58.076 START TEST filesystem_btrfs 00:09:58.076 ************************************ 00:09:58.076 20:50:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:58.076 20:50:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:58.076 20:50:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:58.076 20:50:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:58.076 20:50:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:09:58.076 20:50:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:58.076 20:50:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:09:58.076 20:50:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:09:58.076 20:50:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:09:58.076 20:50:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:09:58.076 20:50:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:09:58.076 btrfs-progs v6.8.1 00:09:58.076 See https://btrfs.readthedocs.io for more information. 00:09:58.076 00:09:58.076 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:09:58.076 NOTE: several default settings have changed in version 5.15, please make sure 00:09:58.076 this does not affect your deployments: 00:09:58.076 - DUP for metadata (-m dup) 00:09:58.076 - enabled no-holes (-O no-holes) 00:09:58.076 - enabled free-space-tree (-R free-space-tree) 00:09:58.076 00:09:58.076 Label: (null) 00:09:58.076 UUID: f58128f4-b41a-4b7c-a4ab-af1854bbb052 00:09:58.076 Node size: 16384 00:09:58.076 Sector size: 4096 (CPU page size: 4096) 00:09:58.076 Filesystem size: 510.00MiB 00:09:58.076 Block group profiles: 00:09:58.076 Data: single 8.00MiB 00:09:58.076 Metadata: DUP 32.00MiB 00:09:58.076 System: DUP 8.00MiB 00:09:58.076 SSD detected: yes 00:09:58.076 Zoned device: no 00:09:58.076 Features: extref, skinny-metadata, no-holes, free-space-tree 00:09:58.076 Checksum: crc32c 00:09:58.076 Number of devices: 1 00:09:58.076 Devices: 00:09:58.076 ID SIZE PATH 00:09:58.076 1 510.00MiB /dev/nvme0n1p1 00:09:58.076 00:09:58.076 20:50:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:09:58.076 20:50:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:59.014 20:50:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:59.014 20:50:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:09:59.014 20:50:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:59.014 20:50:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:09:59.014 20:50:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:09:59.014 20:50:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:59.014 20:50:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3908856 00:09:59.014 20:50:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:59.014 20:50:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:59.014 20:50:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:59.014 20:50:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:59.014 00:09:59.014 real 0m1.334s 00:09:59.014 user 0m0.021s 00:09:59.014 sys 0m0.103s 00:09:59.014 20:50:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:59.014 
20:50:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:09:59.014 ************************************ 00:09:59.014 END TEST filesystem_btrfs 00:09:59.014 ************************************ 00:09:59.014 20:50:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:09:59.014 20:50:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:59.014 20:50:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:59.014 20:50:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:59.014 ************************************ 00:09:59.014 START TEST filesystem_xfs 00:09:59.014 ************************************ 00:09:59.014 20:50:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:09:59.014 20:50:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:09:59.014 20:50:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:59.014 20:50:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:59.014 20:50:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:09:59.014 20:50:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:59.014 20:50:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:09:59.014 20:50:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:09:59.014 20:50:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:09:59.014 20:50:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:09:59.014 20:50:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:09:59.014 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:09:59.014 = sectsz=512 attr=2, projid32bit=1 00:09:59.014 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:59.014 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:09:59.014 data = bsize=4096 blocks=130560, imaxpct=25 00:09:59.014 = sunit=0 swidth=0 blks 00:09:59.014 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:09:59.014 log =internal log bsize=4096 blocks=16384, version=2 00:09:59.014 = sectsz=512 sunit=0 blks, lazy-count=1 00:09:59.015 realtime =none extsz=4096 blocks=0, rtextents=0 00:09:59.974 Discarding blocks...Done. 
00:09:59.974 20:50:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:09:59.974 20:50:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:02.502 20:50:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:02.502 20:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:02.502 20:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:02.502 20:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:02.502 20:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:02.502 20:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:02.502 20:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3908856 00:10:02.502 20:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:02.502 20:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:02.502 20:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:02.502 20:50:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:02.502 00:10:02.502 real 0m3.249s 00:10:02.502 user 0m0.016s 00:10:02.502 sys 0m0.067s 00:10:02.502 20:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:02.502 20:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:02.502 ************************************ 00:10:02.502 END TEST filesystem_xfs 00:10:02.502 ************************************ 00:10:02.502 20:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:02.502 20:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:02.502 20:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:02.502 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:02.502 20:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:02.502 20:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:10:02.502 20:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:02.502 20:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:02.502 20:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:02.502 20:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:02.502 20:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:02.502 20:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:02.502 20:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.502 20:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:02.502 20:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.502 20:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:02.502 20:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3908856 00:10:02.502 20:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 3908856 ']' 00:10:02.502 20:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 3908856 00:10:02.502 20:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:02.502 20:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:02.502 20:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3908856 00:10:02.502 20:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:02.502 20:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:02.502 20:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3908856' 00:10:02.502 killing process with pid 3908856 00:10:02.502 20:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 3908856 00:10:02.502 20:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 3908856 00:10:03.068 20:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:03.068 00:10:03.068 real 0m17.258s 00:10:03.068 user 1m6.731s 00:10:03.068 sys 0m2.229s 00:10:03.068 20:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:03.068 20:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:03.068 ************************************ 00:10:03.068 END TEST nvmf_filesystem_no_in_capsule 00:10:03.068 ************************************ 00:10:03.068 20:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:03.068 20:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:03.068 20:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:03.068 20:50:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:03.068 ************************************ 00:10:03.068 START TEST nvmf_filesystem_in_capsule 00:10:03.068 ************************************ 00:10:03.068 20:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:10:03.068 20:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:03.068 20:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:03.068 20:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:03.068 20:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:03.068 20:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:03.068 20:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3911092 00:10:03.068 20:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:03.068 20:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3911092 00:10:03.068 20:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 3911092 ']' 00:10:03.069 20:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:03.069 20:50:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:03.069 20:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:03.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:03.069 20:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:03.069 20:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:03.069 [2024-11-26 20:50:53.964903] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:10:03.069 [2024-11-26 20:50:53.964979] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:03.327 [2024-11-26 20:50:54.041454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:03.327 [2024-11-26 20:50:54.101673] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:03.327 [2024-11-26 20:50:54.101739] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:03.327 [2024-11-26 20:50:54.101756] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:03.327 [2024-11-26 20:50:54.101770] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:03.327 [2024-11-26 20:50:54.101782] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:03.327 [2024-11-26 20:50:54.103432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:03.327 [2024-11-26 20:50:54.103487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:03.327 [2024-11-26 20:50:54.103607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:03.327 [2024-11-26 20:50:54.103611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.327 20:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:03.327 20:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:03.327 20:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:03.327 20:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:03.327 20:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:03.327 20:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:03.327 20:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:03.327 20:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:03.327 20:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.327 20:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:03.327 [2024-11-26 20:50:54.252193] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:03.327 20:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.327 20:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:03.327 20:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.327 20:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:03.586 Malloc1 00:10:03.586 20:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.586 20:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:03.586 20:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.586 20:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:03.586 20:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.586 20:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:03.586 20:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.586 20:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:03.587 20:50:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.587 20:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:03.587 20:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.587 20:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:03.587 [2024-11-26 20:50:54.458439] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:03.587 20:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.587 20:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:03.587 20:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:03.587 20:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:03.587 20:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:03.587 20:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:03.587 20:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:03.587 20:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.587 20:50:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:03.587 20:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.587 20:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:03.587 { 00:10:03.587 "name": "Malloc1", 00:10:03.587 "aliases": [ 00:10:03.587 "297e6f32-c2c0-44ce-b91c-792edfa1220e" 00:10:03.587 ], 00:10:03.587 "product_name": "Malloc disk", 00:10:03.587 "block_size": 512, 00:10:03.587 "num_blocks": 1048576, 00:10:03.587 "uuid": "297e6f32-c2c0-44ce-b91c-792edfa1220e", 00:10:03.587 "assigned_rate_limits": { 00:10:03.587 "rw_ios_per_sec": 0, 00:10:03.587 "rw_mbytes_per_sec": 0, 00:10:03.587 "r_mbytes_per_sec": 0, 00:10:03.587 "w_mbytes_per_sec": 0 00:10:03.587 }, 00:10:03.587 "claimed": true, 00:10:03.587 "claim_type": "exclusive_write", 00:10:03.587 "zoned": false, 00:10:03.587 "supported_io_types": { 00:10:03.587 "read": true, 00:10:03.587 "write": true, 00:10:03.587 "unmap": true, 00:10:03.587 "flush": true, 00:10:03.587 "reset": true, 00:10:03.587 "nvme_admin": false, 00:10:03.587 "nvme_io": false, 00:10:03.587 "nvme_io_md": false, 00:10:03.587 "write_zeroes": true, 00:10:03.587 "zcopy": true, 00:10:03.587 "get_zone_info": false, 00:10:03.587 "zone_management": false, 00:10:03.587 "zone_append": false, 00:10:03.587 "compare": false, 00:10:03.587 "compare_and_write": false, 00:10:03.587 "abort": true, 00:10:03.587 "seek_hole": false, 00:10:03.587 "seek_data": false, 00:10:03.587 "copy": true, 00:10:03.587 "nvme_iov_md": false 00:10:03.587 }, 00:10:03.587 "memory_domains": [ 00:10:03.587 { 00:10:03.587 "dma_device_id": "system", 00:10:03.587 "dma_device_type": 1 00:10:03.587 }, 00:10:03.587 { 00:10:03.587 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.587 "dma_device_type": 2 00:10:03.587 } 00:10:03.587 ], 00:10:03.587 
"driver_specific": {} 00:10:03.587 } 00:10:03.587 ]' 00:10:03.587 20:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:03.587 20:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:03.587 20:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:03.845 20:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:03.845 20:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:03.845 20:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:03.845 20:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:03.845 20:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:04.410 20:50:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:04.410 20:50:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:04.410 20:50:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:04.410 20:50:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:10:04.410 20:50:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:06.311 20:50:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:06.311 20:50:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:06.311 20:50:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:06.311 20:50:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:06.311 20:50:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:06.311 20:50:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:06.311 20:50:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:06.311 20:50:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:06.311 20:50:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:06.311 20:50:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:06.311 20:50:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:06.311 20:50:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:06.311 20:50:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:06.311 20:50:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:06.311 20:50:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:06.311 20:50:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:06.311 20:50:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:06.571 20:50:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:07.136 20:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:08.507 20:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:10:08.507 20:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:08.507 20:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:08.507 20:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:08.507 20:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:08.507 ************************************ 00:10:08.507 START TEST filesystem_in_capsule_ext4 00:10:08.507 ************************************ 00:10:08.507 20:50:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:08.508 20:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:08.508 20:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:08.508 20:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:08.508 20:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:08.508 20:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:08.508 20:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:08.508 20:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:08.508 20:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:08.508 20:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:08.508 20:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:08.508 mke2fs 1.47.0 (5-Feb-2023) 00:10:08.508 Discarding device blocks: 
0/522240 done 00:10:08.508 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:08.508 Filesystem UUID: 15d54850-caab-476a-b643-25b627445ac0 00:10:08.508 Superblock backups stored on blocks: 00:10:08.508 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:08.508 00:10:08.508 Allocating group tables: 0/64 done 00:10:08.508 Writing inode tables: 0/64 done 00:10:09.441 Creating journal (8192 blocks): done 00:10:09.441 Writing superblocks and filesystem accounting information: 0/64 done 00:10:09.441 00:10:09.441 20:51:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:09.441 20:51:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:15.993 20:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:15.993 20:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:10:15.993 20:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:15.993 20:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:10:15.993 20:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:15.993 20:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:15.993 20:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 3911092 00:10:15.993 20:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:15.993 20:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:15.993 20:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:15.993 20:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:15.993 00:10:15.993 real 0m7.367s 00:10:15.993 user 0m0.011s 00:10:15.993 sys 0m0.062s 00:10:15.993 20:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:15.993 20:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:15.993 ************************************ 00:10:15.993 END TEST filesystem_in_capsule_ext4 00:10:15.993 ************************************ 00:10:15.993 20:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:15.993 20:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:15.993 20:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:15.993 20:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:15.993 ************************************ 00:10:15.993 START 
TEST filesystem_in_capsule_btrfs 00:10:15.993 ************************************ 00:10:15.993 20:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:15.993 20:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:15.993 20:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:15.993 20:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:15.993 20:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:15.993 20:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:15.993 20:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:15.993 20:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:15.993 20:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:15.993 20:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:15.993 20:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:15.993 btrfs-progs v6.8.1 00:10:15.993 See https://btrfs.readthedocs.io for more information. 00:10:15.993 00:10:15.993 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:15.993 NOTE: several default settings have changed in version 5.15, please make sure 00:10:15.993 this does not affect your deployments: 00:10:15.993 - DUP for metadata (-m dup) 00:10:15.993 - enabled no-holes (-O no-holes) 00:10:15.994 - enabled free-space-tree (-R free-space-tree) 00:10:15.994 00:10:15.994 Label: (null) 00:10:15.994 UUID: 6aeddefb-58f1-47c9-a7e7-ccfe472b1d5b 00:10:15.994 Node size: 16384 00:10:15.994 Sector size: 4096 (CPU page size: 4096) 00:10:15.994 Filesystem size: 510.00MiB 00:10:15.994 Block group profiles: 00:10:15.994 Data: single 8.00MiB 00:10:15.994 Metadata: DUP 32.00MiB 00:10:15.994 System: DUP 8.00MiB 00:10:15.994 SSD detected: yes 00:10:15.994 Zoned device: no 00:10:15.994 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:15.994 Checksum: crc32c 00:10:15.994 Number of devices: 1 00:10:15.994 Devices: 00:10:15.994 ID SIZE PATH 00:10:15.994 1 510.00MiB /dev/nvme0n1p1 00:10:15.994 00:10:15.994 20:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:15.994 20:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:16.251 20:51:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:16.509 20:51:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:10:16.510 20:51:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:16.510 20:51:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:10:16.510 20:51:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:16.510 20:51:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:16.510 20:51:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3911092 00:10:16.510 20:51:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:16.510 20:51:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:16.510 20:51:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:16.510 20:51:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:16.510 00:10:16.510 real 0m0.797s 00:10:16.510 user 0m0.014s 00:10:16.510 sys 0m0.104s 00:10:16.510 20:51:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:16.510 20:51:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:16.510 ************************************ 00:10:16.510 END TEST filesystem_in_capsule_btrfs 00:10:16.510 ************************************ 00:10:16.510 20:51:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:10:16.510 20:51:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:16.510 20:51:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:16.510 20:51:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:16.510 ************************************ 00:10:16.510 START TEST filesystem_in_capsule_xfs 00:10:16.510 ************************************ 00:10:16.510 20:51:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:16.510 20:51:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:16.510 20:51:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:16.510 20:51:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:16.510 20:51:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:16.510 20:51:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:16.510 20:51:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:16.510 
20:51:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:10:16.510 20:51:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:16.510 20:51:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:16.510 20:51:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:16.510 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:16.510 = sectsz=512 attr=2, projid32bit=1 00:10:16.510 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:16.510 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:16.510 data = bsize=4096 blocks=130560, imaxpct=25 00:10:16.510 = sunit=0 swidth=0 blks 00:10:16.510 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:16.510 log =internal log bsize=4096 blocks=16384, version=2 00:10:16.510 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:16.510 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:17.443 Discarding blocks...Done. 
00:10:17.444 20:51:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:17.444 20:51:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:19.973 20:51:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:19.973 20:51:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:10:19.973 20:51:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:19.973 20:51:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:10:19.973 20:51:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:10:19.973 20:51:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:19.973 20:51:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3911092 00:10:19.973 20:51:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:19.973 20:51:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:19.973 20:51:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:10:19.973 20:51:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:19.973 00:10:19.973 real 0m3.334s 00:10:19.973 user 0m0.015s 00:10:19.973 sys 0m0.061s 00:10:19.973 20:51:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:19.973 20:51:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:19.973 ************************************ 00:10:19.973 END TEST filesystem_in_capsule_xfs 00:10:19.973 ************************************ 00:10:19.973 20:51:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:20.232 20:51:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:20.232 20:51:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:20.232 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:20.232 20:51:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:20.232 20:51:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:10:20.232 20:51:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:20.232 20:51:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:20.232 20:51:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:20.232 20:51:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:20.232 20:51:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:20.232 20:51:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:20.232 20:51:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.232 20:51:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:20.232 20:51:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.232 20:51:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:20.232 20:51:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3911092 00:10:20.232 20:51:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 3911092 ']' 00:10:20.232 20:51:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 3911092 00:10:20.232 20:51:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:20.232 20:51:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:20.232 20:51:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3911092 00:10:20.232 20:51:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:20.233 20:51:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:20.233 20:51:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3911092' 00:10:20.233 killing process with pid 3911092 00:10:20.233 20:51:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 3911092 00:10:20.233 20:51:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 3911092 00:10:20.799 20:51:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:20.799 00:10:20.799 real 0m17.638s 00:10:20.799 user 1m8.280s 00:10:20.799 sys 0m2.109s 00:10:20.799 20:51:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:20.799 20:51:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:20.799 ************************************ 00:10:20.799 END TEST nvmf_filesystem_in_capsule 00:10:20.799 ************************************ 00:10:20.799 20:51:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:10:20.799 20:51:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:20.799 20:51:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:10:20.799 20:51:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:20.799 20:51:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:10:20.799 20:51:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:20.799 20:51:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:20.799 rmmod nvme_tcp 00:10:20.799 rmmod nvme_fabrics 00:10:20.799 rmmod nvme_keyring 00:10:20.799 20:51:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:20.799 20:51:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:10:20.799 20:51:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:10:20.799 20:51:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:20.799 20:51:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:20.799 20:51:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:20.799 20:51:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:20.799 20:51:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:10:20.799 20:51:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:10:20.799 20:51:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:20.799 20:51:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:10:20.799 20:51:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:20.799 20:51:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:20.799 20:51:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:10:20.799 20:51:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:20.799 20:51:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:23.332 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:23.332 00:10:23.332 real 0m39.786s 00:10:23.332 user 2m16.070s 00:10:23.332 sys 0m6.182s 00:10:23.332 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:23.332 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:23.332 ************************************ 00:10:23.332 END TEST nvmf_filesystem 00:10:23.332 ************************************ 00:10:23.332 20:51:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:23.332 20:51:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:23.332 20:51:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:23.332 20:51:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:23.332 ************************************ 00:10:23.332 START TEST nvmf_target_discovery 00:10:23.332 ************************************ 00:10:23.332 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:23.332 * Looking for test storage... 
00:10:23.332 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:23.332 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:23.332 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:10:23.332 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:23.332 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:23.332 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:23.332 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:23.332 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:23.332 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:10:23.332 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:10:23.332 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:10:23.332 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:10:23.332 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:10:23.332 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:10:23.332 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:10:23.332 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:23.332 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:10:23.332 
20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:10:23.332 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:23.332 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:23.332 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:10:23.332 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:10:23.332 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:23.332 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:10:23.332 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:10:23.332 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:10:23.332 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:10:23.332 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:23.332 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:10:23.332 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:10:23.332 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:23.332 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:23.332 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:10:23.332 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:10:23.332 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:23.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.332 --rc genhtml_branch_coverage=1 00:10:23.332 --rc genhtml_function_coverage=1 00:10:23.332 --rc genhtml_legend=1 00:10:23.332 --rc geninfo_all_blocks=1 00:10:23.332 --rc geninfo_unexecuted_blocks=1 00:10:23.332 00:10:23.333 ' 00:10:23.333 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:23.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.333 --rc genhtml_branch_coverage=1 00:10:23.333 --rc genhtml_function_coverage=1 00:10:23.333 --rc genhtml_legend=1 00:10:23.333 --rc geninfo_all_blocks=1 00:10:23.333 --rc geninfo_unexecuted_blocks=1 00:10:23.333 00:10:23.333 ' 00:10:23.333 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:23.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.333 --rc genhtml_branch_coverage=1 00:10:23.333 --rc genhtml_function_coverage=1 00:10:23.333 --rc genhtml_legend=1 00:10:23.333 --rc geninfo_all_blocks=1 00:10:23.333 --rc geninfo_unexecuted_blocks=1 00:10:23.333 00:10:23.333 ' 00:10:23.333 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:23.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.333 --rc genhtml_branch_coverage=1 00:10:23.333 --rc genhtml_function_coverage=1 00:10:23.333 --rc genhtml_legend=1 00:10:23.333 --rc geninfo_all_blocks=1 00:10:23.333 --rc geninfo_unexecuted_blocks=1 00:10:23.333 00:10:23.333 ' 00:10:23.333 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:23.333 20:51:13 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:10:23.333 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:23.333 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:23.333 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:23.333 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:23.333 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:23.333 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:23.333 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:23.333 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:23.333 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:23.333 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:23.333 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:23.333 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:23.333 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:23.333 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:23.333 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:10:23.333 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:23.333 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:23.333 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:10:23.333 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:23.333 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:23.333 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:23.333 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.333 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.333 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.333 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:10:23.333 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.333 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:10:23.333 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:23.333 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:23.333 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:23.333 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:23.333 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:23.333 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:23.333 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:23.333 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:23.333 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:23.333 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:23.333 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:10:23.333 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:10:23.333 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:10:23.333 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:10:23.333 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:10:23.333 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:23.333 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:23.333 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:23.333 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:23.333 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:23.333 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:23.333 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:23.333 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:23.333 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:23.333 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:23.333 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:10:23.333 20:51:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:25.238 20:51:16 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:25.238 20:51:16 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:25.238 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:25.238 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:25.238 20:51:16 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:25.238 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:25.238 20:51:16 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:25.238 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:25.238 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:25.497 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:25.497 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:25.497 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:10:25.497 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:25.497 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:25.497 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:25.497 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:25.497 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:10:25.497 00:10:25.497 --- 10.0.0.2 ping statistics --- 00:10:25.497 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:25.497 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:10:25.497 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:25.497 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:25.497 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:10:25.497 00:10:25.497 --- 10.0.0.1 ping statistics --- 00:10:25.497 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:25.497 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:10:25.497 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:25.497 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:10:25.497 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:25.497 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:25.497 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:25.497 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:25.497 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:25.497 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:25.497 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:25.497 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:10:25.497 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:25.497 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:25.497 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:25.497 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=3915883 00:10:25.497 20:51:16 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:25.497 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 3915883 00:10:25.497 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 3915883 ']' 00:10:25.497 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:25.497 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:25.497 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:25.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:25.497 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:25.497 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:25.497 [2024-11-26 20:51:16.321269] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:10:25.497 [2024-11-26 20:51:16.321345] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:25.497 [2024-11-26 20:51:16.401764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:25.756 [2024-11-26 20:51:16.467668] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:10:25.756 [2024-11-26 20:51:16.467736] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:25.756 [2024-11-26 20:51:16.467753] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:25.756 [2024-11-26 20:51:16.467767] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:25.756 [2024-11-26 20:51:16.467778] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:25.756 [2024-11-26 20:51:16.469471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:25.756 [2024-11-26 20:51:16.469526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:25.756 [2024-11-26 20:51:16.469577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:25.756 [2024-11-26 20:51:16.469582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.756 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:25.756 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:10:25.756 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:25.756 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:25.756 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:25.756 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:25.756 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:25.756 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.756 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:25.756 [2024-11-26 20:51:16.626045] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:25.756 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.756 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:10:25.756 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:25.756 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:10:25.756 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.756 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:25.756 Null1 00:10:25.756 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.756 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:25.756 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.756 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:25.757 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.757 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:10:25.757 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.757 
20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:25.757 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.757 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:25.757 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.757 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:25.757 [2024-11-26 20:51:16.677901] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:25.757 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.757 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:25.757 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:10:25.757 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.757 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:25.757 Null2 00:10:25.757 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.757 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:10:25.757 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.757 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:26.015 
20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.015 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:10:26.015 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.015 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:26.015 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.015 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:26.015 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.015 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:26.015 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.015 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:26.015 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:10:26.015 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.015 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:26.015 Null3 00:10:26.015 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.015 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:10:26.015 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.015 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:26.015 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.015 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:10:26.015 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.015 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:26.015 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.015 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:10:26.015 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.015 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:26.015 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.015 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:26.015 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:10:26.015 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.015 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:26.015 Null4 00:10:26.015 
20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.015 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:10:26.015 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.015 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:26.015 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.015 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:10:26.015 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.015 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:26.015 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.015 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:10:26.015 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.015 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:26.015 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.015 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:26.015 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.015 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:26.015 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.015 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:10:26.015 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.015 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:26.015 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.016 20:51:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:10:26.274 00:10:26.274 Discovery Log Number of Records 6, Generation counter 6 00:10:26.274 =====Discovery Log Entry 0====== 00:10:26.274 trtype: tcp 00:10:26.274 adrfam: ipv4 00:10:26.274 subtype: current discovery subsystem 00:10:26.274 treq: not required 00:10:26.274 portid: 0 00:10:26.274 trsvcid: 4420 00:10:26.274 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:26.274 traddr: 10.0.0.2 00:10:26.274 eflags: explicit discovery connections, duplicate discovery information 00:10:26.274 sectype: none 00:10:26.274 =====Discovery Log Entry 1====== 00:10:26.274 trtype: tcp 00:10:26.274 adrfam: ipv4 00:10:26.274 subtype: nvme subsystem 00:10:26.274 treq: not required 00:10:26.274 portid: 0 00:10:26.274 trsvcid: 4420 00:10:26.274 subnqn: nqn.2016-06.io.spdk:cnode1 00:10:26.274 traddr: 10.0.0.2 00:10:26.274 eflags: none 00:10:26.274 sectype: none 00:10:26.274 =====Discovery Log Entry 2====== 00:10:26.274 
trtype: tcp 00:10:26.274 adrfam: ipv4 00:10:26.274 subtype: nvme subsystem 00:10:26.274 treq: not required 00:10:26.274 portid: 0 00:10:26.274 trsvcid: 4420 00:10:26.274 subnqn: nqn.2016-06.io.spdk:cnode2 00:10:26.274 traddr: 10.0.0.2 00:10:26.274 eflags: none 00:10:26.274 sectype: none 00:10:26.274 =====Discovery Log Entry 3====== 00:10:26.274 trtype: tcp 00:10:26.274 adrfam: ipv4 00:10:26.274 subtype: nvme subsystem 00:10:26.274 treq: not required 00:10:26.274 portid: 0 00:10:26.274 trsvcid: 4420 00:10:26.274 subnqn: nqn.2016-06.io.spdk:cnode3 00:10:26.274 traddr: 10.0.0.2 00:10:26.274 eflags: none 00:10:26.274 sectype: none 00:10:26.274 =====Discovery Log Entry 4====== 00:10:26.274 trtype: tcp 00:10:26.274 adrfam: ipv4 00:10:26.274 subtype: nvme subsystem 00:10:26.274 treq: not required 00:10:26.274 portid: 0 00:10:26.274 trsvcid: 4420 00:10:26.274 subnqn: nqn.2016-06.io.spdk:cnode4 00:10:26.274 traddr: 10.0.0.2 00:10:26.274 eflags: none 00:10:26.274 sectype: none 00:10:26.274 =====Discovery Log Entry 5====== 00:10:26.274 trtype: tcp 00:10:26.274 adrfam: ipv4 00:10:26.274 subtype: discovery subsystem referral 00:10:26.274 treq: not required 00:10:26.274 portid: 0 00:10:26.274 trsvcid: 4430 00:10:26.274 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:26.274 traddr: 10.0.0.2 00:10:26.274 eflags: none 00:10:26.274 sectype: none 00:10:26.274 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:10:26.274 Perform nvmf subsystem discovery via RPC 00:10:26.274 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:10:26.274 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.274 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:26.274 [ 00:10:26.274 { 00:10:26.274 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:10:26.274 "subtype": "Discovery", 00:10:26.274 "listen_addresses": [ 00:10:26.274 { 00:10:26.274 "trtype": "TCP", 00:10:26.274 "adrfam": "IPv4", 00:10:26.274 "traddr": "10.0.0.2", 00:10:26.274 "trsvcid": "4420" 00:10:26.274 } 00:10:26.274 ], 00:10:26.274 "allow_any_host": true, 00:10:26.274 "hosts": [] 00:10:26.274 }, 00:10:26.274 { 00:10:26.274 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:26.274 "subtype": "NVMe", 00:10:26.274 "listen_addresses": [ 00:10:26.274 { 00:10:26.274 "trtype": "TCP", 00:10:26.274 "adrfam": "IPv4", 00:10:26.274 "traddr": "10.0.0.2", 00:10:26.274 "trsvcid": "4420" 00:10:26.274 } 00:10:26.274 ], 00:10:26.274 "allow_any_host": true, 00:10:26.274 "hosts": [], 00:10:26.274 "serial_number": "SPDK00000000000001", 00:10:26.274 "model_number": "SPDK bdev Controller", 00:10:26.274 "max_namespaces": 32, 00:10:26.274 "min_cntlid": 1, 00:10:26.274 "max_cntlid": 65519, 00:10:26.274 "namespaces": [ 00:10:26.274 { 00:10:26.274 "nsid": 1, 00:10:26.274 "bdev_name": "Null1", 00:10:26.274 "name": "Null1", 00:10:26.274 "nguid": "C7D0B2B58E96486BA8CC39A1D3DCBDC1", 00:10:26.274 "uuid": "c7d0b2b5-8e96-486b-a8cc-39a1d3dcbdc1" 00:10:26.274 } 00:10:26.274 ] 00:10:26.274 }, 00:10:26.274 { 00:10:26.274 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:26.274 "subtype": "NVMe", 00:10:26.274 "listen_addresses": [ 00:10:26.274 { 00:10:26.274 "trtype": "TCP", 00:10:26.274 "adrfam": "IPv4", 00:10:26.274 "traddr": "10.0.0.2", 00:10:26.274 "trsvcid": "4420" 00:10:26.274 } 00:10:26.274 ], 00:10:26.274 "allow_any_host": true, 00:10:26.274 "hosts": [], 00:10:26.274 "serial_number": "SPDK00000000000002", 00:10:26.274 "model_number": "SPDK bdev Controller", 00:10:26.274 "max_namespaces": 32, 00:10:26.274 "min_cntlid": 1, 00:10:26.274 "max_cntlid": 65519, 00:10:26.274 "namespaces": [ 00:10:26.274 { 00:10:26.274 "nsid": 1, 00:10:26.274 "bdev_name": "Null2", 00:10:26.274 "name": "Null2", 00:10:26.274 "nguid": "FE53268084204780A2D2D62311492831", 
00:10:26.274 "uuid": "fe532680-8420-4780-a2d2-d62311492831" 00:10:26.274 } 00:10:26.274 ] 00:10:26.274 }, 00:10:26.274 { 00:10:26.274 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:10:26.274 "subtype": "NVMe", 00:10:26.274 "listen_addresses": [ 00:10:26.274 { 00:10:26.274 "trtype": "TCP", 00:10:26.274 "adrfam": "IPv4", 00:10:26.274 "traddr": "10.0.0.2", 00:10:26.274 "trsvcid": "4420" 00:10:26.274 } 00:10:26.274 ], 00:10:26.274 "allow_any_host": true, 00:10:26.274 "hosts": [], 00:10:26.274 "serial_number": "SPDK00000000000003", 00:10:26.274 "model_number": "SPDK bdev Controller", 00:10:26.274 "max_namespaces": 32, 00:10:26.274 "min_cntlid": 1, 00:10:26.274 "max_cntlid": 65519, 00:10:26.274 "namespaces": [ 00:10:26.274 { 00:10:26.274 "nsid": 1, 00:10:26.274 "bdev_name": "Null3", 00:10:26.274 "name": "Null3", 00:10:26.274 "nguid": "DDB049DEA9794256BFCEA6DC37585C0E", 00:10:26.274 "uuid": "ddb049de-a979-4256-bfce-a6dc37585c0e" 00:10:26.274 } 00:10:26.274 ] 00:10:26.274 }, 00:10:26.274 { 00:10:26.274 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:10:26.274 "subtype": "NVMe", 00:10:26.274 "listen_addresses": [ 00:10:26.274 { 00:10:26.274 "trtype": "TCP", 00:10:26.274 "adrfam": "IPv4", 00:10:26.274 "traddr": "10.0.0.2", 00:10:26.274 "trsvcid": "4420" 00:10:26.274 } 00:10:26.274 ], 00:10:26.274 "allow_any_host": true, 00:10:26.274 "hosts": [], 00:10:26.274 "serial_number": "SPDK00000000000004", 00:10:26.274 "model_number": "SPDK bdev Controller", 00:10:26.274 "max_namespaces": 32, 00:10:26.274 "min_cntlid": 1, 00:10:26.274 "max_cntlid": 65519, 00:10:26.274 "namespaces": [ 00:10:26.274 { 00:10:26.274 "nsid": 1, 00:10:26.274 "bdev_name": "Null4", 00:10:26.274 "name": "Null4", 00:10:26.274 "nguid": "53087C58930847C6B5A9BA1D07209A77", 00:10:26.274 "uuid": "53087c58-9308-47c6-b5a9-ba1d07209a77" 00:10:26.274 } 00:10:26.274 ] 00:10:26.274 } 00:10:26.275 ] 00:10:26.275 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.275 
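The `nvmf_get_subsystems` reply above is plain JSON, so the shape the test relies on (one namespace per NVMe subsystem, each listening on TCP port 4420) can be checked mechanically. A minimal, self-contained sketch follows; the sample data is abridged from the log output above (only the discovery subsystem and `cnode1` are reproduced), and nothing here actually calls SPDK:

```python
import json

# Abridged sample of the nvmf_get_subsystems reply captured in the log above;
# only the discovery subsystem and cnode1 are included for brevity.
sample = json.loads("""
[
  {
    "nqn": "nqn.2014-08.org.nvmexpress.discovery",
    "subtype": "Discovery",
    "listen_addresses": [
      {"trtype": "TCP", "adrfam": "IPv4", "traddr": "10.0.0.2", "trsvcid": "4420"}
    ],
    "allow_any_host": true,
    "hosts": []
  },
  {
    "nqn": "nqn.2016-06.io.spdk:cnode1",
    "subtype": "NVMe",
    "listen_addresses": [
      {"trtype": "TCP", "adrfam": "IPv4", "traddr": "10.0.0.2", "trsvcid": "4420"}
    ],
    "allow_any_host": true,
    "hosts": [],
    "serial_number": "SPDK00000000000001",
    "namespaces": [
      {"nsid": 1, "bdev_name": "Null1", "name": "Null1"}
    ]
  }
]
""")

# Every NVMe subsystem created by the test should expose exactly one
# namespace and listen on TCP service id 4420.
nvme_subsystems = [s for s in sample if s["subtype"] == "NVMe"]
for sub in nvme_subsystems:
    assert len(sub["namespaces"]) == 1
    assert any(a["trsvcid"] == "4420" for a in sub["listen_addresses"])

print(len(nvme_subsystems))  # prints 1 for this abridged sample
```

In the full reply shown in the log there are four NVMe subsystems (`cnode1`..`cnode4`), each created by the `seq 1 4` loop with a `Null*` bdev as its single namespace.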
20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:10:26.275 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:26.275 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:26.275 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.275 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:26.275 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.275 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:10:26.275 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.275 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:26.275 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.275 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:26.275 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:10:26.275 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.275 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:26.275 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.275 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:10:26.275 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.275 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:26.275 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.275 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:26.275 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:10:26.275 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.275 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:26.275 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.275 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:10:26.275 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.275 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:26.275 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.275 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:26.275 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:10:26.275 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.275 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:10:26.275 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.275 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:10:26.275 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.275 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:26.275 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.275 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:10:26.275 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.275 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:26.275 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.275 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:10:26.275 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:10:26.275 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.275 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:26.275 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.275 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:10:26.275 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:10:26.275 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:10:26.275 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:10:26.275 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:26.275 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:10:26.275 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:26.275 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:10:26.275 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:26.275 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:26.275 rmmod nvme_tcp 00:10:26.275 rmmod nvme_fabrics 00:10:26.275 rmmod nvme_keyring 00:10:26.275 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:26.275 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:10:26.275 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:10:26.275 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 3915883 ']' 00:10:26.275 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 3915883 00:10:26.275 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 3915883 ']' 00:10:26.275 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 3915883 00:10:26.275 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 
00:10:26.275 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:26.275 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3915883 00:10:26.534 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:26.534 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:26.534 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3915883' 00:10:26.534 killing process with pid 3915883 00:10:26.534 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 3915883 00:10:26.534 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 3915883 00:10:26.534 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:26.534 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:26.534 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:26.534 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:10:26.534 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:10:26.534 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:10:26.534 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:26.534 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:26.534 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:10:26.534 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:26.534 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:26.534 20:51:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:29.082 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:29.082 00:10:29.082 real 0m5.775s 00:10:29.082 user 0m4.903s 00:10:29.082 sys 0m1.972s 00:10:29.082 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:29.082 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:29.082 ************************************ 00:10:29.082 END TEST nvmf_target_discovery 00:10:29.082 ************************************ 00:10:29.082 20:51:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:29.082 20:51:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:29.082 20:51:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:29.082 20:51:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:29.082 ************************************ 00:10:29.082 START TEST nvmf_referrals 00:10:29.082 ************************************ 00:10:29.082 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:29.082 * Looking for test storage... 
00:10:29.082 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:29.082 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:29.082 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:10:29.082 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:29.082 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:29.082 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:29.082 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:29.082 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:29.082 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:10:29.082 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:10:29.082 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:10:29.082 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:10:29.082 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:10:29.082 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:10:29.082 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:10:29.082 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:29.082 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:10:29.082 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:10:29.082 20:51:19 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:29.082 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:29.082 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:10:29.082 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:10:29.082 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:29.082 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:10:29.082 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:10:29.082 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:10:29.082 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:10:29.082 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:29.082 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:10:29.082 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:10:29.082 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:29.082 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:29.082 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:10:29.082 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:29.082 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:29.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.083 
--rc genhtml_branch_coverage=1 00:10:29.083 --rc genhtml_function_coverage=1 00:10:29.083 --rc genhtml_legend=1 00:10:29.083 --rc geninfo_all_blocks=1 00:10:29.083 --rc geninfo_unexecuted_blocks=1 00:10:29.083 00:10:29.083 ' 00:10:29.083 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:29.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.083 --rc genhtml_branch_coverage=1 00:10:29.083 --rc genhtml_function_coverage=1 00:10:29.083 --rc genhtml_legend=1 00:10:29.083 --rc geninfo_all_blocks=1 00:10:29.083 --rc geninfo_unexecuted_blocks=1 00:10:29.083 00:10:29.083 ' 00:10:29.083 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:29.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.083 --rc genhtml_branch_coverage=1 00:10:29.083 --rc genhtml_function_coverage=1 00:10:29.083 --rc genhtml_legend=1 00:10:29.083 --rc geninfo_all_blocks=1 00:10:29.083 --rc geninfo_unexecuted_blocks=1 00:10:29.083 00:10:29.083 ' 00:10:29.083 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:29.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.083 --rc genhtml_branch_coverage=1 00:10:29.083 --rc genhtml_function_coverage=1 00:10:29.083 --rc genhtml_legend=1 00:10:29.083 --rc geninfo_all_blocks=1 00:10:29.083 --rc geninfo_unexecuted_blocks=1 00:10:29.083 00:10:29.083 ' 00:10:29.083 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:29.083 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:10:29.083 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:29.083 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:29.083 
20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:29.083 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:29.083 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:29.083 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:29.083 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:29.083 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:29.083 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:29.083 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:29.083 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:29.083 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:29.083 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:29.083 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:29.083 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:29.083 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:29.083 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:29.083 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:10:29.083 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:10:29.083 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:10:29.083 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:10:29.083 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:29.083 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:29.083 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:29.083 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH
00:10:29.083 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:29.083 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0
00:10:29.083 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:10:29.083 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:10:29.083 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:10:29.083 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:10:29.083 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:10:29.083 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:10:29.083 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:10:29.083 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:10:29.083 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:10:29.083 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0
00:10:29.083 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2
00:10:29.083 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3
00:10:29.083 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4
00:10:29.083 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430
00:10:29.083 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery
00:10:29.083 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1
00:10:29.083 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit
00:10:29.083 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:10:29.083 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:10:29.083 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs
00:10:29.083 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no
00:10:29.083 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns
00:10:29.083 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:29.083 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:29.083 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:29.083 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:10:29.083 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:10:29.083 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable
00:10:29.083 20:51:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:31.015 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:10:31.015 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=()
00:10:31.015 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs
00:10:31.015 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=()
00:10:31.015 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:10:31.015 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=()
00:10:31.015 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers
00:10:31.015 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=()
00:10:31.015 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs
00:10:31.015 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=()
00:10:31.015 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810
00:10:31.015 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=()
00:10:31.015 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722
00:10:31.015 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=()
00:10:31.015 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx
00:10:31.015 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:10:31.015 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:10:31.015 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:10:31.015 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:10:31.015 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:10:31.015 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:10:31.015 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:10:31.015 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:10:31.015 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:10:31.015 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:10:31.015 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:10:31.015 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:10:31.015 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:10:31.015 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:10:31.015 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:10:31.015 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:10:31.015 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:10:31.015 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:10:31.015 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:10:31.015 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:10:31.015 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:10:31.015 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:10:31.015 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:10:31.015 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:10:31.015 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:10:31.015 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:10:31.015 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:10:31.015 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:10:31.015 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:10:31.015 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:10:31.015 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:10:31.015 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:10:31.015 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:10:31.015 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:10:31.015 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:10:31.015 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:10:31.015 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:10:31.015 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:10:31.015 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:10:31.015 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:10:31.015 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:10:31.015 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]]
00:10:31.015 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:10:31.015 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:10:31.015 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:10:31.015 Found net devices under 0000:0a:00.0: cvl_0_0
00:10:31.015 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:10:31.016 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:10:31.016 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:10:31.016 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:10:31.016 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:10:31.016 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]]
00:10:31.016 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:10:31.016 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:10:31.016 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:10:31.016 Found net devices under 0000:0a:00.1: cvl_0_1
00:10:31.016 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:10:31.016 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:10:31.016 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes
00:10:31.016 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:10:31.016 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:10:31.016 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:10:31.016 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:10:31.016 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:10:31.016 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:10:31.016 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:10:31.016 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:10:31.016 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:10:31.016 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:10:31.016 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:10:31.016 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:10:31.016 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:10:31.016 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:10:31.016 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:10:31.016 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:10:31.016 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:10:31.016 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:10:31.016 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:10:31.016 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:10:31.016 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:10:31.016 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:10:31.016 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:10:31.016 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:10:31.016 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:10:31.016 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:10:31.016 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:10:31.016 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms
00:10:31.016 
00:10:31.016 --- 10.0.0.2 ping statistics ---
00:10:31.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:31.016 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms
00:10:31.016 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:10:31.299 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:10:31.299 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms
00:10:31.299 
00:10:31.299 --- 10.0.0.1 ping statistics ---
00:10:31.299 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:31.299 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms
00:10:31.299 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:31.299 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0
00:10:31.299 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:10:31.299 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:31.299 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:10:31.299 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:10:31.299 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:31.299 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:10:31.299 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:10:31.299 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF
00:10:31.299 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:10:31.299 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable
00:10:31.299 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:31.299 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=3918000
00:10:31.299 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:10:31.299 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 3918000
00:10:31.299 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 3918000 ']'
00:10:31.299 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:31.299 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:31.299 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:31.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:31.299 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:31.299 20:51:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:31.299 [2024-11-26 20:51:22.018139] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization...
00:10:31.299 [2024-11-26 20:51:22.018232] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:31.299 [2024-11-26 20:51:22.092021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:10:31.299 [2024-11-26 20:51:22.152809] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:10:31.299 [2024-11-26 20:51:22.152879] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:10:31.299 [2024-11-26 20:51:22.152908] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:10:31.299 [2024-11-26 20:51:22.152919] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:10:31.299 [2024-11-26 20:51:22.152929] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:10:31.299 [2024-11-26 20:51:22.154466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:10:31.299 [2024-11-26 20:51:22.154526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:10:31.299 [2024-11-26 20:51:22.154590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:10:31.299 [2024-11-26 20:51:22.154593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:31.560 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:31.560 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0
00:10:31.560 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:10:31.560 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable
00:10:31.560 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:31.560 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:10:31.560 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:10:31.560 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:31.560 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:31.560 [2024-11-26 20:51:22.299234] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:10:31.560 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:31.560 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
00:10:31.560 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:31.560 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:31.560 [2024-11-26 20:51:22.324887] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 ***
00:10:31.560 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:31.560 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
00:10:31.560 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:31.560 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:31.560 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:31.560 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430
00:10:31.560 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:31.560 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:31.560 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:31.560 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430
00:10:31.560 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:31.560 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:31.560 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:31.560 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals
00:10:31.560 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length
00:10:31.560 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:31.560 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:31.560 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:31.560 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 ))
00:10:31.560 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc
00:10:31.560 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]]
00:10:31.560 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals
00:10:31.560 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr'
00:10:31.560 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:31.560 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:31.560 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort
00:10:31.560 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:31.560 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4
00:10:31.560 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]]
00:10:31.560 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme
00:10:31.560 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]]
00:10:31.560 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]]
00:10:31.560 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json
00:10:31.560 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
00:10:31.560 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort
00:10:31.824 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4
00:10:31.824 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]]
00:10:31.824 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430
00:10:31.824 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:31.824 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:31.824 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:31.824 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430
00:10:31.824 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:31.824 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:31.824 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:31.824 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430
00:10:31.824 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:31.824 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:31.824 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:31.824 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals
00:10:31.824 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:31.824 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length
00:10:31.824 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:31.824 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:31.824 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 ))
00:10:31.824 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme
00:10:31.824 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]]
00:10:31.824 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]]
00:10:31.824 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json
00:10:31.825 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
00:10:31.825 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort
00:10:32.082 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo
00:10:32.082 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]]
00:10:32.082 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery
00:10:32.082 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:32.082 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:32.082 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:32.082 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
00:10:32.082 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:32.082 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:32.082 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:32.082 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc
00:10:32.082 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]]
00:10:32.082 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals
00:10:32.082 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr'
00:10:32.082 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:32.082 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:32.082 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort
00:10:32.082 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:32.082 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2
00:10:32.082 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]]
00:10:32.082 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme
00:10:32.082 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]]
00:10:32.082 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]]
00:10:32.082 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json
00:10:32.082 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
00:10:32.082 20:51:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort
00:10:32.339 20:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2
00:10:32.339 20:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]]
00:10:32.339 20:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem'
00:10:32.339 20:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn
00:10:32.339 20:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem'
00:10:32.339 20:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json
00:10:32.339 20:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")'
00:10:32.597 20:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]]
00:10:32.597 20:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral'
00:10:32.597 20:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn
00:10:32.597 20:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral'
00:10:32.597 20:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json
00:10:32.597 20:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")'
00:10:32.597 20:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]]
00:10:32.597 20:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
00:10:32.597 20:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:32.597 20:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:32.597 20:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:32.597 20:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc
00:10:32.597 20:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]]
00:10:32.597 20:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals
00:10:32.597 20:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr'
00:10:32.597 20:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:32.597 20:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort
00:10:32.597 20:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:32.597 20:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:32.855 20:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2
00:10:32.855 20:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]]
00:10:32.855 20:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:10:32.855 20:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:32.855 20:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:32.855 20:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:32.855 20:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:32.855 20:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:32.855 20:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:10:32.855 20:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:32.855 20:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:10:32.855 20:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:10:32.855 20:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:32.855 20:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:32.855 20:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:33.113 20:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:10:33.113 20:51:23 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:10:33.113 20:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:10:33.113 20:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:33.113 20:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:33.113 20:51:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:33.113 20:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:33.113 20:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:10:33.113 20:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.113 20:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:33.370 20:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.370 20:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:33.370 20:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.370 20:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:10:33.370 20:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:10:33.370 20:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.370 20:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:10:33.370 20:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:10:33.370 20:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:33.370 20:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:33.370 20:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:33.370 20:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:33.370 20:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:33.628 20:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:33.628 20:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:10:33.628 20:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:10:33.628 20:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:10:33.628 20:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:33.628 20:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:10:33.628 20:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:33.628 20:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:10:33.628 20:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:33.628 20:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:33.628 rmmod nvme_tcp 00:10:33.628 rmmod nvme_fabrics 00:10:33.628 rmmod nvme_keyring 00:10:33.628 20:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:33.628 20:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:10:33.628 20:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:10:33.628 20:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 3918000 ']' 00:10:33.628 20:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 3918000 00:10:33.628 20:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 3918000 ']' 00:10:33.628 20:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 3918000 00:10:33.628 20:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:10:33.628 20:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:33.628 20:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3918000 00:10:33.628 20:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:33.628 20:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:33.628 20:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3918000' 00:10:33.628 killing process with pid 3918000 00:10:33.628 20:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@973 -- # kill 3918000 00:10:33.628 20:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 3918000 00:10:33.885 20:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:33.885 20:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:33.885 20:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:33.886 20:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:10:33.886 20:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:10:33.886 20:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:33.886 20:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:10:33.886 20:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:33.886 20:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:33.886 20:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:33.886 20:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:33.886 20:51:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:36.420 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:36.420 00:10:36.420 real 0m7.179s 00:10:36.420 user 0m11.480s 00:10:36.420 sys 0m2.324s 00:10:36.420 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:36.420 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:36.420 
************************************ 00:10:36.420 END TEST nvmf_referrals 00:10:36.420 ************************************ 00:10:36.420 20:51:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:36.420 20:51:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:36.420 20:51:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:36.420 20:51:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:36.420 ************************************ 00:10:36.420 START TEST nvmf_connect_disconnect 00:10:36.420 ************************************ 00:10:36.420 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:36.420 * Looking for test storage... 
00:10:36.420 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:36.420 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:36.420 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:10:36.420 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:36.420 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:36.420 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:36.420 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:36.420 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:36.420 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:10:36.420 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:10:36.420 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:10:36.420 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:10:36.420 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:10:36.420 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:10:36.420 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:10:36.420 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:36.420 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:10:36.421 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:10:36.421 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:36.421 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:36.421 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:10:36.421 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:10:36.421 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:36.421 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:10:36.421 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:10:36.421 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:10:36.421 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:10:36.421 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:36.421 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:10:36.421 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:10:36.421 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:36.421 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:36.421 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:10:36.421 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:36.421 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:36.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.421 --rc genhtml_branch_coverage=1 00:10:36.421 --rc genhtml_function_coverage=1 00:10:36.421 --rc genhtml_legend=1 00:10:36.421 --rc geninfo_all_blocks=1 00:10:36.421 --rc geninfo_unexecuted_blocks=1 00:10:36.421 00:10:36.421 ' 00:10:36.421 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:36.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.421 --rc genhtml_branch_coverage=1 00:10:36.421 --rc genhtml_function_coverage=1 00:10:36.421 --rc genhtml_legend=1 00:10:36.421 --rc geninfo_all_blocks=1 00:10:36.421 --rc geninfo_unexecuted_blocks=1 00:10:36.421 00:10:36.421 ' 00:10:36.421 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:36.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.421 --rc genhtml_branch_coverage=1 00:10:36.421 --rc genhtml_function_coverage=1 00:10:36.421 --rc genhtml_legend=1 00:10:36.421 --rc geninfo_all_blocks=1 00:10:36.421 --rc geninfo_unexecuted_blocks=1 00:10:36.421 00:10:36.421 ' 00:10:36.421 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:36.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.421 --rc genhtml_branch_coverage=1 00:10:36.421 --rc genhtml_function_coverage=1 00:10:36.421 --rc genhtml_legend=1 00:10:36.421 --rc geninfo_all_blocks=1 00:10:36.421 --rc geninfo_unexecuted_blocks=1 00:10:36.421 00:10:36.421 ' 00:10:36.421 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:36.421 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:10:36.421 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:36.421 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:36.421 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:36.421 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:36.421 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:36.421 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:36.421 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:36.421 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:36.421 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:36.421 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:36.421 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:36.421 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:36.421 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:36.421 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:10:36.421 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:36.421 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:36.421 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:36.421 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:10:36.421 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:36.421 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:36.421 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:36.421 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.421 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.421 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.421 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:10:36.421 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.421 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:10:36.421 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:36.421 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:36.421 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:36.421 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:36.421 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:36.421 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:36.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:36.421 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:36.421 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:36.421 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:36.421 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:36.421 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:36.421 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:10:36.421 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:36.421 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:36.421 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:36.421 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:36.421 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:36.421 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:36.421 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:36.422 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:36.422 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:36.422 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:36.422 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:10:36.422 20:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:38.328 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:38.328 20:51:28 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:10:38.328 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:38.328 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:38.328 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:38.328 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:38.328 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:38.328 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:10:38.328 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:38.328 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:10:38.328 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:10:38.328 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:10:38.328 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:10:38.328 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:10:38.328 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:10:38.328 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:38.328 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:38.328 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:38.328 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:38.328 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:38.328 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:38.328 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:38.328 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:38.328 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:38.329 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:38.329 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:38.329 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:38.329 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:38.329 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:38.329 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:38.329 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:38.329 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:38.329 20:51:28 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:38.329 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:38.329 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:38.329 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:38.329 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:38.329 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:38.329 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:38.329 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:38.329 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:38.329 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:38.329 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:38.329 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:38.329 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:38.329 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:38.329 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:38.329 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:38.329 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:38.329 20:51:28 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:38.329 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:38.329 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:38.329 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:38.329 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:38.329 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:38.329 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:38.329 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:38.329 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:38.329 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:38.329 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:38.329 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:38.329 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:38.329 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:38.329 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:38.329 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:38.330 20:51:28 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:38.330 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:38.330 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:38.330 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:38.330 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:38.330 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:38.330 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:38.330 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:38.330 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:10:38.330 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:38.330 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:38.330 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:38.330 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:38.330 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:38.330 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:38.330 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:38.330 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:38.330 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:38.330 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:38.330 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:38.330 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:38.330 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:38.330 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:38.330 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:38.330 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:38.331 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:38.331 20:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:38.331 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:38.331 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:38.331 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:38.331 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:38.331 20:51:29 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:38.331 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:38.331 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:38.331 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:38.331 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:38.331 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:10:38.331 00:10:38.331 --- 10.0.0.2 ping statistics --- 00:10:38.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:38.331 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:10:38.331 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:38.331 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:38.331 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:10:38.331 00:10:38.331 --- 10.0.0.1 ping statistics --- 00:10:38.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:38.331 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:10:38.331 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:38.331 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:10:38.331 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:38.331 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:38.332 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:38.332 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:38.332 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:38.332 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:38.332 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:38.332 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:10:38.332 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:38.332 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:38.332 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:38.332 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
nvmfpid=3920400 00:10:38.332 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:38.332 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 3920400 00:10:38.332 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 3920400 ']' 00:10:38.332 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:38.332 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:38.332 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:38.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:38.332 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:38.332 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:38.332 [2024-11-26 20:51:29.141711] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:10:38.332 [2024-11-26 20:51:29.141787] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:38.332 [2024-11-26 20:51:29.222381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:38.591 [2024-11-26 20:51:29.290573] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:10:38.591 [2024-11-26 20:51:29.290641] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:38.591 [2024-11-26 20:51:29.290657] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:38.591 [2024-11-26 20:51:29.290670] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:38.591 [2024-11-26 20:51:29.290681] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:38.591 [2024-11-26 20:51:29.292456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:38.591 [2024-11-26 20:51:29.292484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:38.591 [2024-11-26 20:51:29.292543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:38.591 [2024-11-26 20:51:29.292546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.591 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:38.591 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:10:38.591 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:38.591 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:38.591 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:38.591 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:38.591 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:38.591 20:51:29 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.591 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:38.591 [2024-11-26 20:51:29.452494] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:38.591 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.591 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:10:38.591 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.591 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:38.591 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.592 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:10:38.592 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:38.592 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.592 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:38.592 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.592 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:38.592 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.592 20:51:29 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:38.592 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.592 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:38.592 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.592 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:38.849 [2024-11-26 20:51:29.530859] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:38.849 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.849 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:10:38.849 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:10:38.849 20:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:10:41.378 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:44.657 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.184 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:49.709 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:52.990 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:52.990 20:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:10:52.990 20:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:10:52.990 20:51:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:52.990 20:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:10:52.990 20:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:52.990 20:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:10:52.990 20:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:52.990 20:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:52.990 rmmod nvme_tcp 00:10:52.990 rmmod nvme_fabrics 00:10:52.990 rmmod nvme_keyring 00:10:52.990 20:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:52.990 20:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:10:52.990 20:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:10:52.990 20:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 3920400 ']' 00:10:52.990 20:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 3920400 00:10:52.990 20:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 3920400 ']' 00:10:52.990 20:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 3920400 00:10:52.990 20:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:10:52.990 20:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:52.990 20:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3920400 
00:10:52.990 20:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:52.990 20:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:52.990 20:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3920400' 00:10:52.990 killing process with pid 3920400 00:10:52.990 20:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 3920400 00:10:52.990 20:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 3920400 00:10:52.990 20:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:52.990 20:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:52.990 20:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:52.990 20:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:10:52.990 20:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:10:52.990 20:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:52.990 20:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:10:52.990 20:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:52.991 20:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:52.991 20:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:52.991 20:51:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:52.991 20:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:54.898 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:54.898 00:10:54.898 real 0m18.832s 00:10:54.898 user 0m56.626s 00:10:54.898 sys 0m3.385s 00:10:54.898 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:54.898 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:54.898 ************************************ 00:10:54.898 END TEST nvmf_connect_disconnect 00:10:54.898 ************************************ 00:10:54.898 20:51:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:10:54.898 20:51:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:54.898 20:51:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:54.898 20:51:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:54.898 ************************************ 00:10:54.898 START TEST nvmf_multitarget 00:10:54.898 ************************************ 00:10:54.898 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:10:54.898 * Looking for test storage... 
00:10:54.898 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:54.898 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:54.898 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:10:54.898 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:54.898 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:54.898 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:54.898 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:54.898 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:54.898 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:10:54.898 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:10:54.898 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:10:54.898 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:10:54.898 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:10:54.898 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:10:54.898 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:10:54.898 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:54.898 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:10:54.898 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:10:54.898 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:54.898 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:54.898 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:10:54.898 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:10:54.898 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:54.898 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:10:54.898 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:10:55.157 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:10:55.157 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:10:55.157 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:55.157 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:10:55.157 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:10:55.157 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:55.157 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:55.157 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:10:55.157 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:55.157 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:55.157 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.157 --rc genhtml_branch_coverage=1 00:10:55.157 --rc genhtml_function_coverage=1 00:10:55.157 --rc genhtml_legend=1 00:10:55.157 --rc geninfo_all_blocks=1 00:10:55.157 --rc geninfo_unexecuted_blocks=1 00:10:55.157 00:10:55.157 ' 00:10:55.157 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:55.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.157 --rc genhtml_branch_coverage=1 00:10:55.157 --rc genhtml_function_coverage=1 00:10:55.157 --rc genhtml_legend=1 00:10:55.157 --rc geninfo_all_blocks=1 00:10:55.157 --rc geninfo_unexecuted_blocks=1 00:10:55.157 00:10:55.157 ' 00:10:55.157 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:55.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.157 --rc genhtml_branch_coverage=1 00:10:55.157 --rc genhtml_function_coverage=1 00:10:55.157 --rc genhtml_legend=1 00:10:55.157 --rc geninfo_all_blocks=1 00:10:55.157 --rc geninfo_unexecuted_blocks=1 00:10:55.157 00:10:55.157 ' 00:10:55.157 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:55.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.157 --rc genhtml_branch_coverage=1 00:10:55.157 --rc genhtml_function_coverage=1 00:10:55.157 --rc genhtml_legend=1 00:10:55.157 --rc geninfo_all_blocks=1 00:10:55.157 --rc geninfo_unexecuted_blocks=1 00:10:55.157 00:10:55.157 ' 00:10:55.157 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:55.157 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:10:55.157 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:55.157 20:51:45 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:55.157 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:55.157 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:55.157 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:55.157 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:55.157 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:55.157 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:55.157 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:55.157 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:55.157 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:55.157 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:55.157 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:55.157 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:55.157 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:55.157 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:55.157 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:55.157 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:10:55.157 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:55.157 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:55.157 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:55.157 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.157 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.157 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.157 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:10:55.157 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.157 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:10:55.157 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:55.157 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:55.157 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:55.157 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:10:55.157 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:55.157 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:55.158 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:55.158 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:55.158 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:55.158 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:55.158 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:10:55.158 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:10:55.158 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:55.158 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:55.158 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:55.158 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:55.158 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:55.158 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:55.158 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:55.158 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:55.158 20:51:45 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:55.158 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:55.158 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:10:55.158 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:57.060 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:57.060 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:10:57.060 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:57.060 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:57.060 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:57.060 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:57.060 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:57.060 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:10:57.060 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:57.060 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:10:57.060 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:10:57.060 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:10:57.060 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:10:57.060 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:10:57.060 20:51:47 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:10:57.060 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:57.060 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:57.060 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:57.060 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:57.060 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:57.060 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:57.060 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:57.060 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:57.060 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:57.060 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:57.060 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:57.060 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:57.060 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:57.060 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:57.060 20:51:47 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:57.060 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:57.060 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:57.061 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:57.061 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:57.061 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:57.061 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:57.061 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:57.061 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:57.061 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:57.061 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:57.061 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:57.061 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:57.061 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:57.061 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:57.061 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:57.061 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:57.061 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:57.061 20:51:47 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:57.061 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:57.061 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:57.061 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:57.061 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:57.061 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:57.061 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:57.061 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:57.061 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:57.061 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:57.061 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:57.061 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:57.061 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:57.061 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:57.061 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:57.061 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:57.061 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:57.061 
20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:57.061 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:57.061 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:57.061 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:57.061 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:57.061 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:57.061 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:57.061 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:57.061 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:57.061 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:10:57.061 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:57.061 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:57.061 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:57.061 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:57.061 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:57.061 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:57.061 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:57.061 20:51:47 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:57.061 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:57.061 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:57.061 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:57.061 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:57.061 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:57.061 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:57.061 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:57.061 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:57.061 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:57.061 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:57.061 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:57.061 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:57.061 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:57.061 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:57.061 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:10:57.061 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:57.061 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:57.061 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:57.061 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:57.061 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:10:57.061 00:10:57.061 --- 10.0.0.2 ping statistics --- 00:10:57.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:57.061 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:10:57.061 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:57.061 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:57.061 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:10:57.061 00:10:57.061 --- 10.0.0.1 ping statistics --- 00:10:57.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:57.061 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:10:57.061 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:57.061 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:10:57.061 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:57.061 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:57.061 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:57.061 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:57.061 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:57.061 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:57.061 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:57.061 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:10:57.061 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:57.061 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:57.061 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:57.320 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=3924042 00:10:57.320 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:57.320 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 3924042 00:10:57.320 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 3924042 ']' 00:10:57.320 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:57.320 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:57.320 20:51:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:57.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:57.320 20:51:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:57.320 20:51:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:57.320 [2024-11-26 20:51:48.051358] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:10:57.320 [2024-11-26 20:51:48.051440] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:57.320 [2024-11-26 20:51:48.130303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:57.320 [2024-11-26 20:51:48.196410] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:57.320 [2024-11-26 20:51:48.196471] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:57.320 [2024-11-26 20:51:48.196487] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:57.320 [2024-11-26 20:51:48.196500] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:57.320 [2024-11-26 20:51:48.196512] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:57.320 [2024-11-26 20:51:48.198180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:57.320 [2024-11-26 20:51:48.198216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:57.320 [2024-11-26 20:51:48.198251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:57.320 [2024-11-26 20:51:48.198255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.578 20:51:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:57.578 20:51:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:10:57.578 20:51:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:57.578 20:51:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:57.578 20:51:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:57.578 20:51:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:57.578 20:51:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:10:57.578 20:51:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:57.578 20:51:48 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:10:57.578 20:51:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:10:57.578 20:51:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:10:57.836 "nvmf_tgt_1" 00:10:57.836 20:51:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:10:57.836 "nvmf_tgt_2" 00:10:57.836 20:51:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:57.836 20:51:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:10:58.093 20:51:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:10:58.093 20:51:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:10:58.093 true 00:10:58.093 20:51:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:10:58.351 true 00:10:58.351 20:51:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:58.351 20:51:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:10:58.351 20:51:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:10:58.351 20:51:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:58.351 20:51:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:10:58.351 20:51:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:58.351 20:51:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:10:58.351 20:51:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:58.351 20:51:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:10:58.351 20:51:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:58.351 20:51:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:58.351 rmmod nvme_tcp 00:10:58.351 rmmod nvme_fabrics 00:10:58.351 rmmod nvme_keyring 00:10:58.351 20:51:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:58.351 20:51:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:10:58.351 20:51:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:10:58.351 20:51:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 3924042 ']' 00:10:58.351 20:51:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 3924042 00:10:58.351 20:51:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 3924042 ']' 00:10:58.351 20:51:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 3924042 00:10:58.351 20:51:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:10:58.351 20:51:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:58.351 20:51:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3924042 00:10:58.351 20:51:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:58.351 20:51:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:58.351 20:51:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3924042' 00:10:58.352 killing process with pid 3924042 00:10:58.352 20:51:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 3924042 00:10:58.352 20:51:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 3924042 00:10:58.609 20:51:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:58.609 20:51:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:58.609 20:51:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:58.609 20:51:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:10:58.609 20:51:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:10:58.609 20:51:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:58.609 20:51:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:10:58.610 20:51:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:58.610 20:51:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:58.610 20:51:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:10:58.610 20:51:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:58.610 20:51:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:01.138 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:01.138 00:11:01.138 real 0m5.872s 00:11:01.138 user 0m6.769s 00:11:01.138 sys 0m1.990s 00:11:01.138 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:01.138 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:01.138 ************************************ 00:11:01.138 END TEST nvmf_multitarget 00:11:01.138 ************************************ 00:11:01.138 20:51:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:01.138 20:51:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:01.138 20:51:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:01.138 20:51:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:01.138 ************************************ 00:11:01.138 START TEST nvmf_rpc 00:11:01.138 ************************************ 00:11:01.138 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:01.138 * Looking for test storage... 
00:11:01.138 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:01.138 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:01.138 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:11:01.138 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:01.138 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:01.138 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:01.138 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:01.138 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:01.138 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:01.138 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:01.138 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:01.138 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:01.138 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:01.138 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:01.138 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:01.138 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:01.138 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:01.138 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:11:01.138 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:01.138 20:51:51 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:01.138 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:11:01.138 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:11:01.138 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:01.138 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:11:01.138 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:01.138 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:11:01.138 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:11:01.138 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:01.138 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:11:01.138 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:01.138 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:01.138 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:01.138 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:11:01.138 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:01.138 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:01.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.138 --rc genhtml_branch_coverage=1 00:11:01.138 --rc genhtml_function_coverage=1 00:11:01.138 --rc genhtml_legend=1 00:11:01.138 --rc geninfo_all_blocks=1 00:11:01.138 --rc geninfo_unexecuted_blocks=1 
00:11:01.138 00:11:01.138 ' 00:11:01.138 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:01.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.138 --rc genhtml_branch_coverage=1 00:11:01.138 --rc genhtml_function_coverage=1 00:11:01.138 --rc genhtml_legend=1 00:11:01.138 --rc geninfo_all_blocks=1 00:11:01.138 --rc geninfo_unexecuted_blocks=1 00:11:01.138 00:11:01.138 ' 00:11:01.138 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:01.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.139 --rc genhtml_branch_coverage=1 00:11:01.139 --rc genhtml_function_coverage=1 00:11:01.139 --rc genhtml_legend=1 00:11:01.139 --rc geninfo_all_blocks=1 00:11:01.139 --rc geninfo_unexecuted_blocks=1 00:11:01.139 00:11:01.139 ' 00:11:01.139 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:01.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.139 --rc genhtml_branch_coverage=1 00:11:01.139 --rc genhtml_function_coverage=1 00:11:01.139 --rc genhtml_legend=1 00:11:01.139 --rc geninfo_all_blocks=1 00:11:01.139 --rc geninfo_unexecuted_blocks=1 00:11:01.139 00:11:01.139 ' 00:11:01.139 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:01.139 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:11:01.139 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:01.139 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:01.139 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:01.139 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:01.139 20:51:51 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:01.139 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:01.139 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:01.139 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:01.139 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:01.139 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:01.139 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:01.139 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:01.139 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:01.139 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:01.139 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:01.139 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:01.139 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:01.139 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:11:01.139 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:01.139 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:01.139 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:01.139 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.139 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.139 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.139 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:11:01.139 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.139 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:11:01.139 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:01.139 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:01.139 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:01.139 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:01.139 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:01.139 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:01.139 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:01.139 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:01.139 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:01.139 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:01.139 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:11:01.139 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:11:01.139 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:01.139 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:01.139 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:01.139 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:01.139 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:01.139 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:01.139 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:01.139 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:01.139 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:01.139 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:01.139 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:11:01.139 20:51:51 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:03.039 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:03.039 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:11:03.039 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:03.039 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:03.039 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:03.039 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:03.039 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:03.039 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:11:03.039 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:03.039 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:11:03.039 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:11:03.039 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:11:03.039 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:11:03.039 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:11:03.039 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:11:03.039 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:03.040 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:03.040 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:03.040 
20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:03.040 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:03.040 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:03.040 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:03.040 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:03.040 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:03.040 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:03.040 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:03.040 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:03.040 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:03.040 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:03.040 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:03.040 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:03.040 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:03.040 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:03.040 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:03.040 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 
(0x8086 - 0x159b)' 00:11:03.040 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:03.040 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:03.040 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:03.040 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:03.040 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:03.040 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:03.040 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:03.040 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:03.040 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:03.040 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:03.040 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:03.040 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:03.040 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:03.040 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:03.040 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:03.040 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:03.040 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:03.040 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:03.040 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:11:03.040 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:03.040 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:03.040 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:03.040 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:03.040 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:03.040 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:03.040 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:03.040 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:03.040 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:03.040 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:03.040 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:03.040 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:03.040 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:03.040 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:03.040 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:03.040 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:03.040 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:03.040 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:03.040 20:51:53 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:03.040 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:11:03.040 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:03.040 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:03.040 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:03.040 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:03.040 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:03.040 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:03.040 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:03.040 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:03.040 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:03.040 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:03.040 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:03.040 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:03.040 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:03.040 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:03.040 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:03.040 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:03.040 
20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:03.040 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:03.040 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:03.040 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:03.299 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:03.299 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:03.299 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:03.299 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:03.299 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:03.299 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:03.299 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:03.299 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.327 ms 00:11:03.299 00:11:03.299 --- 10.0.0.2 ping statistics --- 00:11:03.299 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.299 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:11:03.300 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:03.300 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:03.300 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:11:03.300 00:11:03.300 --- 10.0.0.1 ping statistics --- 00:11:03.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.300 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:11:03.300 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:03.300 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:11:03.300 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:03.300 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:03.300 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:03.300 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:03.300 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:03.300 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:03.300 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:03.300 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:11:03.300 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:03.300 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:03.300 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:03.300 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=3926263 00:11:03.300 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:03.300 
20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 3926263 00:11:03.300 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 3926263 ']' 00:11:03.300 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:03.300 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:03.300 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:03.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:03.300 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:03.300 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:03.300 [2024-11-26 20:51:54.114761] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:11:03.300 [2024-11-26 20:51:54.114846] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:03.300 [2024-11-26 20:51:54.189030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:03.559 [2024-11-26 20:51:54.249555] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:03.559 [2024-11-26 20:51:54.249613] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:03.559 [2024-11-26 20:51:54.249640] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:03.559 [2024-11-26 20:51:54.249651] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:11:03.559 [2024-11-26 20:51:54.249660] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:03.559 [2024-11-26 20:51:54.251335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:03.559 [2024-11-26 20:51:54.251397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:03.559 [2024-11-26 20:51:54.251446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:03.559 [2024-11-26 20:51:54.251449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.559 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:03.559 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:03.559 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:03.559 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:03.559 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:03.559 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:03.559 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:11:03.559 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.559 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:03.559 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.559 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:11:03.559 "tick_rate": 2700000000, 00:11:03.559 "poll_groups": [ 00:11:03.559 { 00:11:03.559 "name": "nvmf_tgt_poll_group_000", 00:11:03.559 "admin_qpairs": 0, 00:11:03.559 "io_qpairs": 0, 00:11:03.559 
"current_admin_qpairs": 0, 00:11:03.559 "current_io_qpairs": 0, 00:11:03.559 "pending_bdev_io": 0, 00:11:03.559 "completed_nvme_io": 0, 00:11:03.559 "transports": [] 00:11:03.559 }, 00:11:03.559 { 00:11:03.559 "name": "nvmf_tgt_poll_group_001", 00:11:03.559 "admin_qpairs": 0, 00:11:03.559 "io_qpairs": 0, 00:11:03.559 "current_admin_qpairs": 0, 00:11:03.559 "current_io_qpairs": 0, 00:11:03.559 "pending_bdev_io": 0, 00:11:03.559 "completed_nvme_io": 0, 00:11:03.559 "transports": [] 00:11:03.559 }, 00:11:03.559 { 00:11:03.559 "name": "nvmf_tgt_poll_group_002", 00:11:03.559 "admin_qpairs": 0, 00:11:03.559 "io_qpairs": 0, 00:11:03.559 "current_admin_qpairs": 0, 00:11:03.559 "current_io_qpairs": 0, 00:11:03.559 "pending_bdev_io": 0, 00:11:03.559 "completed_nvme_io": 0, 00:11:03.559 "transports": [] 00:11:03.559 }, 00:11:03.559 { 00:11:03.559 "name": "nvmf_tgt_poll_group_003", 00:11:03.559 "admin_qpairs": 0, 00:11:03.559 "io_qpairs": 0, 00:11:03.559 "current_admin_qpairs": 0, 00:11:03.559 "current_io_qpairs": 0, 00:11:03.559 "pending_bdev_io": 0, 00:11:03.559 "completed_nvme_io": 0, 00:11:03.559 "transports": [] 00:11:03.559 } 00:11:03.559 ] 00:11:03.559 }' 00:11:03.559 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:11:03.559 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:11:03.559 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:11:03.559 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:11:03.559 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:11:03.559 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:11:03.559 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:11:03.559 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # 
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:03.559 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.559 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:03.559 [2024-11-26 20:51:54.479944] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:03.559 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.559 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:11:03.559 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.559 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:03.818 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.818 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:11:03.818 "tick_rate": 2700000000, 00:11:03.818 "poll_groups": [ 00:11:03.818 { 00:11:03.818 "name": "nvmf_tgt_poll_group_000", 00:11:03.818 "admin_qpairs": 0, 00:11:03.818 "io_qpairs": 0, 00:11:03.818 "current_admin_qpairs": 0, 00:11:03.818 "current_io_qpairs": 0, 00:11:03.818 "pending_bdev_io": 0, 00:11:03.818 "completed_nvme_io": 0, 00:11:03.818 "transports": [ 00:11:03.818 { 00:11:03.818 "trtype": "TCP" 00:11:03.818 } 00:11:03.818 ] 00:11:03.818 }, 00:11:03.818 { 00:11:03.818 "name": "nvmf_tgt_poll_group_001", 00:11:03.818 "admin_qpairs": 0, 00:11:03.818 "io_qpairs": 0, 00:11:03.818 "current_admin_qpairs": 0, 00:11:03.818 "current_io_qpairs": 0, 00:11:03.818 "pending_bdev_io": 0, 00:11:03.818 "completed_nvme_io": 0, 00:11:03.818 "transports": [ 00:11:03.818 { 00:11:03.818 "trtype": "TCP" 00:11:03.818 } 00:11:03.818 ] 00:11:03.818 }, 00:11:03.818 { 00:11:03.818 "name": "nvmf_tgt_poll_group_002", 00:11:03.818 "admin_qpairs": 0, 00:11:03.818 "io_qpairs": 0, 00:11:03.818 
"current_admin_qpairs": 0, 00:11:03.818 "current_io_qpairs": 0, 00:11:03.818 "pending_bdev_io": 0, 00:11:03.818 "completed_nvme_io": 0, 00:11:03.818 "transports": [ 00:11:03.818 { 00:11:03.818 "trtype": "TCP" 00:11:03.818 } 00:11:03.818 ] 00:11:03.818 }, 00:11:03.818 { 00:11:03.818 "name": "nvmf_tgt_poll_group_003", 00:11:03.818 "admin_qpairs": 0, 00:11:03.818 "io_qpairs": 0, 00:11:03.818 "current_admin_qpairs": 0, 00:11:03.818 "current_io_qpairs": 0, 00:11:03.818 "pending_bdev_io": 0, 00:11:03.818 "completed_nvme_io": 0, 00:11:03.818 "transports": [ 00:11:03.818 { 00:11:03.818 "trtype": "TCP" 00:11:03.818 } 00:11:03.818 ] 00:11:03.818 } 00:11:03.818 ] 00:11:03.818 }' 00:11:03.818 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:11:03.818 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:03.818 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:03.818 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:03.818 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:11:03.818 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:11:03.818 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:03.818 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:03.818 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:03.818 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:11:03.818 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:11:03.818 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # 
MALLOC_BDEV_SIZE=64 00:11:03.818 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:11:03.818 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:03.818 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.818 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:03.818 Malloc1 00:11:03.818 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.818 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:03.818 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.818 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:03.818 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.818 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:03.818 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.818 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:03.818 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.818 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:11:03.818 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.818 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:03.818 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.819 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:03.819 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.819 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:03.819 [2024-11-26 20:51:54.629634] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:03.819 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.819 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:11:03.819 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:11:03.819 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:11:03.819 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:11:03.819 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:03.819 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:11:03.819 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:03.819 
20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:11:03.819 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:03.819 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:11:03.819 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:11:03.819 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:11:03.819 [2024-11-26 20:51:54.652220] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:11:03.819 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:03.819 could not add new controller: failed to write to nvme-fabrics device 00:11:03.819 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:11:03.819 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:03.819 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:03.819 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:03.819 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:03.819 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.819 20:51:54 
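The `NOT` wrapper seen around `nvme connect` is the suite's way of asserting a command fails: here the connect must be rejected because the host NQN has not been added to `cnode1` and `allow_any_host` is disabled, which is why the `could not add new controller` error and `es=1` are the expected outcome. The full `autotest_common.sh` version also validates the executable path and tracks the exit status, but the essential behavior reduces to inverting the wrapped command's result (a simplified sketch, not the real implementation):

```shell
# Simplified NOT: succeed only when the wrapped command fails.
NOT() {
    if "$@"; then
        return 1   # command unexpectedly succeeded -> test failure
    else
        return 0   # command failed, as the test expects
    fi
}

# Usage mirrors the log: wrap a command whose failure is the pass condition.
NOT false && echo "expected failure observed"
NOT true  || echo "unexpected success caught"
```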
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:03.819 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.819 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:04.385 20:51:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:11:04.385 20:51:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:04.385 20:51:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:04.385 20:51:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:04.385 20:51:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:06.913 20:51:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:06.913 20:51:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:06.913 20:51:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:06.913 20:51:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:06.913 20:51:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:06.913 20:51:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:06.913 20:51:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:06.913 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:06.913 20:51:57 
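After the host is allowed and `nvme connect` succeeds, `waitforserial` polls `lsblk -l -o NAME,SERIAL` every two seconds, up to 15 tries (`(( i++ <= 15 ))` in the log), until a block device with serial `SPDKISFASTANDAWESOME` appears. The retry pattern generalizes; here is the same bounded poll with a deterministic stand-in probe in place of the real `lsblk | grep -c` check:

```shell
# Generic bounded poll: retry a probe until it succeeds or tries run out.
# Mirrors the (( i++ <= 15 )) loop in waitforserial.
wait_for() {
    tries=$1; shift
    i=0
    while [ "$i" -le "$tries" ]; do
        "$@" && return 0
        i=$((i + 1))
        # the real helper sleeps 2s between lsblk probes; omitted here
    done
    return 1
}

# Stand-in probe: fails twice, then succeeds
# (simulates the fabric device taking a moment to show up).
attempts=0
probe() {
    attempts=$((attempts + 1))
    [ "$attempts" -ge 3 ]
}

wait_for 15 probe && found=yes || found=no
```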
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:06.913 20:51:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:06.913 20:51:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:06.913 20:51:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:06.913 20:51:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:06.913 20:51:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:06.913 20:51:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:06.913 20:51:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:06.913 20:51:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.913 20:51:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:06.913 20:51:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.913 20:51:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:06.913 20:51:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:11:06.913 20:51:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp 
-n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:06.913 20:51:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:11:06.913 20:51:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:06.913 20:51:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:11:06.913 20:51:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:06.913 20:51:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:11:06.913 20:51:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:06.913 20:51:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:11:06.913 20:51:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:11:06.913 20:51:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:06.913 [2024-11-26 20:51:57.415514] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:11:06.913 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:06.913 could not add new controller: failed to write to nvme-fabrics device 00:11:06.913 20:51:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:11:06.913 20:51:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:06.913 20:51:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:06.913 20:51:57 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:06.913 20:51:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:11:06.913 20:51:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.913 20:51:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:06.913 20:51:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.913 20:51:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:07.478 20:51:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:11:07.478 20:51:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:07.478 20:51:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:07.478 20:51:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:07.478 20:51:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:09.378 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:09.378 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:09.378 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:09.378 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:09.378 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( 
nvme_devices == nvme_device_counter )) 00:11:09.378 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:09.378 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:09.378 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:09.378 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:09.378 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:09.378 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:09.378 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:09.378 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:09.378 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:09.378 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:09.378 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:09.378 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.378 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:09.378 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.378 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:11:09.378 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:09.378 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 
SPDKISFASTANDAWESOME 00:11:09.378 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.378 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:09.378 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.378 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:09.378 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.378 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:09.378 [2024-11-26 20:52:00.279616] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:09.378 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.378 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:09.378 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.378 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:09.378 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.378 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:09.378 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.378 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:09.378 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.379 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- 
# nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:10.343 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:10.343 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:10.343 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:10.343 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:10.343 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:12.272 20:52:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:12.272 20:52:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:12.272 20:52:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:12.272 20:52:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:12.272 20:52:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:12.272 20:52:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:12.272 20:52:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:12.272 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:12.272 20:52:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:12.272 20:52:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:12.272 20:52:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:12.272 20:52:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:12.272 20:52:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:12.272 20:52:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:12.272 20:52:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:12.272 20:52:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:12.272 20:52:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.272 20:52:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:12.272 20:52:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.272 20:52:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:12.272 20:52:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.272 20:52:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:12.272 20:52:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.272 20:52:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:12.272 20:52:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:12.272 20:52:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.272 20:52:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:12.272 20:52:03 
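The connect/disconnect cycles above detect the device by grepping `lsblk` for the subsystem serial: `grep -c` counts matching devices for `waitforserial`, while `waitforserial_disconnect` uses `grep -q -w` and succeeds once the serial no longer appears. Against canned lsblk-style output (both listings, including the unrelated disk's serial, are hand-written stand-ins, not captured from this run):

```shell
# Stand-in for `lsblk -l -o NAME,SERIAL` while the fabric device is attached.
attached='NAME    SERIAL
sda     S3YJNB0KC12345
nvme0n1 SPDKISFASTANDAWESOME'

# And after `nvme disconnect -n nqn.2016-06.io.spdk:cnode1`.
detached='NAME    SERIAL
sda     S3YJNB0KC12345'

# grep -c: how many devices carry the serial (waitforserial wants >= 1).
nvme_devices=$(echo "$attached" | grep -c SPDKISFASTANDAWESOME)

# grep -q -w: silent whole-word match (waitforserial_disconnect
# loops until this stops succeeding).
echo "$detached" | grep -q -w SPDKISFASTANDAWESOME && gone=no || gone=yes
```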
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.272 20:52:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:12.272 20:52:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.272 20:52:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:12.272 [2024-11-26 20:52:03.146125] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:12.272 20:52:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.272 20:52:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:12.272 20:52:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.272 20:52:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:12.272 20:52:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.272 20:52:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:12.272 20:52:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.272 20:52:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:12.272 20:52:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.272 20:52:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:13.206 20:52:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:13.206 20:52:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:13.206 20:52:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:13.206 20:52:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:13.206 20:52:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:15.104 20:52:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:15.104 20:52:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:15.104 20:52:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:15.104 20:52:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:15.104 20:52:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:15.104 20:52:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:15.104 20:52:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:15.104 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.104 20:52:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:15.104 20:52:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:15.104 20:52:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:15.104 20:52:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:15.104 20:52:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:15.104 20:52:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:15.104 20:52:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:15.104 20:52:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:15.104 20:52:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.104 20:52:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:15.104 20:52:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.104 20:52:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:15.105 20:52:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.105 20:52:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:15.105 20:52:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.105 20:52:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:15.105 20:52:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:15.105 20:52:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.105 20:52:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:15.105 20:52:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.105 20:52:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:11:15.105 20:52:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.105 20:52:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:15.105 [2024-11-26 20:52:05.937097] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:15.105 20:52:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.105 20:52:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:15.105 20:52:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.105 20:52:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:15.105 20:52:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.105 20:52:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:15.105 20:52:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.105 20:52:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:15.105 20:52:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.105 20:52:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:16.038 20:52:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:16.038 20:52:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:16.038 20:52:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 
-- # local nvme_device_counter=1 nvme_devices=0 00:11:16.038 20:52:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:16.038 20:52:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:17.935 20:52:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:17.935 20:52:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:17.935 20:52:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:17.935 20:52:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:17.935 20:52:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:17.935 20:52:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:17.935 20:52:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:17.935 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:17.935 20:52:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:17.935 20:52:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:17.935 20:52:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:17.935 20:52:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:17.935 20:52:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:17.935 20:52:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:17.935 20:52:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1235 -- # return 0 00:11:17.935 20:52:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:17.935 20:52:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.935 20:52:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:17.935 20:52:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.935 20:52:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:17.935 20:52:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.935 20:52:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:17.935 20:52:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.935 20:52:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:17.935 20:52:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:17.935 20:52:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.935 20:52:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:17.935 20:52:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.935 20:52:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:17.935 20:52:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.935 20:52:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:17.935 [2024-11-26 20:52:08.858323] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:17.935 20:52:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.935 20:52:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:17.935 20:52:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.935 20:52:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:17.935 20:52:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.935 20:52:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:17.935 20:52:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.935 20:52:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:18.193 20:52:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.193 20:52:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:18.763 20:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:18.763 20:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:18.763 20:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:18.763 20:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:18.763 20:52:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # 
sleep 2 00:11:20.662 20:52:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:20.662 20:52:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:20.662 20:52:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:20.662 20:52:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:20.662 20:52:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:20.662 20:52:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:20.662 20:52:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:20.920 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.920 20:52:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:20.920 20:52:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:20.920 20:52:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:20.920 20:52:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:20.920 20:52:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:20.920 20:52:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:20.920 20:52:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:20.920 20:52:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:20.920 20:52:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.920 20:52:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:20.920 20:52:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.920 20:52:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:20.920 20:52:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.920 20:52:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:20.920 20:52:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.920 20:52:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:20.920 20:52:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:20.920 20:52:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.920 20:52:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:20.920 20:52:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.920 20:52:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:20.920 20:52:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.920 20:52:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:20.920 [2024-11-26 20:52:11.681652] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:20.920 20:52:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.920 20:52:11 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:20.920 20:52:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.920 20:52:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:20.920 20:52:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.920 20:52:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:20.920 20:52:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.920 20:52:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:20.920 20:52:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.920 20:52:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:21.485 20:52:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:21.485 20:52:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:21.485 20:52:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:21.485 20:52:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:21.485 20:52:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:24.011 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:24.011 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l 
-o NAME,SERIAL 00:11:24.011 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:24.012 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:24.012 [2024-11-26 20:52:14.550858] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:24.012 [2024-11-26 20:52:14.598915] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:24.012 
20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:24.012 [2024-11-26 20:52:14.647107] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:24.012 
20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.012 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:24.012 [2024-11-26 20:52:14.695275] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:24.013 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.013 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:24.013 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.013 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:24.013 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.013 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:24.013 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.013 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:24.013 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.013 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:24.013 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.013 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:24.013 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.013 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:24.013 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.013 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:24.013 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.013 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:24.013 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:24.013 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.013 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:24.013 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.013 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:24.013 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.013 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:24.013 [2024-11-26 
20:52:14.743421] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:24.013 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.013 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:24.013 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.013 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:24.013 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.013 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:24.013 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.013 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:24.013 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.013 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:24.013 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.013 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:24.013 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.013 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:24.013 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.013 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:24.013 
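The five iterations traced above (`rpc.sh@99 -- # for i in $(seq 1 $loops)` through `rpc.sh@107`) follow one fixed sequence of RPCs per pass. A hedged sketch of that loop is below; `rpc_cmd` is a STUB that only echoes its arguments so the sketch runs standalone, whereas the real harness forwards them to SPDK's `scripts/rpc.py` against a running `nvmf_tgt`.

```shell
#!/usr/bin/env bash
# STUB: echo instead of invoking scripts/rpc.py against a live target.
rpc_cmd() { echo "rpc.py $*"; }

NQN=nqn.2016-06.io.spdk:cnode1
loops=5

# Per the trace: create the subsystem, expose a TCP listener, attach the
# Malloc1 bdev as a namespace, open it to any host, then tear it all down.
for i in $(seq 1 "$loops"); do
    rpc_cmd nvmf_create_subsystem "$NQN" -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_ns "$NQN" Malloc1
    rpc_cmd nvmf_subsystem_allow_any_host "$NQN"
    rpc_cmd nvmf_subsystem_remove_ns "$NQN" 1
    rpc_cmd nvmf_delete_subsystem "$NQN"
done
```

Cycling create/delete in a tight loop like this exercises subsystem lifecycle races; the listener notice (`tcp.c:…: nvmf_tcp_listen`) shows up once per iteration in the log for exactly this reason.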
20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.013 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:11:24.013 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.013 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:24.013 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.013 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:11:24.013 "tick_rate": 2700000000, 00:11:24.013 "poll_groups": [ 00:11:24.013 { 00:11:24.013 "name": "nvmf_tgt_poll_group_000", 00:11:24.013 "admin_qpairs": 2, 00:11:24.013 "io_qpairs": 84, 00:11:24.013 "current_admin_qpairs": 0, 00:11:24.013 "current_io_qpairs": 0, 00:11:24.013 "pending_bdev_io": 0, 00:11:24.013 "completed_nvme_io": 87, 00:11:24.013 "transports": [ 00:11:24.013 { 00:11:24.013 "trtype": "TCP" 00:11:24.013 } 00:11:24.013 ] 00:11:24.013 }, 00:11:24.013 { 00:11:24.013 "name": "nvmf_tgt_poll_group_001", 00:11:24.013 "admin_qpairs": 2, 00:11:24.013 "io_qpairs": 84, 00:11:24.013 "current_admin_qpairs": 0, 00:11:24.013 "current_io_qpairs": 0, 00:11:24.013 "pending_bdev_io": 0, 00:11:24.013 "completed_nvme_io": 183, 00:11:24.013 "transports": [ 00:11:24.013 { 00:11:24.013 "trtype": "TCP" 00:11:24.013 } 00:11:24.013 ] 00:11:24.013 }, 00:11:24.013 { 00:11:24.013 "name": "nvmf_tgt_poll_group_002", 00:11:24.013 "admin_qpairs": 1, 00:11:24.013 "io_qpairs": 84, 00:11:24.013 "current_admin_qpairs": 0, 00:11:24.013 "current_io_qpairs": 0, 00:11:24.013 "pending_bdev_io": 0, 00:11:24.013 "completed_nvme_io": 233, 00:11:24.013 "transports": [ 00:11:24.013 { 00:11:24.013 "trtype": "TCP" 00:11:24.013 } 00:11:24.013 ] 00:11:24.013 }, 00:11:24.013 { 00:11:24.013 "name": "nvmf_tgt_poll_group_003", 00:11:24.013 "admin_qpairs": 2, 00:11:24.013 "io_qpairs": 84, 
00:11:24.013 "current_admin_qpairs": 0, 00:11:24.013 "current_io_qpairs": 0, 00:11:24.013 "pending_bdev_io": 0, 00:11:24.013 "completed_nvme_io": 183, 00:11:24.013 "transports": [ 00:11:24.013 { 00:11:24.013 "trtype": "TCP" 00:11:24.013 } 00:11:24.013 ] 00:11:24.013 } 00:11:24.013 ] 00:11:24.013 }' 00:11:24.013 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:11:24.013 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:24.013 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:24.013 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:24.013 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:11:24.013 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:11:24.013 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:24.013 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:24.013 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:24.013 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:11:24.013 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:11:24.013 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:11:24.013 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:11:24.013 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:24.013 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:11:24.013 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:24.013 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:11:24.013 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:24.013 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:24.013 rmmod nvme_tcp 00:11:24.013 rmmod nvme_fabrics 00:11:24.013 rmmod nvme_keyring 00:11:24.013 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:24.013 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:11:24.013 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:11:24.013 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 3926263 ']' 00:11:24.013 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 3926263 00:11:24.013 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 3926263 ']' 00:11:24.013 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 3926263 00:11:24.013 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:11:24.013 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:24.013 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3926263 00:11:24.271 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:24.271 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:24.271 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3926263' 00:11:24.271 killing process with pid 3926263 00:11:24.271 20:52:14 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 3926263 00:11:24.271 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 3926263 00:11:24.531 20:52:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:24.531 20:52:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:24.531 20:52:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:24.531 20:52:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:11:24.531 20:52:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:11:24.531 20:52:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:24.531 20:52:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:11:24.531 20:52:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:24.531 20:52:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:24.531 20:52:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:24.531 20:52:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:24.531 20:52:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:26.434 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:26.434 00:11:26.434 real 0m25.688s 00:11:26.434 user 1m23.076s 00:11:26.434 sys 0m4.439s 00:11:26.434 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:26.434 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.434 ************************************ 00:11:26.434 END TEST 
nvmf_rpc 00:11:26.434 ************************************ 00:11:26.434 20:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:26.434 20:52:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:26.434 20:52:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:26.434 20:52:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:26.434 ************************************ 00:11:26.434 START TEST nvmf_invalid 00:11:26.434 ************************************ 00:11:26.434 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:26.694 * Looking for test storage... 00:11:26.694 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:26.694 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:26.694 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:11:26.694 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:26.694 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:26.694 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:26.694 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:26.694 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:26.694 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:11:26.694 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@336 -- # read -ra ver1 00:11:26.694 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:11:26.694 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:11:26.694 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:11:26.694 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:11:26.694 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:11:26.694 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:26.694 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:11:26.694 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:11:26.694 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:26.694 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:26.694 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:11:26.694 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:11:26.694 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:26.694 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:11:26.694 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:11:26.694 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:11:26.694 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:11:26.694 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:26.694 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:11:26.694 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:11:26.694 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:26.694 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:26.694 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:11:26.694 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:26.694 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:26.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.694 --rc genhtml_branch_coverage=1 00:11:26.694 --rc genhtml_function_coverage=1 00:11:26.694 --rc genhtml_legend=1 00:11:26.694 --rc geninfo_all_blocks=1 00:11:26.694 --rc geninfo_unexecuted_blocks=1 00:11:26.694 00:11:26.694 ' 
00:11:26.694 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:26.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.694 --rc genhtml_branch_coverage=1 00:11:26.694 --rc genhtml_function_coverage=1 00:11:26.694 --rc genhtml_legend=1 00:11:26.694 --rc geninfo_all_blocks=1 00:11:26.694 --rc geninfo_unexecuted_blocks=1 00:11:26.694 00:11:26.694 ' 00:11:26.694 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:26.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.694 --rc genhtml_branch_coverage=1 00:11:26.694 --rc genhtml_function_coverage=1 00:11:26.694 --rc genhtml_legend=1 00:11:26.694 --rc geninfo_all_blocks=1 00:11:26.694 --rc geninfo_unexecuted_blocks=1 00:11:26.694 00:11:26.694 ' 00:11:26.694 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:26.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.694 --rc genhtml_branch_coverage=1 00:11:26.694 --rc genhtml_function_coverage=1 00:11:26.694 --rc genhtml_legend=1 00:11:26.694 --rc geninfo_all_blocks=1 00:11:26.694 --rc geninfo_unexecuted_blocks=1 00:11:26.694 00:11:26.694 ' 00:11:26.694 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:26.694 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:11:26.694 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:26.694 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:26.694 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:26.694 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:26.694 20:52:17 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:26.694 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:26.694 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:26.694 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:26.694 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:26.694 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:26.694 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:26.694 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:26.694 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:26.694 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:26.694 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:26.694 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:26.694 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:26.694 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:11:26.694 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:26.694 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:26.694 
20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:26.695 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.695 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.695 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.695 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:11:26.695 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.695 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:11:26.695 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:26.695 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:26.695 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:26.695 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:26.695 20:52:17 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:26.695 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:26.695 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:26.695 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:26.695 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:26.695 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:26.695 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:26.695 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:26.695 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:11:26.695 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:11:26.695 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:11:26.695 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:11:26.695 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:26.695 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:26.695 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:26.695 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:26.695 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:26.695 20:52:17 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:26.695 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:26.695 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:26.695 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:26.695 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:26.695 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:11:26.695 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:28.596 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:28.596 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:11:28.596 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:28.596 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:28.596 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:28.596 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:28.596 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:28.596 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:11:28.596 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:28.596 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:11:28.596 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:11:28.596 20:52:19 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:11:28.596 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:11:28.596 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:11:28.596 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:11:28.596 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:28.596 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:28.596 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:28.596 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:28.596 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:28.596 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:28.596 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:28.597 20:52:19 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:28.597 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:28.597 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:28.597 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:28.597 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:28.597 20:52:19 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:28.597 20:52:19 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:28.597 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:28.597 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:11:28.597 00:11:28.597 --- 10.0.0.2 ping statistics --- 00:11:28.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.597 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:28.597 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:28.597 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:11:28.597 00:11:28.597 --- 10.0.0.1 ping statistics --- 00:11:28.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.597 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:28.597 20:52:19 
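The `nvmf_tcp_init` steps traced above (flush the two interfaces, move the target NIC into a fresh namespace, assign 10.0.0.1/10.0.0.2, open TCP port 4420 via the `ipts` wrapper, then ping-verify both directions) can be sketched as a standalone script. The sketch below is a dry-run: `run()` only prints each command so it is safe to execute without root or the rig's physical `cvl_*` NICs; drop the `printf` to apply it for real. The `run` helper is ours, not part of `nvmf/common.sh`.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace bring-up traced in nvmf/common.sh above.
run() { printf '+ %s\n' "$*"; }   # print instead of executing (assumption: dry-run only)

NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0      # interface handed to the target namespace
INI_IF=cvl_0_1      # initiator-side interface that stays in the root namespace

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
# open the default NVMe/TCP port, mirroring the ipts wrapper at common.sh@287
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
# verify both directions, as the log does with ping -c 1
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

Once this succeeds, every later `nvmf_tgt` invocation in the log is prefixed with `ip netns exec cvl_0_0_ns_spdk`, which is why the target listens on 10.0.0.2 while rpc.py talks to it over the local UNIX socket.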
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:28.597 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:28.856 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:11:28.856 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:28.856 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:28.856 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:28.856 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=3930783 00:11:28.856 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:28.856 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 3930783 00:11:28.856 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 3930783 ']' 00:11:28.856 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:28.856 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:28.856 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:28.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:28.856 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:28.856 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:28.856 [2024-11-26 20:52:19.598538] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:11:28.856 [2024-11-26 20:52:19.598641] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:28.856 [2024-11-26 20:52:19.683635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:28.856 [2024-11-26 20:52:19.751279] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:28.856 [2024-11-26 20:52:19.751350] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:28.856 [2024-11-26 20:52:19.751366] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:28.856 [2024-11-26 20:52:19.751389] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:28.856 [2024-11-26 20:52:19.751401] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:28.856 [2024-11-26 20:52:19.753193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:28.856 [2024-11-26 20:52:19.753249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:28.856 [2024-11-26 20:52:19.753302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:28.856 [2024-11-26 20:52:19.753306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:29.114 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:29.114 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:11:29.114 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:29.114 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:29.114 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:29.114 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:29.114 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:29.114 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode15482 00:11:29.374 [2024-11-26 20:52:20.209950] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:11:29.374 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:11:29.374 { 00:11:29.374 "nqn": "nqn.2016-06.io.spdk:cnode15482", 00:11:29.374 "tgt_name": "foobar", 00:11:29.374 "method": "nvmf_create_subsystem", 00:11:29.374 "req_id": 1 00:11:29.374 } 00:11:29.374 Got JSON-RPC error 
response 00:11:29.374 response: 00:11:29.374 { 00:11:29.374 "code": -32603, 00:11:29.374 "message": "Unable to find target foobar" 00:11:29.374 }' 00:11:29.374 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:11:29.374 { 00:11:29.375 "nqn": "nqn.2016-06.io.spdk:cnode15482", 00:11:29.375 "tgt_name": "foobar", 00:11:29.375 "method": "nvmf_create_subsystem", 00:11:29.375 "req_id": 1 00:11:29.375 } 00:11:29.375 Got JSON-RPC error response 00:11:29.375 response: 00:11:29.375 { 00:11:29.375 "code": -32603, 00:11:29.375 "message": "Unable to find target foobar" 00:11:29.375 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:11:29.375 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:11:29.375 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode29270 00:11:29.633 [2024-11-26 20:52:20.519013] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29270: invalid serial number 'SPDKISFASTANDAWESOME' 00:11:29.633 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:11:29.633 { 00:11:29.633 "nqn": "nqn.2016-06.io.spdk:cnode29270", 00:11:29.633 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:11:29.633 "method": "nvmf_create_subsystem", 00:11:29.633 "req_id": 1 00:11:29.633 } 00:11:29.633 Got JSON-RPC error response 00:11:29.633 response: 00:11:29.633 { 00:11:29.633 "code": -32602, 00:11:29.633 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:11:29.633 }' 00:11:29.633 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:11:29.633 { 00:11:29.633 "nqn": "nqn.2016-06.io.spdk:cnode29270", 00:11:29.633 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:11:29.633 "method": "nvmf_create_subsystem", 
00:11:29.633 "req_id": 1 00:11:29.633 } 00:11:29.633 Got JSON-RPC error response 00:11:29.633 response: 00:11:29.633 { 00:11:29.633 "code": -32602, 00:11:29.633 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:11:29.633 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:29.633 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:11:29.633 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode31769 00:11:29.891 [2024-11-26 20:52:20.791906] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31769: invalid model number 'SPDK_Controller' 00:11:29.891 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:11:29.891 { 00:11:29.891 "nqn": "nqn.2016-06.io.spdk:cnode31769", 00:11:29.891 "model_number": "SPDK_Controller\u001f", 00:11:29.891 "method": "nvmf_create_subsystem", 00:11:29.891 "req_id": 1 00:11:29.891 } 00:11:29.891 Got JSON-RPC error response 00:11:29.891 response: 00:11:29.891 { 00:11:29.891 "code": -32602, 00:11:29.891 "message": "Invalid MN SPDK_Controller\u001f" 00:11:29.891 }' 00:11:29.891 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:11:29.891 { 00:11:29.891 "nqn": "nqn.2016-06.io.spdk:cnode31769", 00:11:29.891 "model_number": "SPDK_Controller\u001f", 00:11:29.891 "method": "nvmf_create_subsystem", 00:11:29.891 "req_id": 1 00:11:29.891 } 00:11:29.891 Got JSON-RPC error response 00:11:29.891 response: 00:11:29.891 { 00:11:29.891 "code": -32602, 00:11:29.891 "message": "Invalid MN SPDK_Controller\u001f" 00:11:29.891 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:29.891 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:11:29.891 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local 
length=21 ll 00:11:29.891 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:29.891 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:11:29.891 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:11:29.891 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:29.891 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:29.891 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:11:29.891 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:11:29.891 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:11:29.891 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:29.891 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:29.891 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:11:29.891 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:11:29.891 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:11:29.891 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:29.891 
20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:29.891 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:11:29.891 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:11:30.149 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:11:30.149 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:30.149 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:30.149 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:11:30.149 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:11:30.149 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:11:30.149 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:30.149 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:11:30.150 20:52:20 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 
00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:11:30.150 
20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ ^ == \- ]] 00:11:30.150 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '^-Qo\anS6E,g/lKlZiPjbRy' 00:11:30.669 20:52:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '\mmn-v~N^pf3k"|]2p>-Qo\anS6E,g/lKlZiPjbRy' nqn.2016-06.io.spdk:cnode7092 00:11:30.927 [2024-11-26 20:52:21.654762] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7092: invalid model number '\mmn-v~N^pf3k"|]2p>-Qo\anS6E,g/lKlZiPjbRy' 00:11:30.927 20:52:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:11:30.927 { 00:11:30.927 "nqn": "nqn.2016-06.io.spdk:cnode7092", 00:11:30.927 "model_number": "\\mmn-v~N^pf3k\"|]2p>-Qo\\anS6E,g/lKlZiPjbRy", 00:11:30.927 "method": "nvmf_create_subsystem", 00:11:30.927 "req_id": 1 00:11:30.927 } 00:11:30.927 Got JSON-RPC error response 00:11:30.927 response: 00:11:30.927 { 00:11:30.927 "code": -32602, 00:11:30.927 "message": "Invalid MN \\mmn-v~N^pf3k\"|]2p>-Qo\\anS6E,g/lKlZiPjbRy" 00:11:30.927 }' 00:11:30.927 20:52:21 
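The long `gen_random_s` trace above builds a model-number candidate one character at a time: pick a code from the `chars` array, `printf %x` it, `echo -e '\xHH'` it back into a character, and append with `string+=`. A condensed sketch of that helper follows; note the traced array spans codes 32–127, but this sketch restricts to 33–126 so command substitution does not swallow a trailing space or DEL, which is an assumption on our part, not a behavior of `invalid.sh`.

```shell
#!/usr/bin/env bash
# Sketch of the gen_random_s helper traced above: build a $1-character
# string from printable ASCII, one hex-escaped character per iteration.
gen_random_s() {
    local length=$1 ll string=
    for (( ll = 0; ll < length; ll++ )); do
        # printf %x + echo -e mirrors the traced per-character conversion;
        # range 33..126 (assumption) avoids space/DEL edge cases
        string+=$(echo -e "\\x$(printf %x $(( RANDOM % 94 + 33 )))")
    done
    echo "$string"
}

# e.g. a 21-character candidate, like the one fed to nvmf_create_subsystem -d
printf 'random MN candidate: %s\n' "$(gen_random_s 21)"
```

The point of the helper in this test is simply to produce serial/model numbers containing characters the target must reject, which is why the trace immediately passes the result to `nvmf_create_subsystem` and greps for `Invalid MN`.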
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:11:30.927 { 00:11:30.927 "nqn": "nqn.2016-06.io.spdk:cnode7092", 00:11:30.927 "model_number": "\\mmn-v~N^pf3k\"|]2p>-Qo\\anS6E,g/lKlZiPjbRy", 00:11:30.927 "method": "nvmf_create_subsystem", 00:11:30.927 "req_id": 1 00:11:30.927 } 00:11:30.927 Got JSON-RPC error response 00:11:30.927 response: 00:11:30.927 { 00:11:30.927 "code": -32602, 00:11:30.927 "message": "Invalid MN \\mmn-v~N^pf3k\"|]2p>-Qo\\anS6E,g/lKlZiPjbRy" 00:11:30.927 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:30.927 20:52:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:11:31.184 [2024-11-26 20:52:21.923728] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:31.184 20:52:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:11:31.442 20:52:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:11:31.442 20:52:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:11:31.442 20:52:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:11:31.442 20:52:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:11:31.442 20:52:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:11:31.700 [2024-11-26 20:52:22.477500] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:11:31.700 20:52:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:11:31.700 { 00:11:31.700 "nqn": "nqn.2016-06.io.spdk:cnode", 
00:11:31.700 "listen_address": { 00:11:31.700 "trtype": "tcp", 00:11:31.700 "traddr": "", 00:11:31.700 "trsvcid": "4421" 00:11:31.700 }, 00:11:31.700 "method": "nvmf_subsystem_remove_listener", 00:11:31.700 "req_id": 1 00:11:31.700 } 00:11:31.700 Got JSON-RPC error response 00:11:31.700 response: 00:11:31.700 { 00:11:31.700 "code": -32602, 00:11:31.700 "message": "Invalid parameters" 00:11:31.700 }' 00:11:31.700 20:52:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:11:31.700 { 00:11:31.700 "nqn": "nqn.2016-06.io.spdk:cnode", 00:11:31.700 "listen_address": { 00:11:31.700 "trtype": "tcp", 00:11:31.700 "traddr": "", 00:11:31.700 "trsvcid": "4421" 00:11:31.700 }, 00:11:31.700 "method": "nvmf_subsystem_remove_listener", 00:11:31.700 "req_id": 1 00:11:31.700 } 00:11:31.700 Got JSON-RPC error response 00:11:31.700 response: 00:11:31.700 { 00:11:31.700 "code": -32602, 00:11:31.700 "message": "Invalid parameters" 00:11:31.700 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:11:31.700 20:52:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7332 -i 0 00:11:31.957 [2024-11-26 20:52:22.754344] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7332: invalid cntlid range [0-65519] 00:11:31.957 20:52:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:11:31.957 { 00:11:31.957 "nqn": "nqn.2016-06.io.spdk:cnode7332", 00:11:31.957 "min_cntlid": 0, 00:11:31.957 "method": "nvmf_create_subsystem", 00:11:31.957 "req_id": 1 00:11:31.957 } 00:11:31.957 Got JSON-RPC error response 00:11:31.957 response: 00:11:31.957 { 00:11:31.957 "code": -32602, 00:11:31.957 "message": "Invalid cntlid range [0-65519]" 00:11:31.957 }' 00:11:31.957 20:52:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:11:31.957 { 
00:11:31.957 "nqn": "nqn.2016-06.io.spdk:cnode7332", 00:11:31.957 "min_cntlid": 0, 00:11:31.957 "method": "nvmf_create_subsystem", 00:11:31.957 "req_id": 1 00:11:31.957 } 00:11:31.957 Got JSON-RPC error response 00:11:31.957 response: 00:11:31.957 { 00:11:31.957 "code": -32602, 00:11:31.957 "message": "Invalid cntlid range [0-65519]" 00:11:31.957 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:31.957 20:52:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13348 -i 65520 00:11:32.215 [2024-11-26 20:52:23.027315] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13348: invalid cntlid range [65520-65519] 00:11:32.215 20:52:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:11:32.215 { 00:11:32.215 "nqn": "nqn.2016-06.io.spdk:cnode13348", 00:11:32.215 "min_cntlid": 65520, 00:11:32.215 "method": "nvmf_create_subsystem", 00:11:32.215 "req_id": 1 00:11:32.215 } 00:11:32.215 Got JSON-RPC error response 00:11:32.215 response: 00:11:32.215 { 00:11:32.215 "code": -32602, 00:11:32.215 "message": "Invalid cntlid range [65520-65519]" 00:11:32.215 }' 00:11:32.215 20:52:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:11:32.215 { 00:11:32.215 "nqn": "nqn.2016-06.io.spdk:cnode13348", 00:11:32.215 "min_cntlid": 65520, 00:11:32.215 "method": "nvmf_create_subsystem", 00:11:32.215 "req_id": 1 00:11:32.215 } 00:11:32.215 Got JSON-RPC error response 00:11:32.215 response: 00:11:32.215 { 00:11:32.215 "code": -32602, 00:11:32.215 "message": "Invalid cntlid range [65520-65519]" 00:11:32.215 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:32.215 20:52:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode914 -I 0 00:11:32.473 [2024-11-26 20:52:23.284146] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode914: invalid cntlid range [1-0] 00:11:32.473 20:52:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:11:32.473 { 00:11:32.473 "nqn": "nqn.2016-06.io.spdk:cnode914", 00:11:32.473 "max_cntlid": 0, 00:11:32.473 "method": "nvmf_create_subsystem", 00:11:32.473 "req_id": 1 00:11:32.473 } 00:11:32.473 Got JSON-RPC error response 00:11:32.473 response: 00:11:32.473 { 00:11:32.473 "code": -32602, 00:11:32.473 "message": "Invalid cntlid range [1-0]" 00:11:32.473 }' 00:11:32.473 20:52:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:11:32.473 { 00:11:32.473 "nqn": "nqn.2016-06.io.spdk:cnode914", 00:11:32.473 "max_cntlid": 0, 00:11:32.473 "method": "nvmf_create_subsystem", 00:11:32.473 "req_id": 1 00:11:32.473 } 00:11:32.473 Got JSON-RPC error response 00:11:32.473 response: 00:11:32.473 { 00:11:32.473 "code": -32602, 00:11:32.473 "message": "Invalid cntlid range [1-0]" 00:11:32.473 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:32.473 20:52:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23299 -I 65520 00:11:32.731 [2024-11-26 20:52:23.569097] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23299: invalid cntlid range [1-65520] 00:11:32.731 20:52:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:11:32.731 { 00:11:32.731 "nqn": "nqn.2016-06.io.spdk:cnode23299", 00:11:32.731 "max_cntlid": 65520, 00:11:32.731 "method": "nvmf_create_subsystem", 00:11:32.731 "req_id": 1 00:11:32.731 } 00:11:32.731 Got JSON-RPC error response 00:11:32.731 response: 00:11:32.731 { 00:11:32.731 "code": -32602, 00:11:32.731 "message": "Invalid 
cntlid range [1-65520]" 00:11:32.731 }' 00:11:32.731 20:52:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:11:32.731 { 00:11:32.731 "nqn": "nqn.2016-06.io.spdk:cnode23299", 00:11:32.731 "max_cntlid": 65520, 00:11:32.731 "method": "nvmf_create_subsystem", 00:11:32.731 "req_id": 1 00:11:32.731 } 00:11:32.731 Got JSON-RPC error response 00:11:32.731 response: 00:11:32.731 { 00:11:32.731 "code": -32602, 00:11:32.731 "message": "Invalid cntlid range [1-65520]" 00:11:32.731 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:32.731 20:52:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5156 -i 6 -I 5 00:11:32.989 [2024-11-26 20:52:23.838031] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5156: invalid cntlid range [6-5] 00:11:32.989 20:52:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:11:32.989 { 00:11:32.989 "nqn": "nqn.2016-06.io.spdk:cnode5156", 00:11:32.989 "min_cntlid": 6, 00:11:32.989 "max_cntlid": 5, 00:11:32.989 "method": "nvmf_create_subsystem", 00:11:32.989 "req_id": 1 00:11:32.989 } 00:11:32.989 Got JSON-RPC error response 00:11:32.989 response: 00:11:32.989 { 00:11:32.989 "code": -32602, 00:11:32.989 "message": "Invalid cntlid range [6-5]" 00:11:32.989 }' 00:11:32.989 20:52:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:11:32.989 { 00:11:32.989 "nqn": "nqn.2016-06.io.spdk:cnode5156", 00:11:32.989 "min_cntlid": 6, 00:11:32.989 "max_cntlid": 5, 00:11:32.989 "method": "nvmf_create_subsystem", 00:11:32.989 "req_id": 1 00:11:32.989 } 00:11:32.989 Got JSON-RPC error response 00:11:32.989 response: 00:11:32.989 { 00:11:32.989 "code": -32602, 00:11:32.989 "message": "Invalid cntlid range [6-5]" 00:11:32.989 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:32.990 
20:52:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:11:33.247 20:52:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:11:33.247 { 00:11:33.247 "name": "foobar", 00:11:33.247 "method": "nvmf_delete_target", 00:11:33.247 "req_id": 1 00:11:33.247 } 00:11:33.247 Got JSON-RPC error response 00:11:33.247 response: 00:11:33.247 { 00:11:33.247 "code": -32602, 00:11:33.247 "message": "The specified target doesn'\''t exist, cannot delete it." 00:11:33.247 }' 00:11:33.247 20:52:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:11:33.247 { 00:11:33.247 "name": "foobar", 00:11:33.247 "method": "nvmf_delete_target", 00:11:33.247 "req_id": 1 00:11:33.247 } 00:11:33.247 Got JSON-RPC error response 00:11:33.247 response: 00:11:33.247 { 00:11:33.247 "code": -32602, 00:11:33.247 "message": "The specified target doesn't exist, cannot delete it." 
00:11:33.247 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:11:33.247 20:52:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:11:33.247 20:52:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:11:33.247 20:52:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:33.247 20:52:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:11:33.247 20:52:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:33.247 20:52:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:11:33.247 20:52:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:33.247 20:52:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:33.247 rmmod nvme_tcp 00:11:33.247 rmmod nvme_fabrics 00:11:33.247 rmmod nvme_keyring 00:11:33.247 20:52:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:33.247 20:52:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:11:33.247 20:52:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:11:33.247 20:52:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 3930783 ']' 00:11:33.247 20:52:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 3930783 00:11:33.247 20:52:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 3930783 ']' 00:11:33.247 20:52:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 3930783 00:11:33.247 20:52:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:11:33.247 20:52:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:33.247 20:52:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3930783 00:11:33.248 20:52:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:33.248 20:52:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:33.248 20:52:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3930783' 00:11:33.248 killing process with pid 3930783 00:11:33.248 20:52:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 3930783 00:11:33.248 20:52:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 3930783 00:11:33.507 20:52:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:33.507 20:52:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:33.507 20:52:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:33.507 20:52:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:11:33.507 20:52:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:11:33.507 20:52:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:33.507 20:52:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:11:33.507 20:52:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:33.507 20:52:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:33.507 20:52:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:33.507 20:52:24 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:33.507 20:52:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:35.412 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:35.412 00:11:35.412 real 0m9.005s 00:11:35.412 user 0m22.333s 00:11:35.412 sys 0m2.363s 00:11:35.412 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:35.412 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:35.412 ************************************ 00:11:35.412 END TEST nvmf_invalid 00:11:35.412 ************************************ 00:11:35.672 20:52:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:35.672 20:52:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:35.672 20:52:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:35.672 20:52:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:35.672 ************************************ 00:11:35.672 START TEST nvmf_connect_stress 00:11:35.672 ************************************ 00:11:35.672 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:35.672 * Looking for test storage... 
00:11:35.672 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:35.672 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:35.672 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:11:35.672 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:35.672 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:35.672 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:35.672 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:35.672 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:35.672 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:11:35.672 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:11:35.672 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:11:35.672 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:11:35.672 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:11:35.672 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:11:35.672 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:11:35.672 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:35.672 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:11:35.672 20:52:26 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:11:35.672 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:35.672 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:35.672 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:11:35.672 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:11:35.672 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:35.672 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:11:35.672 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:11:35.672 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:11:35.672 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:11:35.672 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:35.672 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:11:35.672 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:11:35.672 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:35.672 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:35.672 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:11:35.672 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:35.672 20:52:26 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:35.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.672 --rc genhtml_branch_coverage=1 00:11:35.672 --rc genhtml_function_coverage=1 00:11:35.672 --rc genhtml_legend=1 00:11:35.672 --rc geninfo_all_blocks=1 00:11:35.672 --rc geninfo_unexecuted_blocks=1 00:11:35.672 00:11:35.672 ' 00:11:35.672 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:35.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.673 --rc genhtml_branch_coverage=1 00:11:35.673 --rc genhtml_function_coverage=1 00:11:35.673 --rc genhtml_legend=1 00:11:35.673 --rc geninfo_all_blocks=1 00:11:35.673 --rc geninfo_unexecuted_blocks=1 00:11:35.673 00:11:35.673 ' 00:11:35.673 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:35.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.673 --rc genhtml_branch_coverage=1 00:11:35.673 --rc genhtml_function_coverage=1 00:11:35.673 --rc genhtml_legend=1 00:11:35.673 --rc geninfo_all_blocks=1 00:11:35.673 --rc geninfo_unexecuted_blocks=1 00:11:35.673 00:11:35.673 ' 00:11:35.673 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:35.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.673 --rc genhtml_branch_coverage=1 00:11:35.673 --rc genhtml_function_coverage=1 00:11:35.673 --rc genhtml_legend=1 00:11:35.673 --rc geninfo_all_blocks=1 00:11:35.673 --rc geninfo_unexecuted_blocks=1 00:11:35.673 00:11:35.673 ' 00:11:35.673 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:35.673 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 
00:11:35.673 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:35.673 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:35.673 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:35.673 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:35.673 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:35.673 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:35.673 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:35.673 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:35.673 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:35.673 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:35.673 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:35.673 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:35.673 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:35.673 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:35.673 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:35.673 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:35.673 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:35.673 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:11:35.673 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:35.673 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:35.673 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:35.673 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.673 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.673 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.673 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:11:35.673 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.673 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:11:35.673 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:35.673 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:35.673 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:35.673 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:35.673 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:35.673 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:35.673 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:35.673 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:35.673 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:35.673 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:35.673 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 
00:11:35.673 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:35.673 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:35.673 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:35.673 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:35.673 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:35.673 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:35.673 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:35.673 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:35.673 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:35.673 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:35.673 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:11:35.673 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:38.205 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:38.205 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:11:38.205 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:38.205 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:38.205 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:38.205 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:38.205 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:38.205 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:11:38.205 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:38.205 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:11:38.205 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:11:38.205 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:11:38.205 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:11:38.205 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:11:38.205 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:11:38.205 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:38.205 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:38.205 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:38.205 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:38.205 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:38.205 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:38.205 20:52:28 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:38.205 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:38.205 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:38.205 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:38.205 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:38.205 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:38.205 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:38.205 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:38.205 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:38.205 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:38.205 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:38.205 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:38.205 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:38.205 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:38.205 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:38.205 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:38.205 20:52:28 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:38.205 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:38.205 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:38.205 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:38.205 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:38.205 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:38.205 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:38.205 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:38.205 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:38.205 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:38.205 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:38.205 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:38.205 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:38.205 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:38.205 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:38.205 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:38.205 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:38.205 20:52:28 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:38.205 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:38.205 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:38.205 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:38.205 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:38.206 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:38.206 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:38.206 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:38.206 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:38.206 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:38.206 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:38.206 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:38.206 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:38.206 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:38.206 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:38.206 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:38.206 Found net devices under 0000:0a:00.1: cvl_0_1 
00:11:38.206 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:38.206 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:38.206 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:11:38.206 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:38.206 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:38.206 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:38.206 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:38.206 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:38.206 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:38.206 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:38.206 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:38.206 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:38.206 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:38.206 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:38.206 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:38.206 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:38.206 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:38.206 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:38.206 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:38.206 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:38.206 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:38.206 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:38.206 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:38.206 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:38.206 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:38.206 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:38.206 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:38.206 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:38.206 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:38.206 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:38.206 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.420 ms 00:11:38.206 00:11:38.206 --- 10.0.0.2 ping statistics --- 00:11:38.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:38.206 rtt min/avg/max/mdev = 0.420/0.420/0.420/0.000 ms 00:11:38.206 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:38.206 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:38.206 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:11:38.206 00:11:38.206 --- 10.0.0.1 ping statistics --- 00:11:38.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:38.206 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:11:38.206 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:38.206 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:11:38.206 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:38.206 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:38.206 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:38.206 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:38.206 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:38.206 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:38.206 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:38.206 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:11:38.206 20:52:28 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:38.206 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:38.206 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:38.206 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=3933428 00:11:38.206 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 3933428 00:11:38.206 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:38.206 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 3933428 ']' 00:11:38.206 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:38.206 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:38.206 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:38.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:38.206 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:38.206 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:38.206 [2024-11-26 20:52:28.854162] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:11:38.206 [2024-11-26 20:52:28.854266] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:38.206 [2024-11-26 20:52:28.940787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:38.206 [2024-11-26 20:52:29.007638] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:38.206 [2024-11-26 20:52:29.007720] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:38.206 [2024-11-26 20:52:29.007738] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:38.206 [2024-11-26 20:52:29.007751] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:38.206 [2024-11-26 20:52:29.007763] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:38.206 [2024-11-26 20:52:29.009406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:38.206 [2024-11-26 20:52:29.009460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:38.206 [2024-11-26 20:52:29.009463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:38.206 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:38.206 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:11:38.206 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:38.206 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:38.206 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:38.465 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:38.465 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:38.465 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.465 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:38.465 [2024-11-26 20:52:29.166641] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:38.465 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.465 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:38.465 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 
-- # xtrace_disable 00:11:38.465 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:38.465 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.465 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:38.465 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.465 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:38.465 [2024-11-26 20:52:29.184443] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:38.465 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.465 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:38.465 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.465 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:38.465 NULL1 00:11:38.465 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.465 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3933543 00:11:38.465 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:38.465 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:11:38.465 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:38.465 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:11:38.465 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:38.465 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:38.465 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:38.465 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:38.465 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:38.465 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:38.465 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:38.465 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:38.465 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:38.465 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:38.465 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:38.465 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:38.465 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:38.465 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:11:38.465 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:38.465 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:38.465 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:38.465 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:38.465 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:38.465 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:38.465 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:38.465 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:38.465 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:38.465 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:38.465 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:38.465 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:38.465 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:38.465 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:38.465 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:38.465 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:38.465 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:38.465 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:38.465 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:38.465 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:38.465 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:38.465 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:38.465 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:38.465 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:38.465 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:38.465 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:38.465 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3933543 00:11:38.465 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:38.465 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.465 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:38.722 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.722 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3933543 00:11:38.722 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:38.722 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.722 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:38.979 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.979 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3933543 00:11:38.979 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:38.979 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.979 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:39.544 20:52:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.544 20:52:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3933543 00:11:39.544 20:52:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:39.544 20:52:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.544 20:52:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:39.802 20:52:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.802 20:52:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3933543 00:11:39.802 20:52:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:39.802 20:52:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.802 20:52:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:40.060 20:52:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.060 20:52:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3933543 00:11:40.060 20:52:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:40.060 20:52:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.060 20:52:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:40.317 20:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.317 20:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3933543 00:11:40.317 20:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:40.317 20:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.317 20:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:40.574 20:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.574 20:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3933543 00:11:40.574 20:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:40.574 20:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.574 20:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:41.139 20:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.139 20:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3933543 00:11:41.139 20:52:31 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:41.139 20:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.139 20:52:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:41.396 20:52:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.396 20:52:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3933543 00:11:41.396 20:52:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:41.396 20:52:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.396 20:52:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:41.654 20:52:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.654 20:52:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3933543 00:11:41.654 20:52:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:41.654 20:52:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.654 20:52:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:41.912 20:52:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.912 20:52:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3933543 00:11:41.912 20:52:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:41.912 20:52:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.912 
20:52:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:42.169 20:52:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.169 20:52:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3933543 00:11:42.170 20:52:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:42.170 20:52:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.170 20:52:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:42.735 20:52:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.735 20:52:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3933543 00:11:42.735 20:52:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:42.735 20:52:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.735 20:52:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:42.992 20:52:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.992 20:52:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3933543 00:11:42.992 20:52:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:42.992 20:52:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.992 20:52:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:43.250 20:52:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.250 
20:52:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3933543 00:11:43.250 20:52:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:43.250 20:52:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.250 20:52:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:43.544 20:52:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.544 20:52:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3933543 00:11:43.544 20:52:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:43.544 20:52:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.544 20:52:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:43.824 20:52:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.824 20:52:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3933543 00:11:43.824 20:52:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:43.824 20:52:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.824 20:52:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:44.388 20:52:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.388 20:52:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3933543 00:11:44.388 20:52:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
00:11:44.388 20:52:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.388 20:52:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:44.644 20:52:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.644 20:52:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3933543 00:11:44.644 20:52:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:44.644 20:52:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.644 20:52:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:44.900 20:52:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.900 20:52:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3933543 00:11:44.900 20:52:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:44.900 20:52:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.900 20:52:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:45.156 20:52:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.156 20:52:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3933543 00:11:45.156 20:52:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:45.156 20:52:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.156 20:52:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set 
+x 00:11:45.413 20:52:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.413 20:52:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3933543 00:11:45.413 20:52:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:45.413 20:52:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.413 20:52:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:45.977 20:52:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.977 20:52:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3933543 00:11:45.977 20:52:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:45.977 20:52:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.977 20:52:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:46.234 20:52:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.234 20:52:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3933543 00:11:46.234 20:52:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:46.234 20:52:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.234 20:52:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:46.490 20:52:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.490 20:52:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill 
-0 3933543 00:11:46.490 20:52:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:46.490 20:52:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.490 20:52:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:46.748 20:52:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.748 20:52:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3933543 00:11:46.748 20:52:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:46.748 20:52:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.748 20:52:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:47.006 20:52:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.006 20:52:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3933543 00:11:47.006 20:52:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:47.006 20:52:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.006 20:52:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:47.571 20:52:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.571 20:52:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3933543 00:11:47.571 20:52:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:47.571 20:52:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:47.571 20:52:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:47.830 20:52:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.830 20:52:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3933543 00:11:47.830 20:52:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:47.830 20:52:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.830 20:52:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:48.087 20:52:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.087 20:52:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3933543 00:11:48.087 20:52:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:48.087 20:52:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.087 20:52:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:48.345 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.345 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3933543 00:11:48.345 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:48.345 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.345 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:48.603 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 
00:11:48.603 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.603 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3933543 00:11:48.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3933543) - No such process 00:11:48.603 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3933543 00:11:48.603 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:48.603 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:48.603 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:11:48.603 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:48.603 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:11:48.603 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:48.603 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:11:48.603 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:48.603 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:48.603 rmmod nvme_tcp 00:11:48.861 rmmod nvme_fabrics 00:11:48.861 rmmod nvme_keyring 00:11:48.861 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:48.861 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:11:48.861 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@129 -- # return 0 00:11:48.861 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 3933428 ']' 00:11:48.861 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 3933428 00:11:48.861 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 3933428 ']' 00:11:48.861 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 3933428 00:11:48.861 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:11:48.861 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:48.861 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3933428 00:11:48.861 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:48.861 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:48.861 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3933428' 00:11:48.861 killing process with pid 3933428 00:11:48.862 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 3933428 00:11:48.862 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 3933428 00:11:49.120 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:49.120 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:49.120 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:49.120 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@297 -- # iptr 00:11:49.120 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:11:49.120 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:49.120 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:11:49.120 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:49.120 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:49.120 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:49.120 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:49.120 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:51.027 20:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:51.027 00:11:51.027 real 0m15.525s 00:11:51.027 user 0m38.983s 00:11:51.027 sys 0m5.732s 00:11:51.027 20:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:51.027 20:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:51.027 ************************************ 00:11:51.027 END TEST nvmf_connect_stress 00:11:51.027 ************************************ 00:11:51.027 20:52:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:51.027 20:52:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:51.027 20:52:41 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:11:51.027 20:52:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:51.285 ************************************ 00:11:51.285 START TEST nvmf_fused_ordering 00:11:51.285 ************************************ 00:11:51.285 20:52:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:51.285 * Looking for test storage... 00:11:51.286 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:51.286 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:51.286 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:11:51.286 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:51.286 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:51.286 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:51.286 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:51.286 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:51.286 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:11:51.286 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:11:51.286 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:11:51.286 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:11:51.286 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- scripts/common.sh@338 -- # local 'op=<' 00:11:51.286 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:11:51.286 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:11:51.286 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:51.286 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:11:51.286 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:11:51.286 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:51.286 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:51.286 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:11:51.286 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:11:51.286 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:51.286 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:11:51.286 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:11:51.286 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:11:51.286 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:11:51.286 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:51.286 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:11:51.286 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:11:51.286 20:52:42 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:51.286 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:51.286 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:11:51.286 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:51.286 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:51.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.286 --rc genhtml_branch_coverage=1 00:11:51.286 --rc genhtml_function_coverage=1 00:11:51.286 --rc genhtml_legend=1 00:11:51.286 --rc geninfo_all_blocks=1 00:11:51.286 --rc geninfo_unexecuted_blocks=1 00:11:51.286 00:11:51.286 ' 00:11:51.286 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:51.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.286 --rc genhtml_branch_coverage=1 00:11:51.286 --rc genhtml_function_coverage=1 00:11:51.286 --rc genhtml_legend=1 00:11:51.286 --rc geninfo_all_blocks=1 00:11:51.286 --rc geninfo_unexecuted_blocks=1 00:11:51.286 00:11:51.286 ' 00:11:51.286 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:51.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.286 --rc genhtml_branch_coverage=1 00:11:51.286 --rc genhtml_function_coverage=1 00:11:51.286 --rc genhtml_legend=1 00:11:51.286 --rc geninfo_all_blocks=1 00:11:51.286 --rc geninfo_unexecuted_blocks=1 00:11:51.286 00:11:51.286 ' 00:11:51.286 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:51.286 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:11:51.286 --rc genhtml_branch_coverage=1 00:11:51.286 --rc genhtml_function_coverage=1 00:11:51.286 --rc genhtml_legend=1 00:11:51.286 --rc geninfo_all_blocks=1 00:11:51.286 --rc geninfo_unexecuted_blocks=1 00:11:51.286 00:11:51.286 ' 00:11:51.286 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:51.286 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:11:51.286 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:51.286 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:51.286 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:51.286 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:51.286 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:51.286 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:51.286 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:51.286 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:51.286 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:51.286 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:51.286 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:51.286 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # 
NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:51.286 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:51.286 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:51.286 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:51.286 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:51.286 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:51.286 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:11:51.286 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:51.286 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:51.286 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:51.286 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.286 20:52:42 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.286 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.286 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:11:51.286 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.287 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:11:51.287 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:51.287 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:51.287 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:51.287 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:51.287 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:51.287 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:51.287 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:51.287 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:51.287 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:51.287 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:51.287 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:11:51.287 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:51.287 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:51.287 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:51.287 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:51.287 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:51.287 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:51.287 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:51.287 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:51.287 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:51.287 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:51.287 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:11:51.287 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:53.820 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:53.820 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:11:53.820 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:53.820 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:53.820 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:53.820 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:53.820 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:53.820 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:11:53.820 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:53.820 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:11:53.820 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:11:53.820 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:11:53.820 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:11:53.820 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:11:53.820 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:11:53.820 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:53.820 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:53.820 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:53.820 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:53.820 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:53.820 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:53.820 20:52:44 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:53.820 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:53.820 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:53.821 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:53.821 20:52:44 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:53.821 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:53.821 20:52:44 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:53.821 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:53.821 Found net devices under 0000:0a:00.1: cvl_0_1 
00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes
00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:11:53.821 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:11:53.821 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms
00:11:53.821
00:11:53.821 --- 10.0.0.2 ping statistics ---
00:11:53.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:53.821 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms
00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:11:53.821 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:53.821 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms
00:11:53.821
00:11:53.821 --- 10.0.0.1 ping statistics ---
00:11:53.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:53.821 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms
00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0
00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2
00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable
00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=3936729
00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 3936729
00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 3936729 ']'
00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:53.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:53.821 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:11:53.821 [2024-11-26 20:52:44.346250] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization...
00:11:53.822 [2024-11-26 20:52:44.346332] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:53.822 [2024-11-26 20:52:44.425109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:53.822 [2024-11-26 20:52:44.488552] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:11:53.822 [2024-11-26 20:52:44.488608] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:11:53.822 [2024-11-26 20:52:44.488625] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:11:53.822 [2024-11-26 20:52:44.488638] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:11:53.822 [2024-11-26 20:52:44.488650] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
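The nvmf_tcp_init sequence traced above (nvmf/common.sh@271 through @291) builds a two-ended test topology: the target NIC is moved into a network namespace while the initiator NIC stays in the root namespace. A minimal sketch follows, not the SPDK script itself; the interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are taken from this log, and by default it only prints the commands (leave RUN unset), since executing them requires root.

```shell
#!/usr/bin/env bash
# Sketch of the netns topology nvmf_tcp_init sets up in this log (not the SPDK
# script itself). Target NIC cvl_0_0 moves into a namespace; initiator NIC
# cvl_0_1 stays in the root namespace. With RUN unset, $run is "echo" and the
# function is a dry run that only prints each command.
nvmf_tcp_init_sketch() {
    local run=${RUN:-echo} ns=cvl_0_0_ns_spdk
    $run ip netns add "$ns"
    $run ip link set cvl_0_0 netns "$ns"                          # target side
    $run ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator IP
    $run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
    $run ip link set cvl_0_1 up
    $run ip netns exec "$ns" ip link set cvl_0_0 up
    $run ip netns exec "$ns" ip link set lo up
    $run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    $run ping -c 1 10.0.0.2                                       # reachability check
}
nvmf_tcp_init_sketch
```

The two ping checks in the log (10.0.0.2 from the root namespace, 10.0.0.1 from inside it) confirm both directions of this topology before the target is started.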
00:11:53.822 [2024-11-26 20:52:44.489342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:11:53.822 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:53.822 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0
00:11:53.822 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:11:53.822 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable
00:11:53.822 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:11:53.822 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:11:53.822 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:11:53.822 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:53.822 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:11:53.822 [2024-11-26 20:52:44.648003] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:11:53.822 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:53.822 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:11:53.822 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:53.822 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:11:53.822 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:53.822 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:53.822 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:53.822 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:11:53.822 [2024-11-26 20:52:44.664265] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:53.822 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:53.822 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:11:53.822 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:53.822 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:11:53.822 NULL1
00:11:53.822 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:53.822 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine
00:11:53.822 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:53.822 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:11:53.822 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:53.822 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:11:53.822 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:53.822 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:11:53.822 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:53.822 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:11:53.822 [2024-11-26 20:52:44.711433] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization...
00:11:53.822 [2024-11-26 20:52:44.711476] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3936755 ]
00:11:54.388 Attached to nqn.2016-06.io.spdk:cnode1
00:11:54.388 Namespace ID: 1 size: 1GB
00:11:54.388 [fused_ordering(0) through fused_ordering(27): one sequential counter line per fused command pair, all timestamped 00:11:54.388]
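The rpc_cmd calls traced by fused_ordering.sh@15 through @20 above amount to a short target-side provisioning sequence: TCP transport, subsystem, listener, a null backing bdev, then the namespace. A runnable sketch follows; here `echo rpc.py` stands in for SPDK's real JSON-RPC client so the sequence can be inspected anywhere, and in a live run `rpc` would instead point at scripts/rpc.py talking to the nvmf_tgt on /var/tmp/spdk.sock.

```shell
#!/usr/bin/env bash
# The RPC sequence from fused_ordering.sh@15-@20, as traced in this log.
# "echo rpc.py" is a stand-in: each call just prints what would be sent.
fused_ordering_target_setup() {
    local rpc="echo rpc.py"
    $rpc nvmf_create_transport -t tcp -o -u 8192    # TCP transport, options as in the test
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512            # 1000 MB null bdev, 512 B blocks -> "size: 1GB" in the log
    $rpc bdev_wait_for_examine
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
}
fused_ordering_target_setup
```

Note the ordering: the namespace is only attached after bdev_wait_for_examine, matching the point in the log where the fused_ordering initiator then attaches and reports "Namespace ID: 1 size: 1GB".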
[fused_ordering(28) through fused_ordering(755): one sequential counter line per fused command pair, emitted in bursts timestamped 00:11:54.388-389, 00:11:54.647-648, 00:11:55.214-215 and 00:11:55.780-781]
00:11:55.781 fused_ordering(756) 00:11:55.781 fused_ordering(757) 00:11:55.781 fused_ordering(758) 00:11:55.781 fused_ordering(759) 00:11:55.781 fused_ordering(760) 00:11:55.781 fused_ordering(761) 00:11:55.781 fused_ordering(762) 00:11:55.781 fused_ordering(763) 00:11:55.781 fused_ordering(764) 00:11:55.781 fused_ordering(765) 00:11:55.781 fused_ordering(766) 00:11:55.781 fused_ordering(767) 00:11:55.781 fused_ordering(768) 00:11:55.781 fused_ordering(769) 00:11:55.781 fused_ordering(770) 00:11:55.781 fused_ordering(771) 00:11:55.781 fused_ordering(772) 00:11:55.781 fused_ordering(773) 00:11:55.781 fused_ordering(774) 00:11:55.781 fused_ordering(775) 00:11:55.781 fused_ordering(776) 00:11:55.781 fused_ordering(777) 00:11:55.781 fused_ordering(778) 00:11:55.781 fused_ordering(779) 00:11:55.781 fused_ordering(780) 00:11:55.781 fused_ordering(781) 00:11:55.781 fused_ordering(782) 00:11:55.781 fused_ordering(783) 00:11:55.781 fused_ordering(784) 00:11:55.781 fused_ordering(785) 00:11:55.781 fused_ordering(786) 00:11:55.781 fused_ordering(787) 00:11:55.781 fused_ordering(788) 00:11:55.781 fused_ordering(789) 00:11:55.781 fused_ordering(790) 00:11:55.781 fused_ordering(791) 00:11:55.781 fused_ordering(792) 00:11:55.781 fused_ordering(793) 00:11:55.781 fused_ordering(794) 00:11:55.781 fused_ordering(795) 00:11:55.781 fused_ordering(796) 00:11:55.781 fused_ordering(797) 00:11:55.781 fused_ordering(798) 00:11:55.781 fused_ordering(799) 00:11:55.781 fused_ordering(800) 00:11:55.781 fused_ordering(801) 00:11:55.781 fused_ordering(802) 00:11:55.781 fused_ordering(803) 00:11:55.781 fused_ordering(804) 00:11:55.781 fused_ordering(805) 00:11:55.781 fused_ordering(806) 00:11:55.781 fused_ordering(807) 00:11:55.781 fused_ordering(808) 00:11:55.781 fused_ordering(809) 00:11:55.781 fused_ordering(810) 00:11:55.781 fused_ordering(811) 00:11:55.781 fused_ordering(812) 00:11:55.781 fused_ordering(813) 00:11:55.781 fused_ordering(814) 00:11:55.781 fused_ordering(815) 00:11:55.781 
fused_ordering(816) 00:11:55.781 fused_ordering(817) 00:11:55.781 fused_ordering(818) 00:11:55.781 fused_ordering(819) 00:11:55.781 fused_ordering(820) 00:11:56.714 fused_ordering(821) 00:11:56.714 fused_ordering(822) 00:11:56.714 fused_ordering(823) 00:11:56.714 fused_ordering(824) 00:11:56.714 fused_ordering(825) 00:11:56.714 fused_ordering(826) 00:11:56.714 fused_ordering(827) 00:11:56.714 fused_ordering(828) 00:11:56.714 fused_ordering(829) 00:11:56.714 fused_ordering(830) 00:11:56.714 fused_ordering(831) 00:11:56.714 fused_ordering(832) 00:11:56.714 fused_ordering(833) 00:11:56.714 fused_ordering(834) 00:11:56.714 fused_ordering(835) 00:11:56.714 fused_ordering(836) 00:11:56.714 fused_ordering(837) 00:11:56.714 fused_ordering(838) 00:11:56.714 fused_ordering(839) 00:11:56.714 fused_ordering(840) 00:11:56.714 fused_ordering(841) 00:11:56.714 fused_ordering(842) 00:11:56.714 fused_ordering(843) 00:11:56.714 fused_ordering(844) 00:11:56.714 fused_ordering(845) 00:11:56.714 fused_ordering(846) 00:11:56.714 fused_ordering(847) 00:11:56.714 fused_ordering(848) 00:11:56.714 fused_ordering(849) 00:11:56.714 fused_ordering(850) 00:11:56.714 fused_ordering(851) 00:11:56.714 fused_ordering(852) 00:11:56.714 fused_ordering(853) 00:11:56.714 fused_ordering(854) 00:11:56.714 fused_ordering(855) 00:11:56.714 fused_ordering(856) 00:11:56.714 fused_ordering(857) 00:11:56.714 fused_ordering(858) 00:11:56.714 fused_ordering(859) 00:11:56.714 fused_ordering(860) 00:11:56.714 fused_ordering(861) 00:11:56.714 fused_ordering(862) 00:11:56.714 fused_ordering(863) 00:11:56.714 fused_ordering(864) 00:11:56.714 fused_ordering(865) 00:11:56.714 fused_ordering(866) 00:11:56.714 fused_ordering(867) 00:11:56.714 fused_ordering(868) 00:11:56.714 fused_ordering(869) 00:11:56.714 fused_ordering(870) 00:11:56.714 fused_ordering(871) 00:11:56.714 fused_ordering(872) 00:11:56.714 fused_ordering(873) 00:11:56.714 fused_ordering(874) 00:11:56.714 fused_ordering(875) 00:11:56.714 fused_ordering(876) 
00:11:56.714 fused_ordering(877) 00:11:56.714 fused_ordering(878) 00:11:56.714 fused_ordering(879) 00:11:56.714 fused_ordering(880) 00:11:56.714 fused_ordering(881) 00:11:56.714 fused_ordering(882) 00:11:56.715 fused_ordering(883) 00:11:56.715 fused_ordering(884) 00:11:56.715 fused_ordering(885) 00:11:56.715 fused_ordering(886) 00:11:56.715 fused_ordering(887) 00:11:56.715 fused_ordering(888) 00:11:56.715 fused_ordering(889) 00:11:56.715 fused_ordering(890) 00:11:56.715 fused_ordering(891) 00:11:56.715 fused_ordering(892) 00:11:56.715 fused_ordering(893) 00:11:56.715 fused_ordering(894) 00:11:56.715 fused_ordering(895) 00:11:56.715 fused_ordering(896) 00:11:56.715 fused_ordering(897) 00:11:56.715 fused_ordering(898) 00:11:56.715 fused_ordering(899) 00:11:56.715 fused_ordering(900) 00:11:56.715 fused_ordering(901) 00:11:56.715 fused_ordering(902) 00:11:56.715 fused_ordering(903) 00:11:56.715 fused_ordering(904) 00:11:56.715 fused_ordering(905) 00:11:56.715 fused_ordering(906) 00:11:56.715 fused_ordering(907) 00:11:56.715 fused_ordering(908) 00:11:56.715 fused_ordering(909) 00:11:56.715 fused_ordering(910) 00:11:56.715 fused_ordering(911) 00:11:56.715 fused_ordering(912) 00:11:56.715 fused_ordering(913) 00:11:56.715 fused_ordering(914) 00:11:56.715 fused_ordering(915) 00:11:56.715 fused_ordering(916) 00:11:56.715 fused_ordering(917) 00:11:56.715 fused_ordering(918) 00:11:56.715 fused_ordering(919) 00:11:56.715 fused_ordering(920) 00:11:56.715 fused_ordering(921) 00:11:56.715 fused_ordering(922) 00:11:56.715 fused_ordering(923) 00:11:56.715 fused_ordering(924) 00:11:56.715 fused_ordering(925) 00:11:56.715 fused_ordering(926) 00:11:56.715 fused_ordering(927) 00:11:56.715 fused_ordering(928) 00:11:56.715 fused_ordering(929) 00:11:56.715 fused_ordering(930) 00:11:56.715 fused_ordering(931) 00:11:56.715 fused_ordering(932) 00:11:56.715 fused_ordering(933) 00:11:56.715 fused_ordering(934) 00:11:56.715 fused_ordering(935) 00:11:56.715 fused_ordering(936) 00:11:56.715 
fused_ordering(937) 00:11:56.715 fused_ordering(938) 00:11:56.715 fused_ordering(939) 00:11:56.715 fused_ordering(940) 00:11:56.715 fused_ordering(941) 00:11:56.715 fused_ordering(942) 00:11:56.715 fused_ordering(943) 00:11:56.715 fused_ordering(944) 00:11:56.715 fused_ordering(945) 00:11:56.715 fused_ordering(946) 00:11:56.715 fused_ordering(947) 00:11:56.715 fused_ordering(948) 00:11:56.715 fused_ordering(949) 00:11:56.715 fused_ordering(950) 00:11:56.715 fused_ordering(951) 00:11:56.715 fused_ordering(952) 00:11:56.715 fused_ordering(953) 00:11:56.715 fused_ordering(954) 00:11:56.715 fused_ordering(955) 00:11:56.715 fused_ordering(956) 00:11:56.715 fused_ordering(957) 00:11:56.715 fused_ordering(958) 00:11:56.715 fused_ordering(959) 00:11:56.715 fused_ordering(960) 00:11:56.715 fused_ordering(961) 00:11:56.715 fused_ordering(962) 00:11:56.715 fused_ordering(963) 00:11:56.715 fused_ordering(964) 00:11:56.715 fused_ordering(965) 00:11:56.715 fused_ordering(966) 00:11:56.715 fused_ordering(967) 00:11:56.715 fused_ordering(968) 00:11:56.715 fused_ordering(969) 00:11:56.715 fused_ordering(970) 00:11:56.715 fused_ordering(971) 00:11:56.715 fused_ordering(972) 00:11:56.715 fused_ordering(973) 00:11:56.715 fused_ordering(974) 00:11:56.715 fused_ordering(975) 00:11:56.715 fused_ordering(976) 00:11:56.715 fused_ordering(977) 00:11:56.715 fused_ordering(978) 00:11:56.715 fused_ordering(979) 00:11:56.715 fused_ordering(980) 00:11:56.715 fused_ordering(981) 00:11:56.715 fused_ordering(982) 00:11:56.715 fused_ordering(983) 00:11:56.715 fused_ordering(984) 00:11:56.715 fused_ordering(985) 00:11:56.715 fused_ordering(986) 00:11:56.715 fused_ordering(987) 00:11:56.715 fused_ordering(988) 00:11:56.715 fused_ordering(989) 00:11:56.715 fused_ordering(990) 00:11:56.715 fused_ordering(991) 00:11:56.715 fused_ordering(992) 00:11:56.715 fused_ordering(993) 00:11:56.715 fused_ordering(994) 00:11:56.715 fused_ordering(995) 00:11:56.715 fused_ordering(996) 00:11:56.715 fused_ordering(997) 
00:11:56.715 fused_ordering(998) 00:11:56.715 fused_ordering(999) 00:11:56.715 fused_ordering(1000) 00:11:56.715 fused_ordering(1001) 00:11:56.715 fused_ordering(1002) 00:11:56.715 fused_ordering(1003) 00:11:56.715 fused_ordering(1004) 00:11:56.715 fused_ordering(1005) 00:11:56.715 fused_ordering(1006) 00:11:56.715 fused_ordering(1007) 00:11:56.715 fused_ordering(1008) 00:11:56.715 fused_ordering(1009) 00:11:56.715 fused_ordering(1010) 00:11:56.715 fused_ordering(1011) 00:11:56.715 fused_ordering(1012) 00:11:56.715 fused_ordering(1013) 00:11:56.715 fused_ordering(1014) 00:11:56.715 fused_ordering(1015) 00:11:56.715 fused_ordering(1016) 00:11:56.715 fused_ordering(1017) 00:11:56.715 fused_ordering(1018) 00:11:56.715 fused_ordering(1019) 00:11:56.715 fused_ordering(1020) 00:11:56.715 fused_ordering(1021) 00:11:56.715 fused_ordering(1022) 00:11:56.715 fused_ordering(1023) 00:11:56.715 20:52:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:11:56.715 20:52:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:11:56.715 20:52:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:56.715 20:52:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:11:56.715 20:52:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:56.715 20:52:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:11:56.715 20:52:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:56.715 20:52:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:56.715 rmmod nvme_tcp 00:11:56.715 rmmod nvme_fabrics 00:11:56.715 rmmod nvme_keyring 00:11:56.715 20:52:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:11:56.715 20:52:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:11:56.715 20:52:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:11:56.715 20:52:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 3936729 ']' 00:11:56.715 20:52:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 3936729 00:11:56.715 20:52:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 3936729 ']' 00:11:56.715 20:52:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 3936729 00:11:56.715 20:52:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:11:56.715 20:52:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:56.715 20:52:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3936729 00:11:56.715 20:52:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:56.715 20:52:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:56.715 20:52:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3936729' 00:11:56.715 killing process with pid 3936729 00:11:56.715 20:52:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 3936729 00:11:56.715 20:52:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 3936729 00:11:56.974 20:52:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:56.974 20:52:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == 
\t\c\p ]] 00:11:56.974 20:52:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:56.974 20:52:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:11:56.974 20:52:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:11:56.974 20:52:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:56.974 20:52:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:11:56.974 20:52:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:56.974 20:52:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:56.974 20:52:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:56.974 20:52:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:56.974 20:52:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:58.877 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:58.877 00:11:58.877 real 0m7.752s 00:11:58.877 user 0m5.316s 00:11:58.877 sys 0m3.390s 00:11:58.877 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:58.877 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:58.877 ************************************ 00:11:58.877 END TEST nvmf_fused_ordering 00:11:58.877 ************************************ 00:11:58.877 20:52:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:11:58.877 20:52:49 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:58.877 20:52:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:58.877 20:52:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:58.877 ************************************ 00:11:58.877 START TEST nvmf_ns_masking 00:11:58.877 ************************************ 00:11:58.877 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:11:59.136 * Looking for test storage... 00:11:59.136 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:59.136 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:59.136 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:11:59.136 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:59.136 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:59.136 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:59.136 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:59.136 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:59.136 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:11:59.136 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:11:59.136 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:11:59.136 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:11:59.136 20:52:49 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:11:59.136 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:11:59.136 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:11:59.136 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:59.136 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:11:59.136 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:11:59.136 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:59.136 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:59.136 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:11:59.136 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:11:59.136 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:59.136 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:11:59.136 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:11:59.136 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:11:59.136 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:11:59.136 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:59.136 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:11:59.136 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:11:59.136 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:59.136 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:59.136 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:11:59.136 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:59.136 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:59.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.136 --rc genhtml_branch_coverage=1 00:11:59.136 --rc genhtml_function_coverage=1 00:11:59.136 --rc genhtml_legend=1 00:11:59.136 --rc geninfo_all_blocks=1 00:11:59.136 --rc geninfo_unexecuted_blocks=1 00:11:59.136 00:11:59.136 ' 00:11:59.136 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:59.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.136 --rc genhtml_branch_coverage=1 00:11:59.136 --rc genhtml_function_coverage=1 00:11:59.136 --rc genhtml_legend=1 00:11:59.136 --rc geninfo_all_blocks=1 00:11:59.136 --rc geninfo_unexecuted_blocks=1 00:11:59.136 00:11:59.136 ' 00:11:59.136 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:59.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.136 --rc genhtml_branch_coverage=1 00:11:59.136 --rc genhtml_function_coverage=1 00:11:59.136 --rc genhtml_legend=1 00:11:59.136 --rc geninfo_all_blocks=1 00:11:59.136 --rc geninfo_unexecuted_blocks=1 00:11:59.136 00:11:59.136 ' 00:11:59.136 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:59.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.136 --rc genhtml_branch_coverage=1 00:11:59.136 --rc 
genhtml_function_coverage=1 00:11:59.136 --rc genhtml_legend=1 00:11:59.136 --rc geninfo_all_blocks=1 00:11:59.136 --rc geninfo_unexecuted_blocks=1 00:11:59.136 00:11:59.136 ' 00:11:59.136 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:59.136 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:11:59.136 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:59.136 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:59.136 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:59.136 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:59.136 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:59.136 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:59.136 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:59.136 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:59.136 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:59.136 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:59.136 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:59.137 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:59.137 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:59.137 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:59.137 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:59.137 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:59.137 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:59.137 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:11:59.137 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:59.137 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:59.137 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:59.137 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.137 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.137 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.137 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:11:59.137 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.137 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:11:59.137 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:59.137 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:59.137 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:59.137 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:59.137 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:59.137 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:59.137 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:59.137 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:59.137 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:59.137 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:59.137 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:59.137 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:11:59.137 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:11:59.137 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:11:59.137 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=04dbf201-6574-45ba-b9df-3a03cd9650d2 00:11:59.137 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:11:59.137 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=858f7caf-af63-49a7-a8cc-627c21dcd930 00:11:59.137 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:11:59.137 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:11:59.137 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:11:59.137 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:11:59.137 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=d4ab93ad-da14-4775-9735-d543ad10a01c 00:11:59.137 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:11:59.137 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:59.137 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:59.137 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:59.137 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:11:59.137 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:59.137 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:59.137 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:59.137 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:59.137 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:59.137 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:59.137 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:11:59.137 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:01.669 20:52:52 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:01.669 20:52:52 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:01.669 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:01.669 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: 
cvl_0_0' 00:12:01.669 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:01.669 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:01.669 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:01.669 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:12:01.669 00:12:01.669 --- 10.0.0.2 ping statistics --- 00:12:01.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:01.669 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:01.669 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:01.669 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:12:01.669 00:12:01.669 --- 10.0.0.1 ping statistics --- 00:12:01.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:01.669 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=3939088 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 3939088 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 3939088 ']' 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:01.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:01.669 [2024-11-26 20:52:52.242775] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:12:01.669 [2024-11-26 20:52:52.242890] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:01.669 [2024-11-26 20:52:52.315289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:01.669 [2024-11-26 20:52:52.372913] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:01.669 [2024-11-26 20:52:52.372998] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:01.669 [2024-11-26 20:52:52.373012] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:01.669 [2024-11-26 20:52:52.373038] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:01.669 [2024-11-26 20:52:52.373048] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:01.669 [2024-11-26 20:52:52.373647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:01.669 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:01.926 [2024-11-26 20:52:52.813379] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:01.926 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:12:01.926 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:12:01.926 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:12:02.491 Malloc1 00:12:02.491 20:52:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:02.748 Malloc2 00:12:02.748 20:52:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:03.005 20:52:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:12:03.262 20:52:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:03.519 [2024-11-26 20:52:54.303315] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:03.519 20:52:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:12:03.519 20:52:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I d4ab93ad-da14-4775-9735-d543ad10a01c -a 10.0.0.2 -s 4420 -i 4 00:12:03.519 20:52:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:12:03.519 20:52:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:12:03.519 20:52:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:03.519 20:52:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:03.519 20:52:54 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:12:06.046 20:52:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:06.046 20:52:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:06.046 20:52:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:06.046 20:52:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:06.046 20:52:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:06.046 20:52:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:12:06.046 20:52:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:06.046 20:52:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:06.046 20:52:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:06.046 20:52:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:06.046 20:52:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:12:06.046 20:52:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:06.046 20:52:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:06.046 [ 0]:0x1 00:12:06.046 20:52:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:06.046 20:52:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:06.046 
20:52:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6ad022a34d0541ec9bf503995a1da1ea 00:12:06.046 20:52:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6ad022a34d0541ec9bf503995a1da1ea != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:06.046 20:52:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:12:06.046 20:52:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:12:06.046 20:52:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:06.046 20:52:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:06.046 [ 0]:0x1 00:12:06.046 20:52:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:06.046 20:52:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:06.305 20:52:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6ad022a34d0541ec9bf503995a1da1ea 00:12:06.305 20:52:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6ad022a34d0541ec9bf503995a1da1ea != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:06.305 20:52:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:12:06.305 20:52:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:06.305 20:52:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:06.305 [ 1]:0x2 00:12:06.305 20:52:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
00:12:06.305 20:52:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:06.305 20:52:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=302c107ec4e946ecb25335f0c0469b2f 00:12:06.305 20:52:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 302c107ec4e946ecb25335f0c0469b2f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:06.305 20:52:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:12:06.305 20:52:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:06.305 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.305 20:52:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:06.563 20:52:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:12:06.821 20:52:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:12:06.821 20:52:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I d4ab93ad-da14-4775-9735-d543ad10a01c -a 10.0.0.2 -s 4420 -i 4 00:12:07.079 20:52:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:12:07.079 20:52:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:12:07.079 20:52:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:07.079 20:52:57 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:12:07.079 20:52:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:12:07.079 20:52:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:12:08.990 20:52:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:08.990 20:52:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:08.990 20:52:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:08.990 20:52:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:08.990 20:52:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:08.990 20:52:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:12:08.990 20:52:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:08.990 20:52:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:09.248 20:52:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:09.248 20:52:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:09.248 20:52:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:12:09.248 20:52:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:09.249 20:52:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 
00:12:09.249 20:52:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:09.249 20:52:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:09.249 20:52:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:12:09.249 20:52:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:09.249 20:52:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:09.249 20:52:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:09.249 20:52:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:09.249 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:09.249 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:09.249 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:09.249 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:09.249 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:09.249 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:09.249 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:09.249 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:09.249 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:12:09.249 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:09.249 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:09.249 [ 0]:0x2 00:12:09.249 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:09.249 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:09.249 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=302c107ec4e946ecb25335f0c0469b2f 00:12:09.249 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 302c107ec4e946ecb25335f0c0469b2f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:09.249 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:09.507 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:12:09.507 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:09.507 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:09.507 [ 0]:0x1 00:12:09.507 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:09.507 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:09.765 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6ad022a34d0541ec9bf503995a1da1ea 00:12:09.765 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6ad022a34d0541ec9bf503995a1da1ea != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:09.765 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:12:09.765 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:09.765 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:09.765 [ 1]:0x2 00:12:09.765 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:09.765 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:09.765 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=302c107ec4e946ecb25335f0c0469b2f 00:12:09.765 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 302c107ec4e946ecb25335f0c0469b2f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:09.765 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:10.024 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:12:10.024 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:10.024 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:12:10.024 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:10.024 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:10.024 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
ns_is_visible 00:12:10.024 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:10.024 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:10.024 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:10.024 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:10.024 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:10.024 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:10.024 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:10.024 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:10.024 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:10.024 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:10.024 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:10.024 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:10.024 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:12:10.024 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:10.024 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:10.024 [ 0]:0x2 00:12:10.024 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:10.024 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:10.024 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=302c107ec4e946ecb25335f0c0469b2f 00:12:10.024 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 302c107ec4e946ecb25335f0c0469b2f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:10.024 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:12:10.024 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:10.024 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.024 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:10.282 20:53:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:12:10.283 20:53:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I d4ab93ad-da14-4775-9735-d543ad10a01c -a 10.0.0.2 -s 4420 -i 4 00:12:10.540 20:53:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:10.540 20:53:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:12:10.540 20:53:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:10.540 20:53:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:12:10.540 20:53:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:12:10.540 20:53:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:12:13.072 20:53:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:13.072 20:53:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:13.072 20:53:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:13.072 20:53:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:12:13.072 20:53:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:13.072 20:53:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:12:13.072 20:53:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:13.072 20:53:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:13.072 20:53:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:13.072 20:53:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:13.072 20:53:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:12:13.072 20:53:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:13.072 20:53:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:13.072 [ 0]:0x1 00:12:13.072 20:53:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:13.072 20:53:03 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:13.072 20:53:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6ad022a34d0541ec9bf503995a1da1ea 00:12:13.072 20:53:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6ad022a34d0541ec9bf503995a1da1ea != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:13.072 20:53:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:12:13.072 20:53:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:13.072 20:53:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:13.072 [ 1]:0x2 00:12:13.072 20:53:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:13.072 20:53:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:13.072 20:53:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=302c107ec4e946ecb25335f0c0469b2f 00:12:13.072 20:53:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 302c107ec4e946ecb25335f0c0469b2f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:13.072 20:53:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:13.330 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:12:13.330 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:13.330 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:12:13.330 
20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:13.330 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:13.330 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:12:13.330 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:13.330 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:13.330 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:13.330 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:13.330 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:13.330 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:13.330 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:13.330 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:13.330 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:13.330 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:13.330 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:13.330 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:13.330 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:12:13.330 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:13.330 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:13.330 [ 0]:0x2 00:12:13.330 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:13.330 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:13.330 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=302c107ec4e946ecb25335f0c0469b2f 00:12:13.330 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 302c107ec4e946ecb25335f0c0469b2f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:13.330 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:13.330 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:13.330 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:13.330 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:13.330 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:13.330 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:13.330 20:53:04 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:13.330 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:13.330 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:13.330 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:13.330 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:13.330 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:13.588 [2024-11-26 20:53:04.370365] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:12:13.588 request: 00:12:13.588 { 00:12:13.588 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:13.588 "nsid": 2, 00:12:13.588 "host": "nqn.2016-06.io.spdk:host1", 00:12:13.588 "method": "nvmf_ns_remove_host", 00:12:13.588 "req_id": 1 00:12:13.588 } 00:12:13.589 Got JSON-RPC error response 00:12:13.589 response: 00:12:13.589 { 00:12:13.589 "code": -32602, 00:12:13.589 "message": "Invalid parameters" 00:12:13.589 } 00:12:13.589 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:13.589 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:13.589 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:13.589 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:13.589 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:12:13.589 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:13.589 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:12:13.589 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:13.589 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:13.589 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:12:13.589 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:13.589 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:13.589 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:13.589 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:13.589 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:13.589 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:13.589 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:13.589 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:13.589 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:13.589 20:53:04 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:13.589 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:13.589 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:13.589 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:12:13.589 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:13.589 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:13.848 [ 0]:0x2 00:12:13.848 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:13.848 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:13.848 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=302c107ec4e946ecb25335f0c0469b2f 00:12:13.848 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 302c107ec4e946ecb25335f0c0469b2f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:13.848 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:12:13.848 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:13.848 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:13.848 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3940707 00:12:13.848 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:12:13.848 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:12:13.848 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3940707 /var/tmp/host.sock 00:12:13.848 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 3940707 ']' 00:12:13.848 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:12:13.848 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:13.848 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:12:13.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:12:13.848 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:13.848 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:13.848 [2024-11-26 20:53:04.747025] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:12:13.848 [2024-11-26 20:53:04.747104] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3940707 ] 00:12:14.107 [2024-11-26 20:53:04.818759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:14.107 [2024-11-26 20:53:04.883283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:14.365 20:53:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:14.365 20:53:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:12:14.365 20:53:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:14.624 20:53:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:14.882 20:53:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 04dbf201-6574-45ba-b9df-3a03cd9650d2 00:12:14.882 20:53:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:14.882 20:53:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 04DBF201657445BAB9DF3A03CD9650D2 -i 00:12:15.140 20:53:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 858f7caf-af63-49a7-a8cc-627c21dcd930 00:12:15.140 20:53:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:15.140 20:53:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 858F7CAFAF6349A7A8CC627C21DCD930 -i 00:12:15.398 20:53:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:15.655 20:53:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:12:16.221 20:53:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:16.221 20:53:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:16.479 nvme0n1 00:12:16.479 20:53:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:16.479 20:53:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:17.072 nvme1n2 00:12:17.072 20:53:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:12:17.072 20:53:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:12:17.072 20:53:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:17.072 20:53:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:12:17.072 20:53:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:12:17.367 20:53:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:12:17.367 20:53:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:12:17.368 20:53:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:12:17.368 20:53:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:12:17.625 20:53:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 04dbf201-6574-45ba-b9df-3a03cd9650d2 == \0\4\d\b\f\2\0\1\-\6\5\7\4\-\4\5\b\a\-\b\9\d\f\-\3\a\0\3\c\d\9\6\5\0\d\2 ]] 00:12:17.625 20:53:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:12:17.625 20:53:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:12:17.625 20:53:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:12:17.883 20:53:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 858f7caf-af63-49a7-a8cc-627c21dcd930 == \8\5\8\f\7\c\a\f\-\a\f\6\3\-\4\9\a\7\-\a\8\c\c\-\6\2\7\c\2\1\d\c\d\9\3\0 ]] 00:12:17.883 20:53:08 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:18.141 20:53:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:18.399 20:53:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 04dbf201-6574-45ba-b9df-3a03cd9650d2 00:12:18.399 20:53:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:18.399 20:53:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 04DBF201657445BAB9DF3A03CD9650D2 00:12:18.399 20:53:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:18.399 20:53:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 04DBF201657445BAB9DF3A03CD9650D2 00:12:18.399 20:53:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:18.399 20:53:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:18.399 20:53:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:18.399 20:53:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:18.399 20:53:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:18.399 20:53:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:18.399 20:53:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:18.399 20:53:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:18.399 20:53:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 04DBF201657445BAB9DF3A03CD9650D2 00:12:18.657 [2024-11-26 20:53:09.437123] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:12:18.657 [2024-11-26 20:53:09.437170] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:12:18.657 [2024-11-26 20:53:09.437187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.657 request: 00:12:18.657 { 00:12:18.657 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:18.657 "namespace": { 00:12:18.657 "bdev_name": "invalid", 00:12:18.657 "nsid": 1, 00:12:18.657 "nguid": "04DBF201657445BAB9DF3A03CD9650D2", 00:12:18.657 "no_auto_visible": false 00:12:18.657 }, 00:12:18.657 "method": "nvmf_subsystem_add_ns", 00:12:18.657 "req_id": 1 00:12:18.657 } 00:12:18.657 Got JSON-RPC error response 00:12:18.657 response: 00:12:18.657 { 00:12:18.657 "code": -32602, 00:12:18.657 "message": "Invalid parameters" 00:12:18.657 } 00:12:18.657 20:53:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:18.657 20:53:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:18.657 20:53:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:18.657 20:53:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:18.657 20:53:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 04dbf201-6574-45ba-b9df-3a03cd9650d2 00:12:18.657 20:53:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:18.657 20:53:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 04DBF201657445BAB9DF3A03CD9650D2 -i 00:12:18.914 20:53:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:12:21.444 20:53:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:12:21.444 20:53:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:12:21.444 20:53:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:21.444 20:53:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:12:21.444 20:53:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 3940707 00:12:21.444 20:53:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 3940707 ']' 00:12:21.444 20:53:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 3940707 00:12:21.444 20:53:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:12:21.444 20:53:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:21.444 20:53:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3940707 00:12:21.444 20:53:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:21.444 20:53:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:21.444 20:53:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3940707' 00:12:21.444 killing process with pid 3940707 00:12:21.444 20:53:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 3940707 00:12:21.444 20:53:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 3940707 00:12:21.703 20:53:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:21.962 20:53:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:12:21.962 20:53:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:12:21.962 20:53:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:21.962 20:53:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:12:21.962 20:53:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:21.962 20:53:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:12:21.962 20:53:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:21.962 20:53:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:21.962 rmmod nvme_tcp 00:12:21.962 rmmod 
nvme_fabrics 00:12:21.962 rmmod nvme_keyring 00:12:21.962 20:53:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:21.962 20:53:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:12:21.962 20:53:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:12:21.962 20:53:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 3939088 ']' 00:12:21.962 20:53:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 3939088 00:12:21.962 20:53:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 3939088 ']' 00:12:21.962 20:53:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 3939088 00:12:21.962 20:53:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:12:21.962 20:53:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:21.962 20:53:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3939088 00:12:22.221 20:53:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:22.221 20:53:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:22.221 20:53:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3939088' 00:12:22.221 killing process with pid 3939088 00:12:22.221 20:53:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 3939088 00:12:22.221 20:53:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 3939088 00:12:22.480 20:53:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:22.480 
20:53:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:22.480 20:53:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:22.480 20:53:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:12:22.480 20:53:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:12:22.480 20:53:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:22.480 20:53:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:12:22.480 20:53:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:22.480 20:53:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:22.480 20:53:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:22.480 20:53:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:22.480 20:53:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:24.382 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:24.382 00:12:24.382 real 0m25.492s 00:12:24.382 user 0m37.085s 00:12:24.382 sys 0m4.599s 00:12:24.382 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:24.382 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:24.382 ************************************ 00:12:24.382 END TEST nvmf_ns_masking 00:12:24.382 ************************************ 00:12:24.382 20:53:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:12:24.382 20:53:15 nvmf_tcp.nvmf_target_extra 
-- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:24.382 20:53:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:24.382 20:53:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:24.382 20:53:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:24.382 ************************************ 00:12:24.382 START TEST nvmf_nvme_cli 00:12:24.382 ************************************ 00:12:24.382 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:24.642 * Looking for test storage... 00:12:24.642 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:24.642 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:24.642 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:12:24.642 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:24.642 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:24.642 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:24.642 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:24.642 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:24.642 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:12:24.642 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:12:24.642 20:53:15 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:12:24.642 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:12:24.642 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:12:24.642 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:12:24.642 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:12:24.642 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:24.642 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:12:24.642 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:12:24.642 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:24.642 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:24.642 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:12:24.642 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:12:24.642 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:24.642 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:12:24.642 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:12:24.642 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:12:24.642 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:12:24.642 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:24.642 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:12:24.642 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:12:24.642 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:24.642 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:24.642 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:12:24.642 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:24.642 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:24.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.642 --rc genhtml_branch_coverage=1 00:12:24.642 --rc genhtml_function_coverage=1 00:12:24.642 --rc genhtml_legend=1 00:12:24.642 --rc geninfo_all_blocks=1 00:12:24.642 --rc geninfo_unexecuted_blocks=1 00:12:24.642 
00:12:24.642 ' 00:12:24.642 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:24.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.642 --rc genhtml_branch_coverage=1 00:12:24.642 --rc genhtml_function_coverage=1 00:12:24.642 --rc genhtml_legend=1 00:12:24.642 --rc geninfo_all_blocks=1 00:12:24.642 --rc geninfo_unexecuted_blocks=1 00:12:24.642 00:12:24.642 ' 00:12:24.642 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:24.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.642 --rc genhtml_branch_coverage=1 00:12:24.642 --rc genhtml_function_coverage=1 00:12:24.642 --rc genhtml_legend=1 00:12:24.642 --rc geninfo_all_blocks=1 00:12:24.642 --rc geninfo_unexecuted_blocks=1 00:12:24.642 00:12:24.642 ' 00:12:24.642 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:24.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.642 --rc genhtml_branch_coverage=1 00:12:24.642 --rc genhtml_function_coverage=1 00:12:24.642 --rc genhtml_legend=1 00:12:24.642 --rc geninfo_all_blocks=1 00:12:24.642 --rc geninfo_unexecuted_blocks=1 00:12:24.642 00:12:24.642 ' 00:12:24.642 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:24.642 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:12:24.642 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:24.642 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:24.643 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:24.643 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:12:24.643 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:24.643 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:24.643 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:24.643 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:24.643 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:24.643 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:24.643 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:24.643 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:24.643 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:24.643 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:24.643 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:24.643 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:24.643 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:24.643 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:12:24.643 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:24.643 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:24.643 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:24.643 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.643 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.643 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.643 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:12:24.643 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.643 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:12:24.643 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:24.643 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:24.643 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:24.643 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:24.643 20:53:15 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:24.643 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:24.643 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:24.643 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:24.643 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:24.643 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:24.643 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:24.643 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:24.643 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:12:24.643 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:12:24.643 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:24.643 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:24.643 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:24.643 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:24.643 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:24.643 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:24.643 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:24.643 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:12:24.643 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:24.643 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:24.643 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:12:24.643 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:12:27.176 20:53:17 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:27.176 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:27.176 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:27.176 20:53:17 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:27.176 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:27.176 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:27.176 20:53:17 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:27.176 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:27.176 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.265 ms 00:12:27.176 00:12:27.176 --- 10.0.0.2 ping statistics --- 00:12:27.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:27.176 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:27.176 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:27.176 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:12:27.176 00:12:27.176 --- 10.0.0.1 ping statistics --- 00:12:27.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:27.176 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:27.176 20:53:17 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=3943649 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 3943649 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 3943649 ']' 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:27.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:27.176 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:27.177 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:27.177 [2024-11-26 20:53:17.782864] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:12:27.177 [2024-11-26 20:53:17.782940] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:27.177 [2024-11-26 20:53:17.862246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:27.177 [2024-11-26 20:53:17.928206] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:27.177 [2024-11-26 20:53:17.928279] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:27.177 [2024-11-26 20:53:17.928296] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:27.177 [2024-11-26 20:53:17.928309] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:27.177 [2024-11-26 20:53:17.928320] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:27.177 [2024-11-26 20:53:17.930026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:27.177 [2024-11-26 20:53:17.930082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:27.177 [2024-11-26 20:53:17.930129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:27.177 [2024-11-26 20:53:17.930132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.177 20:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:27.177 20:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:12:27.177 20:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:27.177 20:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:27.177 20:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:27.177 20:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:27.177 20:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:27.177 20:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.177 20:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:27.177 [2024-11-26 20:53:18.086800] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:27.177 20:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.177 20:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:27.177 20:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:27.177 20:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:27.435 Malloc0 00:12:27.435 20:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.435 20:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:27.435 20:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.435 20:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:27.435 Malloc1 00:12:27.435 20:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.435 20:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:12:27.435 20:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.435 20:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:27.435 20:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.435 20:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:27.435 20:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.435 20:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:27.435 20:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.435 20:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:27.435 20:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.435 20:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:27.435 20:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.435 20:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:27.435 20:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.435 20:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:27.435 [2024-11-26 20:53:18.190565] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:27.435 20:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.435 20:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:27.436 20:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.436 20:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:27.436 20:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.436 20:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:12:27.694 00:12:27.694 Discovery Log Number of Records 2, Generation counter 2 00:12:27.694 =====Discovery Log Entry 0====== 00:12:27.694 trtype: tcp 00:12:27.694 adrfam: ipv4 00:12:27.694 subtype: current discovery subsystem 00:12:27.694 treq: not required 00:12:27.694 portid: 0 00:12:27.694 trsvcid: 4420 
00:12:27.694 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:27.694 traddr: 10.0.0.2 00:12:27.694 eflags: explicit discovery connections, duplicate discovery information 00:12:27.694 sectype: none 00:12:27.694 =====Discovery Log Entry 1====== 00:12:27.694 trtype: tcp 00:12:27.694 adrfam: ipv4 00:12:27.694 subtype: nvme subsystem 00:12:27.694 treq: not required 00:12:27.694 portid: 0 00:12:27.694 trsvcid: 4420 00:12:27.694 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:27.694 traddr: 10.0.0.2 00:12:27.695 eflags: none 00:12:27.695 sectype: none 00:12:27.695 20:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:12:27.695 20:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:12:27.695 20:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:12:27.695 20:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:27.695 20:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:12:27.695 20:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:12:27.695 20:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:27.695 20:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:12:27.695 20:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:27.695 20:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:12:27.695 20:53:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:28.260 20:53:19 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:28.260 20:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:12:28.260 20:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:28.260 20:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:12:28.260 20:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:12:28.260 20:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:12:30.785 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:30.785 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:30.785 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:30.785 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:12:30.785 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:30.785 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:12:30.785 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:12:30.785 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:12:30.785 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:30.785 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:12:30.785 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:12:30.785 
20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:30.785 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:12:30.785 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:30.785 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:30.785 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:12:30.785 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:30.785 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:30.785 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:12:30.785 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:30.785 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:12:30.785 /dev/nvme0n2 ]] 00:12:30.785 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:12:30.785 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:12:30.785 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:12:30.785 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:30.785 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:12:30.785 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:12:30.785 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:30.785 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:12:30.785 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:30.786 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:30.786 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:12:30.786 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:30.786 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:30.786 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:12:30.786 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:30.786 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:12:30.786 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:30.786 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:30.786 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:30.786 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:12:30.786 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:30.786 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:30.786 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:30.786 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:30.786 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
return 0 00:12:30.786 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:12:30.786 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:30.786 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.786 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:30.786 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.786 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:30.786 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:12:30.786 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:30.786 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:12:30.786 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:30.786 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:12:30.786 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:30.786 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:30.786 rmmod nvme_tcp 00:12:30.786 rmmod nvme_fabrics 00:12:30.786 rmmod nvme_keyring 00:12:30.786 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:30.786 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:12:30.786 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:12:30.786 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 3943649 ']' 
00:12:30.786 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 3943649 00:12:30.786 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 3943649 ']' 00:12:30.786 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 3943649 00:12:30.786 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:12:30.786 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:30.786 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3943649 00:12:30.786 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:30.786 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:30.786 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3943649' 00:12:30.786 killing process with pid 3943649 00:12:30.786 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 3943649 00:12:30.786 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 3943649 00:12:31.046 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:31.046 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:31.046 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:31.046 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:12:31.046 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:12:31.046 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v 
SPDK_NVMF 00:12:31.046 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:12:31.046 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:31.046 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:31.046 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:31.046 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:31.046 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:33.582 20:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:33.582 00:12:33.582 real 0m8.656s 00:12:33.582 user 0m16.379s 00:12:33.582 sys 0m2.269s 00:12:33.582 20:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:33.582 20:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:33.582 ************************************ 00:12:33.582 END TEST nvmf_nvme_cli 00:12:33.582 ************************************ 00:12:33.582 20:53:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:12:33.582 20:53:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:33.582 20:53:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:33.582 20:53:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:33.582 20:53:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:33.582 ************************************ 00:12:33.582 
START TEST nvmf_vfio_user 00:12:33.582 ************************************ 00:12:33.582 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:33.582 * Looking for test storage... 00:12:33.582 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:33.582 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:33.582 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:12:33.582 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:33.582 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:33.582 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:33.582 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:33.582 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:33.582 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:12:33.582 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:12:33.582 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:12:33.582 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:12:33.582 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:12:33.582 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:12:33.582 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:12:33.582 20:53:24 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:33.582 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:12:33.582 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:12:33.582 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:33.582 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:33.582 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:12:33.582 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:12:33.582 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:33.582 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:12:33.582 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:12:33.582 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:12:33.583 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:12:33.583 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:33.583 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:12:33.583 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:12:33.583 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:33.583 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:33.583 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:12:33.583 20:53:24 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:33.583 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:33.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:33.583 --rc genhtml_branch_coverage=1 00:12:33.583 --rc genhtml_function_coverage=1 00:12:33.583 --rc genhtml_legend=1 00:12:33.583 --rc geninfo_all_blocks=1 00:12:33.583 --rc geninfo_unexecuted_blocks=1 00:12:33.583 00:12:33.583 ' 00:12:33.583 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:33.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:33.583 --rc genhtml_branch_coverage=1 00:12:33.583 --rc genhtml_function_coverage=1 00:12:33.583 --rc genhtml_legend=1 00:12:33.583 --rc geninfo_all_blocks=1 00:12:33.583 --rc geninfo_unexecuted_blocks=1 00:12:33.583 00:12:33.583 ' 00:12:33.583 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:33.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:33.583 --rc genhtml_branch_coverage=1 00:12:33.583 --rc genhtml_function_coverage=1 00:12:33.583 --rc genhtml_legend=1 00:12:33.583 --rc geninfo_all_blocks=1 00:12:33.583 --rc geninfo_unexecuted_blocks=1 00:12:33.583 00:12:33.583 ' 00:12:33.583 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:33.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:33.583 --rc genhtml_branch_coverage=1 00:12:33.583 --rc genhtml_function_coverage=1 00:12:33.583 --rc genhtml_legend=1 00:12:33.583 --rc geninfo_all_blocks=1 00:12:33.583 --rc geninfo_unexecuted_blocks=1 00:12:33.583 00:12:33.583 ' 00:12:33.583 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:33.583 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:12:33.583 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:33.583 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:33.583 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:33.583 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:33.583 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:33.583 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:33.583 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:33.583 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:33.583 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:33.583 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:33.583 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:33.583 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:33.583 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:33.583 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:33.583 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:33.583 
20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:33.583 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:33.583 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:12:33.583 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:33.583 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:33.583 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:33.583 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.583 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.583 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.583 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:12:33.583 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.583 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:12:33.583 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:33.583 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:33.583 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:33.583 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:33.583 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:33.583 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:33.583 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:33.583 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:33.583 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:33.583 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:33.583 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:33.583 20:53:24 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:33.583 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:12:33.583 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:33.583 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:33.583 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:33.583 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:12:33.583 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:12:33.583 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:12:33.583 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:12:33.583 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3944577 00:12:33.583 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:12:33.583 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3944577' 00:12:33.583 Process pid: 3944577 00:12:33.583 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:33.583 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3944577 00:12:33.583 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' 
-z 3944577 ']' 00:12:33.583 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:33.583 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:33.584 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:33.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:33.584 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:33.584 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:33.584 [2024-11-26 20:53:24.241453] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:12:33.584 [2024-11-26 20:53:24.241540] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:33.584 [2024-11-26 20:53:24.315693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:33.584 [2024-11-26 20:53:24.379508] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:33.584 [2024-11-26 20:53:24.379575] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:33.584 [2024-11-26 20:53:24.379591] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:33.584 [2024-11-26 20:53:24.379604] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:33.584 [2024-11-26 20:53:24.379616] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:33.584 [2024-11-26 20:53:24.381297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:33.584 [2024-11-26 20:53:24.381354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:33.584 [2024-11-26 20:53:24.381405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:33.584 [2024-11-26 20:53:24.381408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.584 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:33.584 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:12:33.584 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:34.956 20:53:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:12:34.956 20:53:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:34.956 20:53:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:34.956 20:53:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:34.956 20:53:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:34.956 20:53:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:35.214 Malloc1 00:12:35.214 20:53:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:35.471 20:53:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:35.728 20:53:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:36.294 20:53:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:36.294 20:53:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:36.294 20:53:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:36.294 Malloc2 00:12:36.551 20:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:36.808 20:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:37.064 20:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:37.323 20:53:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:12:37.323 20:53:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:12:37.323 20:53:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:12:37.323 20:53:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:37.323 20:53:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:12:37.323 20:53:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:37.323 [2024-11-26 20:53:28.057778] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:12:37.323 [2024-11-26 20:53:28.057820] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3945003 ] 00:12:37.323 [2024-11-26 20:53:28.109703] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:12:37.323 [2024-11-26 20:53:28.112200] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:37.323 [2024-11-26 20:53:28.112232] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fdc858a6000 00:12:37.323 [2024-11-26 20:53:28.113189] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:37.323 [2024-11-26 20:53:28.114185] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:37.323 [2024-11-26 20:53:28.115184] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:37.323 [2024-11-26 20:53:28.116192] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:37.323 [2024-11-26 20:53:28.117193] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:37.323 [2024-11-26 20:53:28.118204] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:37.323 [2024-11-26 20:53:28.119206] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:37.323 [2024-11-26 20:53:28.120215] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:37.323 [2024-11-26 20:53:28.121220] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:37.323 [2024-11-26 20:53:28.121240] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fdc8589b000 00:12:37.323 [2024-11-26 20:53:28.122356] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:37.323 [2024-11-26 20:53:28.136373] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:12:37.323 [2024-11-26 20:53:28.136419] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:12:37.323 [2024-11-26 20:53:28.145363] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 
00:12:37.323 [2024-11-26 20:53:28.145419] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:37.323 [2024-11-26 20:53:28.145512] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:12:37.323 [2024-11-26 20:53:28.145544] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:12:37.323 [2024-11-26 20:53:28.145555] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:12:37.323 [2024-11-26 20:53:28.146351] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:12:37.323 [2024-11-26 20:53:28.146378] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:12:37.323 [2024-11-26 20:53:28.146393] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:12:37.323 [2024-11-26 20:53:28.147357] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:37.323 [2024-11-26 20:53:28.147377] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:12:37.323 [2024-11-26 20:53:28.147391] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:12:37.323 [2024-11-26 20:53:28.148364] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:12:37.323 [2024-11-26 20:53:28.148384] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:37.323 [2024-11-26 20:53:28.149381] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:12:37.323 [2024-11-26 20:53:28.149402] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:12:37.323 [2024-11-26 20:53:28.149416] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:12:37.323 [2024-11-26 20:53:28.149428] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:37.323 [2024-11-26 20:53:28.149538] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:12:37.323 [2024-11-26 20:53:28.149546] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:37.323 [2024-11-26 20:53:28.149555] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:12:37.323 [2024-11-26 20:53:28.150388] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:12:37.323 [2024-11-26 20:53:28.151382] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:12:37.323 [2024-11-26 20:53:28.152388] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 
00:12:37.323 [2024-11-26 20:53:28.153398] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:37.323 [2024-11-26 20:53:28.153496] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:37.323 [2024-11-26 20:53:28.154406] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:12:37.323 [2024-11-26 20:53:28.154425] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:37.323 [2024-11-26 20:53:28.154434] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:12:37.323 [2024-11-26 20:53:28.154458] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:12:37.323 [2024-11-26 20:53:28.154477] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:12:37.323 [2024-11-26 20:53:28.154512] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:37.323 [2024-11-26 20:53:28.154522] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:37.323 [2024-11-26 20:53:28.154529] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:37.323 [2024-11-26 20:53:28.154548] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:37.323 [2024-11-26 20:53:28.154597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:37.323 [2024-11-26 20:53:28.154616] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:12:37.323 [2024-11-26 20:53:28.154625] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:12:37.323 [2024-11-26 20:53:28.154631] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:12:37.323 [2024-11-26 20:53:28.154640] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:37.323 [2024-11-26 20:53:28.154648] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:12:37.323 [2024-11-26 20:53:28.154660] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:12:37.323 [2024-11-26 20:53:28.154683] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:12:37.323 [2024-11-26 20:53:28.154705] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:12:37.323 [2024-11-26 20:53:28.154721] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:37.323 [2024-11-26 20:53:28.154751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:37.323 [2024-11-26 20:53:28.154770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:37.323 [2024-11-26 
20:53:28.154783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:37.323 [2024-11-26 20:53:28.154795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:37.323 [2024-11-26 20:53:28.154807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:37.323 [2024-11-26 20:53:28.154816] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:12:37.323 [2024-11-26 20:53:28.154833] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:37.323 [2024-11-26 20:53:28.154848] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:37.323 [2024-11-26 20:53:28.154860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:37.323 [2024-11-26 20:53:28.154872] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:12:37.323 [2024-11-26 20:53:28.154882] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:37.324 [2024-11-26 20:53:28.154897] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:12:37.324 [2024-11-26 20:53:28.154910] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait 
for set number of queues (timeout 30000 ms) 00:12:37.324 [2024-11-26 20:53:28.154923] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:37.324 [2024-11-26 20:53:28.154938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:37.324 [2024-11-26 20:53:28.155023] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:12:37.324 [2024-11-26 20:53:28.155042] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:12:37.324 [2024-11-26 20:53:28.155071] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:37.324 [2024-11-26 20:53:28.155080] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:37.324 [2024-11-26 20:53:28.155087] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:37.324 [2024-11-26 20:53:28.155096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:37.324 [2024-11-26 20:53:28.155112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:37.324 [2024-11-26 20:53:28.155135] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:12:37.324 [2024-11-26 20:53:28.155156] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:12:37.324 [2024-11-26 20:53:28.155172] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:12:37.324 [2024-11-26 20:53:28.155184] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:37.324 [2024-11-26 20:53:28.155192] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:37.324 [2024-11-26 20:53:28.155198] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:37.324 [2024-11-26 20:53:28.155213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:37.324 [2024-11-26 20:53:28.155238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:37.324 [2024-11-26 20:53:28.155257] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:37.324 [2024-11-26 20:53:28.155270] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:37.324 [2024-11-26 20:53:28.155283] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:37.324 [2024-11-26 20:53:28.155291] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:37.324 [2024-11-26 20:53:28.155297] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:37.324 [2024-11-26 20:53:28.155306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:37.324 [2024-11-26 20:53:28.155317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:37.324 [2024-11-26 20:53:28.155335] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:37.324 [2024-11-26 20:53:28.155348] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:12:37.324 [2024-11-26 20:53:28.155361] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:12:37.324 [2024-11-26 20:53:28.155372] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:12:37.324 [2024-11-26 20:53:28.155380] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:37.324 [2024-11-26 20:53:28.155389] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:12:37.324 [2024-11-26 20:53:28.155397] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:12:37.324 [2024-11-26 20:53:28.155405] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:12:37.324 [2024-11-26 20:53:28.155413] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:12:37.324 [2024-11-26 20:53:28.155445] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:37.324 [2024-11-26 20:53:28.155464] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:37.324 [2024-11-26 20:53:28.155484] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:37.324 [2024-11-26 20:53:28.155495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:37.324 [2024-11-26 20:53:28.155511] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:37.324 [2024-11-26 20:53:28.155528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:37.324 [2024-11-26 20:53:28.155544] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:37.324 [2024-11-26 20:53:28.155555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:37.324 [2024-11-26 20:53:28.155588] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:37.324 [2024-11-26 20:53:28.155598] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:37.324 [2024-11-26 20:53:28.155604] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:37.324 [2024-11-26 20:53:28.155610] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:37.324 [2024-11-26 20:53:28.155615] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:12:37.324 [2024-11-26 20:53:28.155624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 
0x2000002f7000 00:12:37.324 [2024-11-26 20:53:28.155636] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:37.324 [2024-11-26 20:53:28.155644] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:37.324 [2024-11-26 20:53:28.155650] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:37.324 [2024-11-26 20:53:28.155658] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:37.324 [2024-11-26 20:53:28.155678] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:37.324 [2024-11-26 20:53:28.155711] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:37.324 [2024-11-26 20:53:28.155718] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:37.324 [2024-11-26 20:53:28.155728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:37.324 [2024-11-26 20:53:28.155741] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:37.324 [2024-11-26 20:53:28.155749] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:37.324 [2024-11-26 20:53:28.155755] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:37.324 [2024-11-26 20:53:28.155763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:37.324 [2024-11-26 20:53:28.155775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0010 p:1 m:0 dnr:0 00:12:37.324 [2024-11-26 20:53:28.155796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:37.324 [2024-11-26 20:53:28.155814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:37.324 [2024-11-26 20:53:28.155829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:37.324 ===================================================== 00:12:37.324 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:37.324 ===================================================== 00:12:37.324 Controller Capabilities/Features 00:12:37.324 ================================ 00:12:37.324 Vendor ID: 4e58 00:12:37.324 Subsystem Vendor ID: 4e58 00:12:37.324 Serial Number: SPDK1 00:12:37.324 Model Number: SPDK bdev Controller 00:12:37.324 Firmware Version: 25.01 00:12:37.324 Recommended Arb Burst: 6 00:12:37.324 IEEE OUI Identifier: 8d 6b 50 00:12:37.324 Multi-path I/O 00:12:37.324 May have multiple subsystem ports: Yes 00:12:37.324 May have multiple controllers: Yes 00:12:37.324 Associated with SR-IOV VF: No 00:12:37.324 Max Data Transfer Size: 131072 00:12:37.324 Max Number of Namespaces: 32 00:12:37.324 Max Number of I/O Queues: 127 00:12:37.324 NVMe Specification Version (VS): 1.3 00:12:37.324 NVMe Specification Version (Identify): 1.3 00:12:37.324 Maximum Queue Entries: 256 00:12:37.324 Contiguous Queues Required: Yes 00:12:37.324 Arbitration Mechanisms Supported 00:12:37.324 Weighted Round Robin: Not Supported 00:12:37.324 Vendor Specific: Not Supported 00:12:37.324 Reset Timeout: 15000 ms 00:12:37.324 Doorbell Stride: 4 bytes 00:12:37.324 NVM Subsystem Reset: Not Supported 00:12:37.324 Command Sets Supported 00:12:37.324 NVM Command Set: Supported 00:12:37.324 Boot Partition: Not Supported 00:12:37.324 Memory 
Page Size Minimum: 4096 bytes 00:12:37.324 Memory Page Size Maximum: 4096 bytes 00:12:37.324 Persistent Memory Region: Not Supported 00:12:37.324 Optional Asynchronous Events Supported 00:12:37.324 Namespace Attribute Notices: Supported 00:12:37.324 Firmware Activation Notices: Not Supported 00:12:37.324 ANA Change Notices: Not Supported 00:12:37.324 PLE Aggregate Log Change Notices: Not Supported 00:12:37.324 LBA Status Info Alert Notices: Not Supported 00:12:37.324 EGE Aggregate Log Change Notices: Not Supported 00:12:37.324 Normal NVM Subsystem Shutdown event: Not Supported 00:12:37.324 Zone Descriptor Change Notices: Not Supported 00:12:37.324 Discovery Log Change Notices: Not Supported 00:12:37.324 Controller Attributes 00:12:37.324 128-bit Host Identifier: Supported 00:12:37.324 Non-Operational Permissive Mode: Not Supported 00:12:37.324 NVM Sets: Not Supported 00:12:37.324 Read Recovery Levels: Not Supported 00:12:37.324 Endurance Groups: Not Supported 00:12:37.324 Predictable Latency Mode: Not Supported 00:12:37.324 Traffic Based Keep ALive: Not Supported 00:12:37.324 Namespace Granularity: Not Supported 00:12:37.324 SQ Associations: Not Supported 00:12:37.324 UUID List: Not Supported 00:12:37.324 Multi-Domain Subsystem: Not Supported 00:12:37.324 Fixed Capacity Management: Not Supported 00:12:37.324 Variable Capacity Management: Not Supported 00:12:37.324 Delete Endurance Group: Not Supported 00:12:37.324 Delete NVM Set: Not Supported 00:12:37.324 Extended LBA Formats Supported: Not Supported 00:12:37.324 Flexible Data Placement Supported: Not Supported 00:12:37.324 00:12:37.324 Controller Memory Buffer Support 00:12:37.324 ================================ 00:12:37.324 Supported: No 00:12:37.324 00:12:37.324 Persistent Memory Region Support 00:12:37.324 ================================ 00:12:37.324 Supported: No 00:12:37.324 00:12:37.324 Admin Command Set Attributes 00:12:37.324 ============================ 00:12:37.324 Security Send/Receive: Not Supported 
00:12:37.324 Format NVM: Not Supported 00:12:37.324 Firmware Activate/Download: Not Supported 00:12:37.324 Namespace Management: Not Supported 00:12:37.324 Device Self-Test: Not Supported 00:12:37.324 Directives: Not Supported 00:12:37.324 NVMe-MI: Not Supported 00:12:37.324 Virtualization Management: Not Supported 00:12:37.324 Doorbell Buffer Config: Not Supported 00:12:37.324 Get LBA Status Capability: Not Supported 00:12:37.324 Command & Feature Lockdown Capability: Not Supported 00:12:37.324 Abort Command Limit: 4 00:12:37.324 Async Event Request Limit: 4 00:12:37.324 Number of Firmware Slots: N/A 00:12:37.324 Firmware Slot 1 Read-Only: N/A 00:12:37.324 Firmware Activation Without Reset: N/A 00:12:37.324 Multiple Update Detection Support: N/A 00:12:37.324 Firmware Update Granularity: No Information Provided 00:12:37.324 Per-Namespace SMART Log: No 00:12:37.324 Asymmetric Namespace Access Log Page: Not Supported 00:12:37.324 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:12:37.324 Command Effects Log Page: Supported 00:12:37.324 Get Log Page Extended Data: Supported 00:12:37.324 Telemetry Log Pages: Not Supported 00:12:37.324 Persistent Event Log Pages: Not Supported 00:12:37.324 Supported Log Pages Log Page: May Support 00:12:37.324 Commands Supported & Effects Log Page: Not Supported 00:12:37.324 Feature Identifiers & Effects Log Page:May Support 00:12:37.324 NVMe-MI Commands & Effects Log Page: May Support 00:12:37.324 Data Area 4 for Telemetry Log: Not Supported 00:12:37.324 Error Log Page Entries Supported: 128 00:12:37.324 Keep Alive: Supported 00:12:37.324 Keep Alive Granularity: 10000 ms 00:12:37.324 00:12:37.324 NVM Command Set Attributes 00:12:37.324 ========================== 00:12:37.324 Submission Queue Entry Size 00:12:37.324 Max: 64 00:12:37.324 Min: 64 00:12:37.324 Completion Queue Entry Size 00:12:37.324 Max: 16 00:12:37.324 Min: 16 00:12:37.324 Number of Namespaces: 32 00:12:37.324 Compare Command: Supported 00:12:37.324 Write Uncorrectable 
Command: Not Supported 00:12:37.324 Dataset Management Command: Supported 00:12:37.324 Write Zeroes Command: Supported 00:12:37.324 Set Features Save Field: Not Supported 00:12:37.324 Reservations: Not Supported 00:12:37.324 Timestamp: Not Supported 00:12:37.324 Copy: Supported 00:12:37.324 Volatile Write Cache: Present 00:12:37.324 Atomic Write Unit (Normal): 1 00:12:37.324 Atomic Write Unit (PFail): 1 00:12:37.324 Atomic Compare & Write Unit: 1 00:12:37.324 Fused Compare & Write: Supported 00:12:37.324 Scatter-Gather List 00:12:37.324 SGL Command Set: Supported (Dword aligned) 00:12:37.324 SGL Keyed: Not Supported 00:12:37.324 SGL Bit Bucket Descriptor: Not Supported 00:12:37.324 SGL Metadata Pointer: Not Supported 00:12:37.324 Oversized SGL: Not Supported 00:12:37.324 SGL Metadata Address: Not Supported 00:12:37.324 SGL Offset: Not Supported 00:12:37.324 Transport SGL Data Block: Not Supported 00:12:37.324 Replay Protected Memory Block: Not Supported 00:12:37.324 00:12:37.324 Firmware Slot Information 00:12:37.324 ========================= 00:12:37.324 Active slot: 1 00:12:37.324 Slot 1 Firmware Revision: 25.01 00:12:37.324 00:12:37.324 00:12:37.324 Commands Supported and Effects 00:12:37.324 ============================== 00:12:37.324 Admin Commands 00:12:37.324 -------------- 00:12:37.324 Get Log Page (02h): Supported 00:12:37.324 Identify (06h): Supported 00:12:37.324 Abort (08h): Supported 00:12:37.324 Set Features (09h): Supported 00:12:37.324 Get Features (0Ah): Supported 00:12:37.324 Asynchronous Event Request (0Ch): Supported 00:12:37.324 Keep Alive (18h): Supported 00:12:37.324 I/O Commands 00:12:37.324 ------------ 00:12:37.324 Flush (00h): Supported LBA-Change 00:12:37.324 Write (01h): Supported LBA-Change 00:12:37.324 Read (02h): Supported 00:12:37.324 Compare (05h): Supported 00:12:37.324 Write Zeroes (08h): Supported LBA-Change 00:12:37.324 Dataset Management (09h): Supported LBA-Change 00:12:37.324 Copy (19h): Supported LBA-Change 00:12:37.324 
00:12:37.324 Error Log 00:12:37.324 ========= 00:12:37.325 00:12:37.325 Arbitration 00:12:37.325 =========== 00:12:37.325 Arbitration Burst: 1 00:12:37.325 00:12:37.325 Power Management 00:12:37.325 ================ 00:12:37.325 Number of Power States: 1 00:12:37.325 Current Power State: Power State #0 00:12:37.325 Power State #0: 00:12:37.325 Max Power: 0.00 W 00:12:37.325 Non-Operational State: Operational 00:12:37.325 Entry Latency: Not Reported 00:12:37.325 Exit Latency: Not Reported 00:12:37.325 Relative Read Throughput: 0 00:12:37.325 Relative Read Latency: 0 00:12:37.325 Relative Write Throughput: 0 00:12:37.325 Relative Write Latency: 0 00:12:37.325 Idle Power: Not Reported 00:12:37.325 Active Power: Not Reported 00:12:37.325 Non-Operational Permissive Mode: Not Supported 00:12:37.325 00:12:37.325 Health Information 00:12:37.325 ================== 00:12:37.325 Critical Warnings: 00:12:37.325 Available Spare Space: OK 00:12:37.325 Temperature: OK 00:12:37.325 Device Reliability: OK 00:12:37.325 Read Only: No 00:12:37.325 Volatile Memory Backup: OK 00:12:37.325 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:37.325 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:37.325 Available Spare: 0% 00:12:37.325 Available Sp[2024-11-26 20:53:28.155955] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:37.325 [2024-11-26 20:53:28.155973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:37.325 [2024-11-26 20:53:28.156043] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:12:37.325 [2024-11-26 20:53:28.156069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:37.325 [2024-11-26 20:53:28.156080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:37.325 [2024-11-26 20:53:28.156089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:37.325 [2024-11-26 20:53:28.156098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:37.325 [2024-11-26 20:53:28.156420] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:37.325 [2024-11-26 20:53:28.156441] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:12:37.325 [2024-11-26 20:53:28.157413] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:37.325 [2024-11-26 20:53:28.157489] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:12:37.325 [2024-11-26 20:53:28.157504] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:12:37.325 [2024-11-26 20:53:28.158426] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:12:37.325 [2024-11-26 20:53:28.158461] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:12:37.325 [2024-11-26 20:53:28.158533] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:12:37.325 [2024-11-26 20:53:28.161700] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:37.325 are Threshold: 0% 00:12:37.325 Life Percentage Used: 0% 
00:12:37.325 Data Units Read: 0 00:12:37.325 Data Units Written: 0 00:12:37.325 Host Read Commands: 0 00:12:37.325 Host Write Commands: 0 00:12:37.325 Controller Busy Time: 0 minutes 00:12:37.325 Power Cycles: 0 00:12:37.325 Power On Hours: 0 hours 00:12:37.325 Unsafe Shutdowns: 0 00:12:37.325 Unrecoverable Media Errors: 0 00:12:37.325 Lifetime Error Log Entries: 0 00:12:37.325 Warning Temperature Time: 0 minutes 00:12:37.325 Critical Temperature Time: 0 minutes 00:12:37.325 00:12:37.325 Number of Queues 00:12:37.325 ================ 00:12:37.325 Number of I/O Submission Queues: 127 00:12:37.325 Number of I/O Completion Queues: 127 00:12:37.325 00:12:37.325 Active Namespaces 00:12:37.325 ================= 00:12:37.325 Namespace ID:1 00:12:37.325 Error Recovery Timeout: Unlimited 00:12:37.325 Command Set Identifier: NVM (00h) 00:12:37.325 Deallocate: Supported 00:12:37.325 Deallocated/Unwritten Error: Not Supported 00:12:37.325 Deallocated Read Value: Unknown 00:12:37.325 Deallocate in Write Zeroes: Not Supported 00:12:37.325 Deallocated Guard Field: 0xFFFF 00:12:37.325 Flush: Supported 00:12:37.325 Reservation: Supported 00:12:37.325 Namespace Sharing Capabilities: Multiple Controllers 00:12:37.325 Size (in LBAs): 131072 (0GiB) 00:12:37.325 Capacity (in LBAs): 131072 (0GiB) 00:12:37.325 Utilization (in LBAs): 131072 (0GiB) 00:12:37.325 NGUID: E33593F8AE0449D39DB0BE04F4B52EA8 00:12:37.325 UUID: e33593f8-ae04-49d3-9db0-be04f4b52ea8 00:12:37.325 Thin Provisioning: Not Supported 00:12:37.325 Per-NS Atomic Units: Yes 00:12:37.325 Atomic Boundary Size (Normal): 0 00:12:37.325 Atomic Boundary Size (PFail): 0 00:12:37.325 Atomic Boundary Offset: 0 00:12:37.325 Maximum Single Source Range Length: 65535 00:12:37.325 Maximum Copy Length: 65535 00:12:37.325 Maximum Source Range Count: 1 00:12:37.325 NGUID/EUI64 Never Reused: No 00:12:37.325 Namespace Write Protected: No 00:12:37.325 Number of LBA Formats: 1 00:12:37.325 Current LBA Format: LBA Format #00 00:12:37.325 LBA 
Format #00: Data Size: 512 Metadata Size: 0 00:12:37.325 00:12:37.325 20:53:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:37.582 [2024-11-26 20:53:28.414005] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:42.842 Initializing NVMe Controllers 00:12:42.842 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:42.842 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:42.842 Initialization complete. Launching workers. 00:12:42.842 ======================================================== 00:12:42.842 Latency(us) 00:12:42.842 Device Information : IOPS MiB/s Average min max 00:12:42.842 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 31292.13 122.23 4090.09 1204.76 8365.75 00:12:42.842 ======================================================== 00:12:42.842 Total : 31292.13 122.23 4090.09 1204.76 8365.75 00:12:42.842 00:12:42.842 [2024-11-26 20:53:33.433238] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:42.842 20:53:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:42.842 [2024-11-26 20:53:33.689428] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:48.107 Initializing NVMe Controllers 00:12:48.107 Attached to NVMe over Fabrics controller at 
/var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:48.107 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:48.107 Initialization complete. Launching workers. 00:12:48.107 ======================================================== 00:12:48.107 Latency(us) 00:12:48.107 Device Information : IOPS MiB/s Average min max 00:12:48.107 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16017.13 62.57 7996.67 5986.24 15827.31 00:12:48.107 ======================================================== 00:12:48.107 Total : 16017.13 62.57 7996.67 5986.24 15827.31 00:12:48.107 00:12:48.107 [2024-11-26 20:53:38.728056] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:48.107 20:53:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:48.107 [2024-11-26 20:53:38.974202] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:53.371 [2024-11-26 20:53:44.039025] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:53.371 Initializing NVMe Controllers 00:12:53.371 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:53.371 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:53.371 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:12:53.371 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:12:53.371 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:12:53.371 Initialization complete. 
Launching workers. 00:12:53.371 Starting thread on core 2 00:12:53.371 Starting thread on core 3 00:12:53.371 Starting thread on core 1 00:12:53.371 20:53:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:12:53.629 [2024-11-26 20:53:44.358142] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:56.911 [2024-11-26 20:53:47.422999] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:56.911 Initializing NVMe Controllers 00:12:56.911 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:56.911 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:56.911 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:12:56.911 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:12:56.911 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:12:56.911 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:12:56.911 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:56.911 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:56.911 Initialization complete. Launching workers. 
00:12:56.911 Starting thread on core 1 with urgent priority queue 00:12:56.911 Starting thread on core 2 with urgent priority queue 00:12:56.911 Starting thread on core 3 with urgent priority queue 00:12:56.911 Starting thread on core 0 with urgent priority queue 00:12:56.911 SPDK bdev Controller (SPDK1 ) core 0: 5356.33 IO/s 18.67 secs/100000 ios 00:12:56.911 SPDK bdev Controller (SPDK1 ) core 1: 4995.33 IO/s 20.02 secs/100000 ios 00:12:56.911 SPDK bdev Controller (SPDK1 ) core 2: 4111.67 IO/s 24.32 secs/100000 ios 00:12:56.911 SPDK bdev Controller (SPDK1 ) core 3: 4806.33 IO/s 20.81 secs/100000 ios 00:12:56.911 ======================================================== 00:12:56.911 00:12:56.911 20:53:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:56.911 [2024-11-26 20:53:47.743287] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:56.911 Initializing NVMe Controllers 00:12:56.911 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:56.911 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:56.911 Namespace ID: 1 size: 0GB 00:12:56.911 Initialization complete. 00:12:56.911 INFO: using host memory buffer for IO 00:12:56.911 Hello world! 
00:12:56.911 [2024-11-26 20:53:47.777954] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:56.911 20:53:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:57.169 [2024-11-26 20:53:48.091159] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:58.538 Initializing NVMe Controllers 00:12:58.538 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:58.538 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:58.538 Initialization complete. Launching workers. 00:12:58.538 submit (in ns) avg, min, max = 7643.4, 3514.4, 4017868.9 00:12:58.538 complete (in ns) avg, min, max = 25869.4, 2075.6, 4997121.1 00:12:58.538 00:12:58.538 Submit histogram 00:12:58.538 ================ 00:12:58.538 Range in us Cumulative Count 00:12:58.538 3.508 - 3.532: 0.3229% ( 42) 00:12:58.538 3.532 - 3.556: 1.0763% ( 98) 00:12:58.538 3.556 - 3.579: 3.8361% ( 359) 00:12:58.538 3.579 - 3.603: 8.9791% ( 669) 00:12:58.538 3.603 - 3.627: 17.4277% ( 1099) 00:12:58.538 3.627 - 3.650: 27.6907% ( 1335) 00:12:58.538 3.650 - 3.674: 37.7153% ( 1304) 00:12:58.538 3.674 - 3.698: 45.2875% ( 985) 00:12:58.538 3.698 - 3.721: 51.3761% ( 792) 00:12:58.538 3.721 - 3.745: 55.7196% ( 565) 00:12:58.538 3.745 - 3.769: 59.7248% ( 521) 00:12:58.538 3.769 - 3.793: 63.6454% ( 510) 00:12:58.538 3.793 - 3.816: 66.9819% ( 434) 00:12:58.538 3.816 - 3.840: 70.5796% ( 468) 00:12:58.538 3.840 - 3.864: 74.7617% ( 544) 00:12:58.538 3.864 - 3.887: 79.1282% ( 568) 00:12:58.539 3.887 - 3.911: 82.9028% ( 491) 00:12:58.539 3.911 - 3.935: 85.9702% ( 399) 00:12:58.539 3.935 - 3.959: 87.8613% ( 246) 00:12:58.539 3.959 - 3.982: 89.3373% ( 192) 00:12:58.539 3.982 - 4.006: 90.7903% ( 
189) 00:12:58.539 4.006 - 4.030: 92.0818% ( 168) 00:12:58.539 4.030 - 4.053: 93.1504% ( 139) 00:12:58.539 4.053 - 4.077: 93.9114% ( 99) 00:12:58.539 4.077 - 4.101: 94.7724% ( 112) 00:12:58.539 4.101 - 4.124: 95.5028% ( 95) 00:12:58.539 4.124 - 4.148: 95.9486% ( 58) 00:12:58.539 4.148 - 4.172: 96.3330% ( 50) 00:12:58.539 4.172 - 4.196: 96.5560% ( 29) 00:12:58.539 4.196 - 4.219: 96.6559% ( 13) 00:12:58.539 4.219 - 4.243: 96.7635% ( 14) 00:12:58.539 4.243 - 4.267: 96.8712% ( 14) 00:12:58.539 4.267 - 4.290: 97.0018% ( 17) 00:12:58.539 4.290 - 4.314: 97.0941% ( 12) 00:12:58.539 4.314 - 4.338: 97.1863% ( 12) 00:12:58.539 4.338 - 4.361: 97.2402% ( 7) 00:12:58.539 4.361 - 4.385: 97.2555% ( 2) 00:12:58.539 4.385 - 4.409: 97.3247% ( 9) 00:12:58.539 4.409 - 4.433: 97.3555% ( 4) 00:12:58.539 4.433 - 4.456: 97.3708% ( 2) 00:12:58.539 4.456 - 4.480: 97.3939% ( 3) 00:12:58.539 4.480 - 4.504: 97.4093% ( 2) 00:12:58.539 4.504 - 4.527: 97.4170% ( 1) 00:12:58.539 4.527 - 4.551: 97.4247% ( 1) 00:12:58.539 4.551 - 4.575: 97.4477% ( 3) 00:12:58.539 4.575 - 4.599: 97.4785% ( 4) 00:12:58.539 4.599 - 4.622: 97.5015% ( 3) 00:12:58.539 4.622 - 4.646: 97.5554% ( 7) 00:12:58.539 4.646 - 4.670: 97.5938% ( 5) 00:12:58.539 4.670 - 4.693: 97.6092% ( 2) 00:12:58.539 4.693 - 4.717: 97.6476% ( 5) 00:12:58.539 4.717 - 4.741: 97.7168% ( 9) 00:12:58.539 4.741 - 4.764: 97.7706% ( 7) 00:12:58.539 4.764 - 4.788: 97.8090% ( 5) 00:12:58.539 4.788 - 4.812: 97.8398% ( 4) 00:12:58.539 4.812 - 4.836: 97.8705% ( 4) 00:12:58.539 4.836 - 4.859: 97.8936% ( 3) 00:12:58.539 4.859 - 4.883: 97.9167% ( 3) 00:12:58.539 4.883 - 4.907: 97.9551% ( 5) 00:12:58.539 4.907 - 4.930: 98.0012% ( 6) 00:12:58.539 4.930 - 4.954: 98.0320% ( 4) 00:12:58.539 4.954 - 4.978: 98.0550% ( 3) 00:12:58.539 4.978 - 5.001: 98.0704% ( 2) 00:12:58.539 5.001 - 5.025: 98.1012% ( 4) 00:12:58.539 5.025 - 5.049: 98.1165% ( 2) 00:12:58.539 5.049 - 5.073: 98.1319% ( 2) 00:12:58.539 5.073 - 5.096: 98.1627% ( 4) 00:12:58.539 5.096 - 5.120: 98.1780% ( 2) 
00:12:58.539 5.191 - 5.215: 98.1857% ( 1) 00:12:58.539 5.215 - 5.239: 98.1934% ( 1) 00:12:58.539 5.262 - 5.286: 98.2011% ( 1) 00:12:58.539 5.404 - 5.428: 98.2088% ( 1) 00:12:58.539 5.428 - 5.452: 98.2165% ( 1) 00:12:58.539 5.499 - 5.523: 98.2242% ( 1) 00:12:58.539 5.594 - 5.618: 98.2319% ( 1) 00:12:58.539 5.950 - 5.973: 98.2395% ( 1) 00:12:58.539 5.973 - 5.997: 98.2472% ( 1) 00:12:58.539 6.400 - 6.447: 98.2549% ( 1) 00:12:58.539 6.542 - 6.590: 98.2626% ( 1) 00:12:58.539 6.684 - 6.732: 98.2703% ( 1) 00:12:58.539 6.732 - 6.779: 98.2780% ( 1) 00:12:58.539 6.779 - 6.827: 98.2857% ( 1) 00:12:58.539 6.827 - 6.874: 98.3010% ( 2) 00:12:58.539 6.874 - 6.921: 98.3087% ( 1) 00:12:58.539 6.921 - 6.969: 98.3241% ( 2) 00:12:58.539 6.969 - 7.016: 98.3395% ( 2) 00:12:58.539 7.064 - 7.111: 98.3472% ( 1) 00:12:58.539 7.111 - 7.159: 98.3625% ( 2) 00:12:58.539 7.396 - 7.443: 98.3779% ( 2) 00:12:58.539 7.443 - 7.490: 98.3856% ( 1) 00:12:58.539 7.585 - 7.633: 98.4087% ( 3) 00:12:58.539 7.633 - 7.680: 98.4240% ( 2) 00:12:58.539 7.680 - 7.727: 98.4317% ( 1) 00:12:58.539 7.727 - 7.775: 98.4471% ( 2) 00:12:58.539 7.775 - 7.822: 98.4702% ( 3) 00:12:58.539 7.822 - 7.870: 98.4779% ( 1) 00:12:58.539 7.917 - 7.964: 98.4855% ( 1) 00:12:58.539 7.964 - 8.012: 98.4932% ( 1) 00:12:58.539 8.012 - 8.059: 98.5009% ( 1) 00:12:58.539 8.059 - 8.107: 98.5163% ( 2) 00:12:58.539 8.154 - 8.201: 98.5317% ( 2) 00:12:58.539 8.201 - 8.249: 98.5394% ( 1) 00:12:58.539 8.296 - 8.344: 98.5470% ( 1) 00:12:58.539 8.344 - 8.391: 98.5547% ( 1) 00:12:58.539 8.391 - 8.439: 98.5624% ( 1) 00:12:58.539 8.439 - 8.486: 98.5701% ( 1) 00:12:58.539 8.533 - 8.581: 98.5778% ( 1) 00:12:58.539 8.581 - 8.628: 98.5855% ( 1) 00:12:58.539 8.676 - 8.723: 98.5932% ( 1) 00:12:58.539 8.723 - 8.770: 98.6085% ( 2) 00:12:58.539 8.913 - 8.960: 98.6162% ( 1) 00:12:58.539 9.007 - 9.055: 98.6316% ( 2) 00:12:58.539 9.055 - 9.102: 98.6393% ( 1) 00:12:58.539 9.150 - 9.197: 98.6470% ( 1) 00:12:58.539 9.434 - 9.481: 98.6547% ( 1) 00:12:58.539 9.908 - 
9.956: 98.6624% ( 1) 00:12:58.539 10.050 - 10.098: 98.6700% ( 1) 00:12:58.539 10.098 - 10.145: 98.6777% ( 1) 00:12:58.539 10.619 - 10.667: 98.6854% ( 1) 00:12:58.539 10.714 - 10.761: 98.6931% ( 1) 00:12:58.539 10.904 - 10.951: 98.7008% ( 1) 00:12:58.539 10.951 - 10.999: 98.7162% ( 2) 00:12:58.539 10.999 - 11.046: 98.7239% ( 1) 00:12:58.539 11.473 - 11.520: 98.7315% ( 1) 00:12:58.539 11.520 - 11.567: 98.7392% ( 1) 00:12:58.539 11.662 - 11.710: 98.7469% ( 1) 00:12:58.539 11.757 - 11.804: 98.7546% ( 1) 00:12:58.539 11.852 - 11.899: 98.7623% ( 1) 00:12:58.539 11.899 - 11.947: 98.7700% ( 1) 00:12:58.539 12.089 - 12.136: 98.7777% ( 1) 00:12:58.539 12.326 - 12.421: 98.7931% ( 2) 00:12:58.539 12.705 - 12.800: 98.8007% ( 1) 00:12:58.539 12.990 - 13.084: 98.8084% ( 1) 00:12:58.539 13.084 - 13.179: 98.8161% ( 1) 00:12:58.539 13.369 - 13.464: 98.8238% ( 1) 00:12:58.539 13.559 - 13.653: 98.8315% ( 1) 00:12:58.539 13.843 - 13.938: 98.8469% ( 2) 00:12:58.539 14.696 - 14.791: 98.8546% ( 1) 00:12:58.539 14.886 - 14.981: 98.8699% ( 2) 00:12:58.539 15.076 - 15.170: 98.8776% ( 1) 00:12:58.539 16.972 - 17.067: 98.8853% ( 1) 00:12:58.539 17.067 - 17.161: 98.9007% ( 2) 00:12:58.539 17.256 - 17.351: 98.9084% ( 1) 00:12:58.539 17.351 - 17.446: 98.9237% ( 2) 00:12:58.539 17.446 - 17.541: 98.9699% ( 6) 00:12:58.539 17.541 - 17.636: 98.9852% ( 2) 00:12:58.539 17.636 - 17.730: 99.0160% ( 4) 00:12:58.539 17.730 - 17.825: 99.0621% ( 6) 00:12:58.539 17.920 - 18.015: 99.1159% ( 7) 00:12:58.539 18.015 - 18.110: 99.1697% ( 7) 00:12:58.539 18.110 - 18.204: 99.2389% ( 9) 00:12:58.539 18.204 - 18.299: 99.3619% ( 16) 00:12:58.539 18.299 - 18.394: 99.4542% ( 12) 00:12:58.539 18.394 - 18.489: 99.5464% ( 12) 00:12:58.539 18.489 - 18.584: 99.6002% ( 7) 00:12:58.539 18.584 - 18.679: 99.6541% ( 7) 00:12:58.539 18.679 - 18.773: 99.6848% ( 4) 00:12:58.539 18.773 - 18.868: 99.7156% ( 4) 00:12:58.539 18.868 - 18.963: 99.7617% ( 6) 00:12:58.539 18.963 - 19.058: 99.7771% ( 2) 00:12:58.539 19.058 - 19.153: 99.7924% 
( 2) 00:12:58.539 19.153 - 19.247: 99.8001% ( 1) 00:12:58.539 19.247 - 19.342: 99.8232% ( 3) 00:12:58.539 19.342 - 19.437: 99.8309% ( 1) 00:12:58.539 19.437 - 19.532: 99.8386% ( 1) 00:12:58.539 19.532 - 19.627: 99.8539% ( 2) 00:12:58.539 19.911 - 20.006: 99.8693% ( 2) 00:12:58.539 20.670 - 20.764: 99.8770% ( 1) 00:12:58.539 21.049 - 21.144: 99.8847% ( 1) 00:12:58.539 21.428 - 21.523: 99.8924% ( 1) 00:12:58.539 22.376 - 22.471: 99.9001% ( 1) 00:12:58.539 23.230 - 23.324: 99.9077% ( 1) 00:12:58.539 3980.705 - 4004.978: 99.9692% ( 8) 00:12:58.539 4004.978 - 4029.250: 100.0000% ( 4) 00:12:58.539 00:12:58.539 Complete histogram 00:12:58.539 ================== 00:12:58.539 Range in us Cumulative Count 00:12:58.539 2.074 - 2.086: 6.5652% ( 854) 00:12:58.539 2.086 - 2.098: 37.4231% ( 4014) 00:12:58.539 2.098 - 2.110: 41.9280% ( 586) 00:12:58.539 2.110 - 2.121: 50.5535% ( 1122) 00:12:58.539 2.121 - 2.133: 59.0867% ( 1110) 00:12:58.539 2.133 - 2.145: 60.4013% ( 171) 00:12:58.539 2.145 - 2.157: 67.5431% ( 929) 00:12:58.539 2.157 - 2.169: 75.8072% ( 1075) 00:12:58.539 2.169 - 2.181: 76.9757% ( 152) 00:12:58.539 2.181 - 2.193: 79.4588% ( 323) 00:12:58.539 2.193 - 2.204: 81.3961% ( 252) 00:12:58.539 2.204 - 2.216: 81.9111% ( 67) 00:12:58.539 2.216 - 2.228: 84.4250% ( 327) 00:12:58.539 2.228 - 2.240: 88.2303% ( 495) 00:12:58.539 2.240 - 2.252: 90.6365% ( 313) 00:12:58.539 2.252 - 2.264: 92.2817% ( 214) 00:12:58.539 2.264 - 2.276: 93.2580% ( 127) 00:12:58.539 2.276 - 2.287: 93.5194% ( 34) 00:12:58.539 2.287 - 2.299: 93.9960% ( 62) 00:12:58.539 2.299 - 2.311: 94.3650% ( 48) 00:12:58.539 2.311 - 2.323: 95.1030% ( 96) 00:12:58.539 2.323 - 2.335: 95.4413% ( 44) 00:12:58.539 2.335 - 2.347: 95.5105% ( 9) 00:12:58.539 2.347 - 2.359: 95.5643% ( 7) 00:12:58.539 2.359 - 2.370: 95.6488% ( 11) 00:12:58.540 2.370 - 2.382: 95.8180% ( 22) 00:12:58.540 2.382 - 2.394: 96.1716% ( 46) 00:12:58.540 2.394 - 2.406: 96.5944% ( 55) 00:12:58.540 2.406 - 2.418: 96.8712% ( 36) 00:12:58.540 2.418 - 2.430: 
97.1095% ( 31) 00:12:58.540 2.430 - 2.441: 97.2863% ( 23) 00:12:58.540 2.441 - 2.453: 97.4554% ( 22) 00:12:58.540 2.453 - 2.465: 97.6245% ( 22) 00:12:58.540 2.465 - 2.477: 97.8167% ( 25) 00:12:58.540 2.477 - 2.489: 97.9705% ( 20) 00:12:58.540 2.489 - 2.501: 98.0627% ( 12) 00:12:58.540 2.501 - 2.513: 98.1550% ( 12) 00:12:58.540 2.513 - 2.524: 98.2242% ( 9) 00:12:58.540 2.524 - 2.536: 98.2703% ( 6) 00:12:58.540 2.536 - 2.548: 98.3010% ( 4) 00:12:58.540 2.548 - 2.560: 98.3472% ( 6) 00:12:58.540 2.560 - 2.572: 98.3779% ( 4) 00:12:58.540 2.572 - 2.584: 98.4010% ( 3) 00:12:58.540 2.584 - 2.596: 98.4164% ( 2) 00:12:58.540 2.596 - 2.607: 98.4240% ( 1) 00:12:58.540 2.607 - 2.619: 98.4317% ( 1) 00:12:58.540 2.631 - 2.643: 98.4471% ( 2) 00:12:58.540 2.655 - 2.667: 98.4548% ( 1) 00:12:58.540 2.690 - 2.702: 98.4625% ( 1) 00:12:58.540 2.797 - 2.809: 98.4702% ( 1) 00:12:58.540 2.821 - 2.833: 98.4779% ( 1) 00:12:58.540 2.833 - 2.844: 98.4855% ( 1) 00:12:58.540 2.844 - 2.856: 98.4932% ( 1) 00:12:58.540 2.939 - 2.951: 98.5009% ( 1) 00:12:58.540 3.153 - 3.176: 98.5086% ( 1) 00:12:58.540 3.224 - 3.247: 98.5163% ( 1) 00:12:58.540 3.295 - 3.319: 98.5240% ( 1) 00:12:58.540 3.366 - 3.390: 98.5317% ( 1) 00:12:58.540 3.390 - 3.413: 98.5470% ( 2) 00:12:58.540 3.413 - 3.437: 98.5624% ( 2) 00:12:58.540 3.437 - 3.461: 98.5701% ( 1) 00:12:58.540 3.484 - 3.508: 98.5778% ( 1) 00:12:58.540 3.532 - 3.556: 98.5932% ( 2) 00:12:58.540 3.556 - 3.579: 98.6009% ( 1) 00:12:58.540 3.627 - 3.650: 98.6085% ( 1) 00:12:58.540 3.674 - 3.698: 98.6239% ( 2) 00:12:58.540 3.959 - 3.982: 98.6316% ( 1) 00:12:58.540 4.030 - 4.053: 98.6470% ( 2) 00:12:58.540 4.053 - 4.077: 98.6624% ( 2) 00:12:58.540 4.243 - 4.267: 98.6700% ( 1) 00:12:58.540 5.144 - 5.167: 98.6777% ( 1) 00:12:58.540 5.262 - 5.286: 98.6854% ( 1) 00:12:58.540 5.333 - 5.357: 98.6931% ( 1) 00:12:58.540 5.357 - 5.381: 98.7008% ( 1) 00:12:58.540 5.404 - 5.428: 98.7085% ( 1) 00:12:58.540 5.476 - 5.499: 98.7162% ( 1) 00:12:58.540 5.689 - 5.713: 98.7239% ( 1) 
[2024-11-26 20:53:49.113437] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:58.540 5.760 - 5.784: 98.7315% ( 1) 00:12:58.540 5.784 - 5.807: 98.7469% ( 2) 00:12:58.540 5.807 - 5.831: 98.7546% ( 1) 00:12:58.540 5.902 - 5.926: 98.7623% ( 1) 00:12:58.540 5.950 - 5.973: 98.7700% ( 1) 00:12:58.540 5.997 - 6.021: 98.7777% ( 1) 00:12:58.540 6.068 - 6.116: 98.7854% ( 1) 00:12:58.540 6.305 - 6.353: 98.7931% ( 1) 00:12:58.540 6.400 - 6.447: 98.8007% ( 1) 00:12:58.540 6.447 - 6.495: 98.8084% ( 1) 00:12:58.540 6.590 - 6.637: 98.8161% ( 1) 00:12:58.540 6.684 - 6.732: 98.8238% ( 1) 00:12:58.540 6.827 - 6.874: 98.8315% ( 1) 00:12:58.540 7.253 - 7.301: 98.8392% ( 1) 00:12:58.540 8.201 - 8.249: 98.8469% ( 1) 00:12:58.540 8.533 - 8.581: 98.8546% ( 1) 00:12:58.540 11.899 - 11.947: 98.8622% ( 1) 00:12:58.540 15.644 - 15.739: 98.8776% ( 2) 00:12:58.540 15.739 - 15.834: 98.8853% ( 1) 00:12:58.540 15.834 - 15.929: 98.9007% ( 2) 00:12:58.540 15.929 - 16.024: 98.9391% ( 5) 00:12:58.540 16.024 - 16.119: 98.9852% ( 6) 00:12:58.540 16.119 - 16.213: 99.0083% ( 3) 00:12:58.540 16.213 - 16.308: 99.0467% ( 5) 00:12:58.540 16.308 - 16.403: 99.0698% ( 3) 00:12:58.540 16.403 - 16.498: 99.0852% ( 2) 00:12:58.540 16.498 - 16.593: 99.1082% ( 3) 00:12:58.540 16.593 - 16.687: 99.1697% ( 8) 00:12:58.540 16.687 - 16.782: 99.2005% ( 4) 00:12:58.540 16.782 - 16.877: 99.2082% ( 1) 00:12:58.540 16.877 - 16.972: 99.2543% ( 6) 00:12:58.540 16.972 - 17.067: 99.2697% ( 2) 00:12:58.540 17.067 - 17.161: 99.2774% ( 1) 00:12:58.540 17.161 - 17.256: 99.2927% ( 2) 00:12:58.540 17.256 - 17.351: 99.3004% ( 1) 00:12:58.540 17.351 - 17.446: 99.3158% ( 2) 00:12:58.540 17.541 - 17.636: 99.3312% ( 2) 00:12:58.540 17.636 - 17.730: 99.3389% ( 1) 00:12:58.540 17.730 - 17.825: 99.3542% ( 2) 00:12:58.540 18.015 - 18.110: 99.3696% ( 2) 00:12:58.540 18.110 - 18.204: 99.3927% ( 3) 00:12:58.540 18.584 - 18.679: 99.4004% ( 1) 00:12:58.540 18.868 - 18.963: 99.4081%
( 1) 00:12:58.540 3082.619 - 3094.756: 99.4157% ( 1) 00:12:58.540 3252.527 - 3276.800: 99.4234% ( 1) 00:12:58.540 3980.705 - 4004.978: 99.8386% ( 54) 00:12:58.540 4004.978 - 4029.250: 99.9923% ( 20) 00:12:58.540 4975.881 - 5000.154: 100.0000% ( 1) 00:12:58.540 00:12:58.540 20:53:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:12:58.540 20:53:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:58.540 20:53:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:12:58.540 20:53:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:12:58.540 20:53:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:58.540 [ 00:12:58.540 { 00:12:58.540 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:58.540 "subtype": "Discovery", 00:12:58.540 "listen_addresses": [], 00:12:58.540 "allow_any_host": true, 00:12:58.540 "hosts": [] 00:12:58.540 }, 00:12:58.540 { 00:12:58.540 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:58.540 "subtype": "NVMe", 00:12:58.540 "listen_addresses": [ 00:12:58.540 { 00:12:58.540 "trtype": "VFIOUSER", 00:12:58.540 "adrfam": "IPv4", 00:12:58.540 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:58.540 "trsvcid": "0" 00:12:58.540 } 00:12:58.540 ], 00:12:58.540 "allow_any_host": true, 00:12:58.540 "hosts": [], 00:12:58.540 "serial_number": "SPDK1", 00:12:58.540 "model_number": "SPDK bdev Controller", 00:12:58.540 "max_namespaces": 32, 00:12:58.540 "min_cntlid": 1, 00:12:58.540 "max_cntlid": 65519, 00:12:58.540 "namespaces": [ 00:12:58.540 { 00:12:58.540 "nsid": 1, 00:12:58.540 "bdev_name": "Malloc1", 00:12:58.540 
"name": "Malloc1", 00:12:58.540 "nguid": "E33593F8AE0449D39DB0BE04F4B52EA8", 00:12:58.540 "uuid": "e33593f8-ae04-49d3-9db0-be04f4b52ea8" 00:12:58.540 } 00:12:58.540 ] 00:12:58.540 }, 00:12:58.540 { 00:12:58.540 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:58.540 "subtype": "NVMe", 00:12:58.540 "listen_addresses": [ 00:12:58.540 { 00:12:58.540 "trtype": "VFIOUSER", 00:12:58.540 "adrfam": "IPv4", 00:12:58.540 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:58.540 "trsvcid": "0" 00:12:58.540 } 00:12:58.540 ], 00:12:58.540 "allow_any_host": true, 00:12:58.540 "hosts": [], 00:12:58.540 "serial_number": "SPDK2", 00:12:58.540 "model_number": "SPDK bdev Controller", 00:12:58.540 "max_namespaces": 32, 00:12:58.540 "min_cntlid": 1, 00:12:58.540 "max_cntlid": 65519, 00:12:58.540 "namespaces": [ 00:12:58.540 { 00:12:58.540 "nsid": 1, 00:12:58.540 "bdev_name": "Malloc2", 00:12:58.540 "name": "Malloc2", 00:12:58.540 "nguid": "EAA88A5FF01E4297987A7B05F96FA2F8", 00:12:58.540 "uuid": "eaa88a5f-f01e-4297-987a-7b05f96fa2f8" 00:12:58.540 } 00:12:58.540 ] 00:12:58.540 } 00:12:58.540 ] 00:12:58.797 20:53:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:58.797 20:53:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3947520 00:12:58.797 20:53:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:12:58.797 20:53:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:58.797 20:53:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:12:58.797 20:53:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:12:58.797 20:53:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:58.797 20:53:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:12:58.797 20:53:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:58.797 20:53:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:12:58.797 [2024-11-26 20:53:49.658179] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:59.055 Malloc3 00:12:59.055 20:53:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:12:59.313 [2024-11-26 20:53:50.060134] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:59.313 20:53:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:59.313 Asynchronous Event Request test 00:12:59.313 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:59.313 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:59.313 Registering asynchronous event callbacks... 00:12:59.313 Starting namespace attribute notice tests for all controllers... 00:12:59.313 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:59.313 aer_cb - Changed Namespace 00:12:59.313 Cleaning up... 
00:12:59.572 [ 00:12:59.572 { 00:12:59.572 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:59.572 "subtype": "Discovery", 00:12:59.572 "listen_addresses": [], 00:12:59.572 "allow_any_host": true, 00:12:59.572 "hosts": [] 00:12:59.572 }, 00:12:59.572 { 00:12:59.572 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:59.572 "subtype": "NVMe", 00:12:59.572 "listen_addresses": [ 00:12:59.572 { 00:12:59.572 "trtype": "VFIOUSER", 00:12:59.572 "adrfam": "IPv4", 00:12:59.572 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:59.572 "trsvcid": "0" 00:12:59.572 } 00:12:59.572 ], 00:12:59.572 "allow_any_host": true, 00:12:59.572 "hosts": [], 00:12:59.572 "serial_number": "SPDK1", 00:12:59.572 "model_number": "SPDK bdev Controller", 00:12:59.572 "max_namespaces": 32, 00:12:59.572 "min_cntlid": 1, 00:12:59.572 "max_cntlid": 65519, 00:12:59.572 "namespaces": [ 00:12:59.572 { 00:12:59.572 "nsid": 1, 00:12:59.572 "bdev_name": "Malloc1", 00:12:59.572 "name": "Malloc1", 00:12:59.572 "nguid": "E33593F8AE0449D39DB0BE04F4B52EA8", 00:12:59.572 "uuid": "e33593f8-ae04-49d3-9db0-be04f4b52ea8" 00:12:59.572 }, 00:12:59.572 { 00:12:59.572 "nsid": 2, 00:12:59.572 "bdev_name": "Malloc3", 00:12:59.572 "name": "Malloc3", 00:12:59.572 "nguid": "01504B1F14904D948F033CB40CC656E0", 00:12:59.572 "uuid": "01504b1f-1490-4d94-8f03-3cb40cc656e0" 00:12:59.572 } 00:12:59.572 ] 00:12:59.572 }, 00:12:59.572 { 00:12:59.572 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:59.572 "subtype": "NVMe", 00:12:59.572 "listen_addresses": [ 00:12:59.572 { 00:12:59.573 "trtype": "VFIOUSER", 00:12:59.573 "adrfam": "IPv4", 00:12:59.573 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:59.573 "trsvcid": "0" 00:12:59.573 } 00:12:59.573 ], 00:12:59.573 "allow_any_host": true, 00:12:59.573 "hosts": [], 00:12:59.573 "serial_number": "SPDK2", 00:12:59.573 "model_number": "SPDK bdev Controller", 00:12:59.573 "max_namespaces": 32, 00:12:59.573 "min_cntlid": 1, 00:12:59.573 "max_cntlid": 65519, 00:12:59.573 "namespaces": [ 
00:12:59.573 { 00:12:59.573 "nsid": 1, 00:12:59.573 "bdev_name": "Malloc2", 00:12:59.573 "name": "Malloc2", 00:12:59.573 "nguid": "EAA88A5FF01E4297987A7B05F96FA2F8", 00:12:59.573 "uuid": "eaa88a5f-f01e-4297-987a-7b05f96fa2f8" 00:12:59.573 } 00:12:59.573 ] 00:12:59.573 } 00:12:59.573 ] 00:12:59.573 20:53:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3947520 00:12:59.573 20:53:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:59.573 20:53:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:59.573 20:53:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:12:59.573 20:53:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:59.573 [2024-11-26 20:53:50.366440] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:12:59.573 [2024-11-26 20:53:50.366482] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3947653 ] 00:12:59.573 [2024-11-26 20:53:50.416425] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:12:59.573 [2024-11-26 20:53:50.425020] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:59.573 [2024-11-26 20:53:50.425069] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f6fbb6b2000 00:12:59.573 [2024-11-26 20:53:50.426012] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:59.573 [2024-11-26 20:53:50.427018] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:59.573 [2024-11-26 20:53:50.428020] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:59.573 [2024-11-26 20:53:50.429029] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:59.573 [2024-11-26 20:53:50.430036] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:59.573 [2024-11-26 20:53:50.431037] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:59.573 [2024-11-26 20:53:50.432043] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:59.573 
[2024-11-26 20:53:50.433053] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:59.573 [2024-11-26 20:53:50.434067] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:59.573 [2024-11-26 20:53:50.434088] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f6fbb6a7000 00:12:59.573 [2024-11-26 20:53:50.435256] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:59.573 [2024-11-26 20:53:50.450210] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:12:59.573 [2024-11-26 20:53:50.450249] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:12:59.573 [2024-11-26 20:53:50.452330] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:59.573 [2024-11-26 20:53:50.452394] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:59.573 [2024-11-26 20:53:50.452490] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:12:59.573 [2024-11-26 20:53:50.452517] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:12:59.573 [2024-11-26 20:53:50.452527] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:12:59.573 [2024-11-26 20:53:50.453699] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:12:59.573 [2024-11-26 20:53:50.453726] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:12:59.573 [2024-11-26 20:53:50.453740] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:12:59.573 [2024-11-26 20:53:50.454345] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:59.573 [2024-11-26 20:53:50.454382] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:12:59.573 [2024-11-26 20:53:50.454395] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:12:59.573 [2024-11-26 20:53:50.455356] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:12:59.573 [2024-11-26 20:53:50.455379] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:59.573 [2024-11-26 20:53:50.457707] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:12:59.573 [2024-11-26 20:53:50.457729] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:12:59.573 [2024-11-26 20:53:50.457739] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:12:59.573 [2024-11-26 20:53:50.457751] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:59.573 [2024-11-26 20:53:50.457861] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:12:59.573 [2024-11-26 20:53:50.457869] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:59.573 [2024-11-26 20:53:50.457878] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:12:59.573 [2024-11-26 20:53:50.458381] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:12:59.573 [2024-11-26 20:53:50.459384] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:12:59.573 [2024-11-26 20:53:50.460390] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:59.573 [2024-11-26 20:53:50.461385] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:59.573 [2024-11-26 20:53:50.461454] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:59.573 [2024-11-26 20:53:50.462401] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:12:59.573 [2024-11-26 20:53:50.462426] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:59.573 [2024-11-26 20:53:50.462437] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:12:59.573 [2024-11-26 20:53:50.462461] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:12:59.573 [2024-11-26 20:53:50.462473] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:12:59.573 [2024-11-26 20:53:50.462502] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:59.573 [2024-11-26 20:53:50.462511] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:59.573 [2024-11-26 20:53:50.462518] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:59.573 [2024-11-26 20:53:50.462538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:59.573 [2024-11-26 20:53:50.468701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:59.573 [2024-11-26 20:53:50.468726] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:12:59.573 [2024-11-26 20:53:50.468735] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:12:59.573 [2024-11-26 20:53:50.468742] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:12:59.573 [2024-11-26 20:53:50.468750] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:59.573 [2024-11-26 20:53:50.468757] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:12:59.573 [2024-11-26 20:53:50.468765] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:12:59.573 [2024-11-26 20:53:50.468773] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:12:59.573 [2024-11-26 20:53:50.468786] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:12:59.573 [2024-11-26 20:53:50.468802] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:59.573 [2024-11-26 20:53:50.476711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:59.573 [2024-11-26 20:53:50.476739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:59.573 [2024-11-26 20:53:50.476753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:59.573 [2024-11-26 20:53:50.476764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:59.574 [2024-11-26 20:53:50.476776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:59.574 [2024-11-26 20:53:50.476784] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:12:59.574 [2024-11-26 20:53:50.476802] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:59.574 [2024-11-26 20:53:50.476820] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:59.574 [2024-11-26 20:53:50.484695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:59.574 [2024-11-26 20:53:50.484715] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:12:59.574 [2024-11-26 20:53:50.484725] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:59.574 [2024-11-26 20:53:50.484743] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:12:59.574 [2024-11-26 20:53:50.484755] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:12:59.574 [2024-11-26 20:53:50.484769] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:59.574 [2024-11-26 20:53:50.492698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:59.574 [2024-11-26 20:53:50.492780] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:12:59.574 [2024-11-26 20:53:50.492799] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:12:59.574 
[2024-11-26 20:53:50.492814] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:59.574 [2024-11-26 20:53:50.492823] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:59.574 [2024-11-26 20:53:50.492829] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:59.574 [2024-11-26 20:53:50.492839] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:59.574 [2024-11-26 20:53:50.500697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:59.574 [2024-11-26 20:53:50.500728] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:12:59.574 [2024-11-26 20:53:50.500765] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:12:59.574 [2024-11-26 20:53:50.500781] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:12:59.574 [2024-11-26 20:53:50.500794] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:59.574 [2024-11-26 20:53:50.500803] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:59.574 [2024-11-26 20:53:50.500809] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:59.574 [2024-11-26 20:53:50.500819] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:59.574 [2024-11-26 20:53:50.508697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:59.574 [2024-11-26 20:53:50.508725] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:59.574 [2024-11-26 20:53:50.508741] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:59.574 [2024-11-26 20:53:50.508754] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:59.574 [2024-11-26 20:53:50.508768] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:59.574 [2024-11-26 20:53:50.508775] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:59.574 [2024-11-26 20:53:50.508785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:59.873 [2024-11-26 20:53:50.516713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:59.873 [2024-11-26 20:53:50.516742] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:59.873 [2024-11-26 20:53:50.516756] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:12:59.873 [2024-11-26 20:53:50.516769] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:12:59.873 [2024-11-26 20:53:50.516780] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:12:59.873 [2024-11-26 20:53:50.516789] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:59.873 [2024-11-26 20:53:50.516798] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:12:59.873 [2024-11-26 20:53:50.516807] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:12:59.873 [2024-11-26 20:53:50.516814] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:12:59.873 [2024-11-26 20:53:50.516823] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:12:59.873 [2024-11-26 20:53:50.516851] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:59.873 [2024-11-26 20:53:50.524700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:59.873 [2024-11-26 20:53:50.524728] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:59.873 [2024-11-26 20:53:50.532698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:59.873 [2024-11-26 20:53:50.532724] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:59.873 [2024-11-26 20:53:50.540695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:59.873 [2024-11-26 
20:53:50.540722] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:59.873 [2024-11-26 20:53:50.548697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:59.873 [2024-11-26 20:53:50.548745] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:59.873 [2024-11-26 20:53:50.548758] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:59.873 [2024-11-26 20:53:50.548764] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:59.873 [2024-11-26 20:53:50.548770] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:59.873 [2024-11-26 20:53:50.548776] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:12:59.873 [2024-11-26 20:53:50.548790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:59.873 [2024-11-26 20:53:50.548804] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:59.873 [2024-11-26 20:53:50.548813] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:59.873 [2024-11-26 20:53:50.548819] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:59.873 [2024-11-26 20:53:50.548828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:59.873 [2024-11-26 20:53:50.548839] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:59.873 [2024-11-26 20:53:50.548848] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:59.873 [2024-11-26 20:53:50.548853] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:59.873 [2024-11-26 20:53:50.548862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:59.873 [2024-11-26 20:53:50.548875] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:59.873 [2024-11-26 20:53:50.548884] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:59.873 [2024-11-26 20:53:50.548890] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:59.873 [2024-11-26 20:53:50.548899] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:59.873 [2024-11-26 20:53:50.556703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:59.873 [2024-11-26 20:53:50.556732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:59.873 [2024-11-26 20:53:50.556767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:59.873 [2024-11-26 20:53:50.556779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:59.873 ===================================================== 00:12:59.873 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:59.873 ===================================================== 00:12:59.873 Controller Capabilities/Features 00:12:59.873 
================================ 00:12:59.873 Vendor ID: 4e58 00:12:59.873 Subsystem Vendor ID: 4e58 00:12:59.873 Serial Number: SPDK2 00:12:59.873 Model Number: SPDK bdev Controller 00:12:59.873 Firmware Version: 25.01 00:12:59.873 Recommended Arb Burst: 6 00:12:59.874 IEEE OUI Identifier: 8d 6b 50 00:12:59.874 Multi-path I/O 00:12:59.874 May have multiple subsystem ports: Yes 00:12:59.874 May have multiple controllers: Yes 00:12:59.874 Associated with SR-IOV VF: No 00:12:59.874 Max Data Transfer Size: 131072 00:12:59.874 Max Number of Namespaces: 32 00:12:59.874 Max Number of I/O Queues: 127 00:12:59.874 NVMe Specification Version (VS): 1.3 00:12:59.874 NVMe Specification Version (Identify): 1.3 00:12:59.874 Maximum Queue Entries: 256 00:12:59.874 Contiguous Queues Required: Yes 00:12:59.874 Arbitration Mechanisms Supported 00:12:59.874 Weighted Round Robin: Not Supported 00:12:59.874 Vendor Specific: Not Supported 00:12:59.874 Reset Timeout: 15000 ms 00:12:59.874 Doorbell Stride: 4 bytes 00:12:59.874 NVM Subsystem Reset: Not Supported 00:12:59.874 Command Sets Supported 00:12:59.874 NVM Command Set: Supported 00:12:59.874 Boot Partition: Not Supported 00:12:59.874 Memory Page Size Minimum: 4096 bytes 00:12:59.874 Memory Page Size Maximum: 4096 bytes 00:12:59.874 Persistent Memory Region: Not Supported 00:12:59.874 Optional Asynchronous Events Supported 00:12:59.874 Namespace Attribute Notices: Supported 00:12:59.874 Firmware Activation Notices: Not Supported 00:12:59.874 ANA Change Notices: Not Supported 00:12:59.874 PLE Aggregate Log Change Notices: Not Supported 00:12:59.874 LBA Status Info Alert Notices: Not Supported 00:12:59.874 EGE Aggregate Log Change Notices: Not Supported 00:12:59.874 Normal NVM Subsystem Shutdown event: Not Supported 00:12:59.874 Zone Descriptor Change Notices: Not Supported 00:12:59.874 Discovery Log Change Notices: Not Supported 00:12:59.874 Controller Attributes 00:12:59.874 128-bit Host Identifier: Supported 00:12:59.874 
Non-Operational Permissive Mode: Not Supported 00:12:59.874 NVM Sets: Not Supported 00:12:59.874 Read Recovery Levels: Not Supported 00:12:59.874 Endurance Groups: Not Supported 00:12:59.874 Predictable Latency Mode: Not Supported 00:12:59.874 Traffic Based Keep ALive: Not Supported 00:12:59.874 Namespace Granularity: Not Supported 00:12:59.874 SQ Associations: Not Supported 00:12:59.874 UUID List: Not Supported 00:12:59.874 Multi-Domain Subsystem: Not Supported 00:12:59.874 Fixed Capacity Management: Not Supported 00:12:59.874 Variable Capacity Management: Not Supported 00:12:59.874 Delete Endurance Group: Not Supported 00:12:59.874 Delete NVM Set: Not Supported 00:12:59.874 Extended LBA Formats Supported: Not Supported 00:12:59.874 Flexible Data Placement Supported: Not Supported 00:12:59.874 00:12:59.874 Controller Memory Buffer Support 00:12:59.874 ================================ 00:12:59.874 Supported: No 00:12:59.874 00:12:59.874 Persistent Memory Region Support 00:12:59.874 ================================ 00:12:59.874 Supported: No 00:12:59.874 00:12:59.874 Admin Command Set Attributes 00:12:59.874 ============================ 00:12:59.874 Security Send/Receive: Not Supported 00:12:59.874 Format NVM: Not Supported 00:12:59.874 Firmware Activate/Download: Not Supported 00:12:59.874 Namespace Management: Not Supported 00:12:59.874 Device Self-Test: Not Supported 00:12:59.874 Directives: Not Supported 00:12:59.874 NVMe-MI: Not Supported 00:12:59.874 Virtualization Management: Not Supported 00:12:59.874 Doorbell Buffer Config: Not Supported 00:12:59.874 Get LBA Status Capability: Not Supported 00:12:59.874 Command & Feature Lockdown Capability: Not Supported 00:12:59.874 Abort Command Limit: 4 00:12:59.874 Async Event Request Limit: 4 00:12:59.874 Number of Firmware Slots: N/A 00:12:59.874 Firmware Slot 1 Read-Only: N/A 00:12:59.874 Firmware Activation Without Reset: N/A 00:12:59.874 Multiple Update Detection Support: N/A 00:12:59.874 Firmware Update 
Granularity: No Information Provided 00:12:59.874 Per-Namespace SMART Log: No 00:12:59.874 Asymmetric Namespace Access Log Page: Not Supported 00:12:59.874 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:12:59.874 Command Effects Log Page: Supported 00:12:59.874 Get Log Page Extended Data: Supported 00:12:59.874 Telemetry Log Pages: Not Supported 00:12:59.874 Persistent Event Log Pages: Not Supported 00:12:59.874 Supported Log Pages Log Page: May Support 00:12:59.874 Commands Supported & Effects Log Page: Not Supported 00:12:59.874 Feature Identifiers & Effects Log Page:May Support 00:12:59.874 NVMe-MI Commands & Effects Log Page: May Support 00:12:59.874 Data Area 4 for Telemetry Log: Not Supported 00:12:59.874 Error Log Page Entries Supported: 128 00:12:59.874 Keep Alive: Supported 00:12:59.874 Keep Alive Granularity: 10000 ms 00:12:59.874 00:12:59.874 NVM Command Set Attributes 00:12:59.874 ========================== 00:12:59.874 Submission Queue Entry Size 00:12:59.874 Max: 64 00:12:59.874 Min: 64 00:12:59.874 Completion Queue Entry Size 00:12:59.874 Max: 16 00:12:59.874 Min: 16 00:12:59.874 Number of Namespaces: 32 00:12:59.874 Compare Command: Supported 00:12:59.874 Write Uncorrectable Command: Not Supported 00:12:59.874 Dataset Management Command: Supported 00:12:59.874 Write Zeroes Command: Supported 00:12:59.874 Set Features Save Field: Not Supported 00:12:59.874 Reservations: Not Supported 00:12:59.874 Timestamp: Not Supported 00:12:59.874 Copy: Supported 00:12:59.874 Volatile Write Cache: Present 00:12:59.874 Atomic Write Unit (Normal): 1 00:12:59.874 Atomic Write Unit (PFail): 1 00:12:59.874 Atomic Compare & Write Unit: 1 00:12:59.874 Fused Compare & Write: Supported 00:12:59.874 Scatter-Gather List 00:12:59.874 SGL Command Set: Supported (Dword aligned) 00:12:59.874 SGL Keyed: Not Supported 00:12:59.874 SGL Bit Bucket Descriptor: Not Supported 00:12:59.874 SGL Metadata Pointer: Not Supported 00:12:59.874 Oversized SGL: Not Supported 00:12:59.874 SGL 
Metadata Address: Not Supported 00:12:59.874 SGL Offset: Not Supported 00:12:59.874 Transport SGL Data Block: Not Supported 00:12:59.874 Replay Protected Memory Block: Not Supported 00:12:59.874 00:12:59.874 Firmware Slot Information 00:12:59.874 ========================= 00:12:59.874 Active slot: 1 00:12:59.874 Slot 1 Firmware Revision: 25.01 00:12:59.874 00:12:59.874 00:12:59.874 Commands Supported and Effects 00:12:59.874 ============================== 00:12:59.874 Admin Commands 00:12:59.874 -------------- 00:12:59.874 Get Log Page (02h): Supported 00:12:59.874 Identify (06h): Supported 00:12:59.874 Abort (08h): Supported 00:12:59.874 Set Features (09h): Supported 00:12:59.874 Get Features (0Ah): Supported 00:12:59.874 Asynchronous Event Request (0Ch): Supported 00:12:59.874 Keep Alive (18h): Supported 00:12:59.874 I/O Commands 00:12:59.874 ------------ 00:12:59.874 Flush (00h): Supported LBA-Change 00:12:59.874 Write (01h): Supported LBA-Change 00:12:59.874 Read (02h): Supported 00:12:59.874 Compare (05h): Supported 00:12:59.874 Write Zeroes (08h): Supported LBA-Change 00:12:59.874 Dataset Management (09h): Supported LBA-Change 00:12:59.874 Copy (19h): Supported LBA-Change 00:12:59.874 00:12:59.874 Error Log 00:12:59.874 ========= 00:12:59.874 00:12:59.874 Arbitration 00:12:59.874 =========== 00:12:59.874 Arbitration Burst: 1 00:12:59.874 00:12:59.874 Power Management 00:12:59.874 ================ 00:12:59.874 Number of Power States: 1 00:12:59.874 Current Power State: Power State #0 00:12:59.874 Power State #0: 00:12:59.874 Max Power: 0.00 W 00:12:59.874 Non-Operational State: Operational 00:12:59.874 Entry Latency: Not Reported 00:12:59.874 Exit Latency: Not Reported 00:12:59.874 Relative Read Throughput: 0 00:12:59.874 Relative Read Latency: 0 00:12:59.874 Relative Write Throughput: 0 00:12:59.874 Relative Write Latency: 0 00:12:59.874 Idle Power: Not Reported 00:12:59.874 Active Power: Not Reported 00:12:59.874 Non-Operational Permissive Mode: Not 
Supported 00:12:59.874 00:12:59.874 Health Information 00:12:59.874 ================== 00:12:59.874 Critical Warnings: 00:12:59.874 Available Spare Space: OK 00:12:59.874 Temperature: OK 00:12:59.874 Device Reliability: OK 00:12:59.874 Read Only: No 00:12:59.874 Volatile Memory Backup: OK 00:12:59.874 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:59.874 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:59.874 Available Spare: 0% 00:12:59.874 Available Sp[2024-11-26 20:53:50.556908] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:59.874 [2024-11-26 20:53:50.564706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:59.874 [2024-11-26 20:53:50.564782] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:12:59.874 [2024-11-26 20:53:50.564802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.875 [2024-11-26 20:53:50.564813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.875 [2024-11-26 20:53:50.564823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.875 [2024-11-26 20:53:50.564833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.875 [2024-11-26 20:53:50.564925] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:59.875 [2024-11-26 20:53:50.564949] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:12:59.875 
[2024-11-26 20:53:50.565929] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:59.875 [2024-11-26 20:53:50.566017] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:12:59.875 [2024-11-26 20:53:50.566051] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:12:59.875 [2024-11-26 20:53:50.566937] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:12:59.875 [2024-11-26 20:53:50.566977] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:12:59.875 [2024-11-26 20:53:50.567046] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:12:59.875 [2024-11-26 20:53:50.569697] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:59.875 are Threshold: 0% 00:12:59.875 Life Percentage Used: 0% 00:12:59.875 Data Units Read: 0 00:12:59.875 Data Units Written: 0 00:12:59.875 Host Read Commands: 0 00:12:59.875 Host Write Commands: 0 00:12:59.875 Controller Busy Time: 0 minutes 00:12:59.875 Power Cycles: 0 00:12:59.875 Power On Hours: 0 hours 00:12:59.875 Unsafe Shutdowns: 0 00:12:59.875 Unrecoverable Media Errors: 0 00:12:59.875 Lifetime Error Log Entries: 0 00:12:59.875 Warning Temperature Time: 0 minutes 00:12:59.875 Critical Temperature Time: 0 minutes 00:12:59.875 00:12:59.875 Number of Queues 00:12:59.875 ================ 00:12:59.875 Number of I/O Submission Queues: 127 00:12:59.875 Number of I/O Completion Queues: 127 00:12:59.875 00:12:59.875 Active Namespaces 00:12:59.875 ================= 00:12:59.875 Namespace ID:1 00:12:59.875 Error Recovery Timeout: Unlimited 
00:12:59.875 Command Set Identifier: NVM (00h) 00:12:59.875 Deallocate: Supported 00:12:59.875 Deallocated/Unwritten Error: Not Supported 00:12:59.875 Deallocated Read Value: Unknown 00:12:59.875 Deallocate in Write Zeroes: Not Supported 00:12:59.875 Deallocated Guard Field: 0xFFFF 00:12:59.875 Flush: Supported 00:12:59.875 Reservation: Supported 00:12:59.875 Namespace Sharing Capabilities: Multiple Controllers 00:12:59.875 Size (in LBAs): 131072 (0GiB) 00:12:59.875 Capacity (in LBAs): 131072 (0GiB) 00:12:59.875 Utilization (in LBAs): 131072 (0GiB) 00:12:59.875 NGUID: EAA88A5FF01E4297987A7B05F96FA2F8 00:12:59.875 UUID: eaa88a5f-f01e-4297-987a-7b05f96fa2f8 00:12:59.875 Thin Provisioning: Not Supported 00:12:59.875 Per-NS Atomic Units: Yes 00:12:59.875 Atomic Boundary Size (Normal): 0 00:12:59.875 Atomic Boundary Size (PFail): 0 00:12:59.875 Atomic Boundary Offset: 0 00:12:59.875 Maximum Single Source Range Length: 65535 00:12:59.875 Maximum Copy Length: 65535 00:12:59.875 Maximum Source Range Count: 1 00:12:59.875 NGUID/EUI64 Never Reused: No 00:12:59.875 Namespace Write Protected: No 00:12:59.875 Number of LBA Formats: 1 00:12:59.875 Current LBA Format: LBA Format #00 00:12:59.875 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:59.875 00:12:59.875 20:53:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:00.165 [2024-11-26 20:53:50.818589] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:05.432 Initializing NVMe Controllers 00:13:05.432 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:05.432 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:13:05.432 Initialization complete. Launching workers. 00:13:05.432 ======================================================== 00:13:05.432 Latency(us) 00:13:05.432 Device Information : IOPS MiB/s Average min max 00:13:05.432 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 31852.32 124.42 4017.99 1217.26 8197.83 00:13:05.432 ======================================================== 00:13:05.432 Total : 31852.32 124.42 4017.99 1217.26 8197.83 00:13:05.432 00:13:05.432 [2024-11-26 20:53:55.917128] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:05.432 20:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:05.432 [2024-11-26 20:53:56.183756] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:10.701 Initializing NVMe Controllers 00:13:10.701 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:10.701 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:10.701 Initialization complete. Launching workers. 
00:13:10.701 ======================================================== 00:13:10.701 Latency(us) 00:13:10.701 Device Information : IOPS MiB/s Average min max 00:13:10.701 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 30208.72 118.00 4236.50 1234.40 9377.21 00:13:10.701 ======================================================== 00:13:10.701 Total : 30208.72 118.00 4236.50 1234.40 9377.21 00:13:10.701 00:13:10.701 [2024-11-26 20:54:01.206031] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:10.701 20:54:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:10.701 [2024-11-26 20:54:01.429851] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:15.979 [2024-11-26 20:54:06.562858] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:15.979 Initializing NVMe Controllers 00:13:15.979 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:15.979 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:15.979 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:13:15.979 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:13:15.979 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:13:15.979 Initialization complete. Launching workers. 
00:13:15.979 Starting thread on core 2 00:13:15.979 Starting thread on core 3 00:13:15.979 Starting thread on core 1 00:13:15.979 20:54:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:13:15.979 [2024-11-26 20:54:06.888128] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:19.270 [2024-11-26 20:54:09.940009] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:19.270 Initializing NVMe Controllers 00:13:19.270 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:19.270 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:19.270 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:13:19.270 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:13:19.270 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:13:19.270 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:13:19.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:19.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:19.270 Initialization complete. Launching workers. 
00:13:19.270 Starting thread on core 1 with urgent priority queue 00:13:19.270 Starting thread on core 2 with urgent priority queue 00:13:19.270 Starting thread on core 3 with urgent priority queue 00:13:19.270 Starting thread on core 0 with urgent priority queue 00:13:19.270 SPDK bdev Controller (SPDK2 ) core 0: 5534.33 IO/s 18.07 secs/100000 ios 00:13:19.270 SPDK bdev Controller (SPDK2 ) core 1: 5969.33 IO/s 16.75 secs/100000 ios 00:13:19.270 SPDK bdev Controller (SPDK2 ) core 2: 5469.33 IO/s 18.28 secs/100000 ios 00:13:19.270 SPDK bdev Controller (SPDK2 ) core 3: 5830.00 IO/s 17.15 secs/100000 ios 00:13:19.270 ======================================================== 00:13:19.270 00:13:19.270 20:54:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:19.529 [2024-11-26 20:54:10.267278] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:19.529 Initializing NVMe Controllers 00:13:19.529 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:19.529 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:19.529 Namespace ID: 1 size: 0GB 00:13:19.529 Initialization complete. 00:13:19.529 INFO: using host memory buffer for IO 00:13:19.529 Hello world! 
00:13:19.529 [2024-11-26 20:54:10.275416] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:19.529 20:54:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:19.790 [2024-11-26 20:54:10.591142] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:21.169 Initializing NVMe Controllers 00:13:21.169 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:21.169 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:21.169 Initialization complete. Launching workers. 00:13:21.169 submit (in ns) avg, min, max = 6350.8, 3485.6, 4002491.1 00:13:21.169 complete (in ns) avg, min, max = 26756.8, 2060.0, 4015891.1 00:13:21.169 00:13:21.169 Submit histogram 00:13:21.169 ================ 00:13:21.169 Range in us Cumulative Count 00:13:21.169 3.484 - 3.508: 0.6215% ( 80) 00:13:21.169 3.508 - 3.532: 1.7868% ( 150) 00:13:21.169 3.532 - 3.556: 5.3449% ( 458) 00:13:21.169 3.556 - 3.579: 11.2803% ( 764) 00:13:21.169 3.579 - 3.603: 21.4186% ( 1305) 00:13:21.169 3.603 - 3.627: 30.9742% ( 1230) 00:13:21.169 3.627 - 3.650: 40.6386% ( 1244) 00:13:21.169 3.650 - 3.674: 47.3741% ( 867) 00:13:21.169 3.674 - 3.698: 54.5758% ( 927) 00:13:21.169 3.698 - 3.721: 60.5112% ( 764) 00:13:21.169 3.721 - 3.745: 65.3822% ( 627) 00:13:21.169 3.745 - 3.769: 69.3987% ( 517) 00:13:21.169 3.769 - 3.793: 72.5606% ( 407) 00:13:21.169 3.793 - 3.816: 75.6448% ( 397) 00:13:21.169 3.816 - 3.840: 79.0942% ( 444) 00:13:21.169 3.840 - 3.864: 82.4037% ( 426) 00:13:21.169 3.864 - 3.887: 85.3869% ( 384) 00:13:21.169 3.887 - 3.911: 87.4922% ( 271) 00:13:21.169 3.911 - 3.935: 89.2946% ( 232) 00:13:21.169 3.935 - 3.959: 90.9727% ( 216) 00:13:21.169 3.959 - 3.982: 92.5264% ( 
200) 00:13:21.169 3.982 - 4.006: 94.0258% ( 193) 00:13:21.169 4.006 - 4.030: 94.9969% ( 125) 00:13:21.169 4.030 - 4.053: 95.7349% ( 95) 00:13:21.169 4.053 - 4.077: 96.2244% ( 63) 00:13:21.169 4.077 - 4.101: 96.5895% ( 47) 00:13:21.169 4.101 - 4.124: 96.8536% ( 34) 00:13:21.169 4.124 - 4.148: 96.9935% ( 18) 00:13:21.169 4.148 - 4.172: 97.1255% ( 17) 00:13:21.169 4.172 - 4.196: 97.2343% ( 14) 00:13:21.169 4.196 - 4.219: 97.2732% ( 5) 00:13:21.169 4.219 - 4.243: 97.3508% ( 10) 00:13:21.169 4.243 - 4.267: 97.4518% ( 13) 00:13:21.169 4.267 - 4.290: 97.5140% ( 8) 00:13:21.169 4.290 - 4.314: 97.5295% ( 2) 00:13:21.169 4.314 - 4.338: 97.5528% ( 3) 00:13:21.169 4.338 - 4.361: 97.5917% ( 5) 00:13:21.169 4.361 - 4.385: 97.6227% ( 4) 00:13:21.169 4.456 - 4.480: 97.6305% ( 1) 00:13:21.169 4.480 - 4.504: 97.6538% ( 3) 00:13:21.169 4.504 - 4.527: 97.6616% ( 1) 00:13:21.169 4.575 - 4.599: 97.6694% ( 1) 00:13:21.169 4.693 - 4.717: 97.6771% ( 1) 00:13:21.169 4.741 - 4.764: 97.6849% ( 1) 00:13:21.169 4.764 - 4.788: 97.7082% ( 3) 00:13:21.169 4.788 - 4.812: 97.7626% ( 7) 00:13:21.169 4.812 - 4.836: 97.8014% ( 5) 00:13:21.169 4.836 - 4.859: 97.8403% ( 5) 00:13:21.169 4.859 - 4.883: 97.8869% ( 6) 00:13:21.169 4.883 - 4.907: 97.9413% ( 7) 00:13:21.169 4.907 - 4.930: 97.9879% ( 6) 00:13:21.169 4.930 - 4.954: 98.0656% ( 10) 00:13:21.169 4.954 - 4.978: 98.1200% ( 7) 00:13:21.169 4.978 - 5.001: 98.1743% ( 7) 00:13:21.169 5.001 - 5.025: 98.1976% ( 3) 00:13:21.169 5.025 - 5.049: 98.2831% ( 11) 00:13:21.169 5.049 - 5.073: 98.3219% ( 5) 00:13:21.169 5.073 - 5.096: 98.3608% ( 5) 00:13:21.169 5.096 - 5.120: 98.3919% ( 4) 00:13:21.169 5.120 - 5.144: 98.4074% ( 2) 00:13:21.169 5.167 - 5.191: 98.4385% ( 4) 00:13:21.169 5.191 - 5.215: 98.4462% ( 1) 00:13:21.169 5.215 - 5.239: 98.4618% ( 2) 00:13:21.169 5.262 - 5.286: 98.4773% ( 2) 00:13:21.169 5.286 - 5.310: 98.4851% ( 1) 00:13:21.169 5.310 - 5.333: 98.4929% ( 1) 00:13:21.169 5.452 - 5.476: 98.5006% ( 1) 00:13:21.169 5.499 - 5.523: 98.5084% ( 1) 
00:13:21.169 5.689 - 5.713: 98.5239% ( 2) 00:13:21.169 5.713 - 5.736: 98.5317% ( 1) 00:13:21.169 5.784 - 5.807: 98.5395% ( 1) 00:13:21.169 5.807 - 5.831: 98.5472% ( 1) 00:13:21.169 5.855 - 5.879: 98.5550% ( 1) 00:13:21.169 5.926 - 5.950: 98.5628% ( 1) 00:13:21.169 5.973 - 5.997: 98.5705% ( 1) 00:13:21.169 6.044 - 6.068: 98.5783% ( 1) 00:13:21.169 6.068 - 6.116: 98.5861% ( 1) 00:13:21.169 6.210 - 6.258: 98.5938% ( 1) 00:13:21.169 6.258 - 6.305: 98.6172% ( 3) 00:13:21.169 6.305 - 6.353: 98.6327% ( 2) 00:13:21.169 6.684 - 6.732: 98.6405% ( 1) 00:13:21.169 6.827 - 6.874: 98.6560% ( 2) 00:13:21.169 6.874 - 6.921: 98.6638% ( 1) 00:13:21.169 6.921 - 6.969: 98.6715% ( 1) 00:13:21.169 6.969 - 7.016: 98.6793% ( 1) 00:13:21.169 7.064 - 7.111: 98.6948% ( 2) 00:13:21.169 7.159 - 7.206: 98.7181% ( 3) 00:13:21.169 7.206 - 7.253: 98.7337% ( 2) 00:13:21.169 7.253 - 7.301: 98.7415% ( 1) 00:13:21.169 7.301 - 7.348: 98.7570% ( 2) 00:13:21.169 7.443 - 7.490: 98.7725% ( 2) 00:13:21.169 7.490 - 7.538: 98.7881% ( 2) 00:13:21.169 7.538 - 7.585: 98.7958% ( 1) 00:13:21.169 7.585 - 7.633: 98.8036% ( 1) 00:13:21.169 7.680 - 7.727: 98.8114% ( 1) 00:13:21.169 7.727 - 7.775: 98.8269% ( 2) 00:13:21.169 7.775 - 7.822: 98.8502% ( 3) 00:13:21.169 7.822 - 7.870: 98.8580% ( 1) 00:13:21.169 7.870 - 7.917: 98.8735% ( 2) 00:13:21.169 7.917 - 7.964: 98.8813% ( 1) 00:13:21.169 7.964 - 8.012: 98.8891% ( 1) 00:13:21.169 8.012 - 8.059: 98.9046% ( 2) 00:13:21.169 8.059 - 8.107: 98.9124% ( 1) 00:13:21.169 8.107 - 8.154: 98.9279% ( 2) 00:13:21.169 8.249 - 8.296: 98.9434% ( 2) 00:13:21.169 8.344 - 8.391: 98.9512% ( 1) 00:13:21.169 8.439 - 8.486: 98.9590% ( 1) 00:13:21.169 8.486 - 8.533: 98.9745% ( 2) 00:13:21.169 8.628 - 8.676: 98.9901% ( 2) 00:13:21.169 8.676 - 8.723: 98.9978% ( 1) 00:13:21.169 8.770 - 8.818: 99.0056% ( 1) 00:13:21.169 8.865 - 8.913: 99.0134% ( 1) 00:13:21.169 8.960 - 9.007: 99.0211% ( 1) 00:13:21.169 9.007 - 9.055: 99.0289% ( 1) 00:13:21.169 9.150 - 9.197: 99.0367% ( 1) 00:13:21.169 9.339 - 
9.387: 99.0444% ( 1) 00:13:21.169 9.387 - 9.434: 99.0522% ( 1) 00:13:21.169 9.956 - 10.003: 99.0677% ( 2) 00:13:21.169 10.240 - 10.287: 99.0755% ( 1) 00:13:21.169 10.714 - 10.761: 99.0833% ( 1) 00:13:21.169 10.809 - 10.856: 99.0911% ( 1) 00:13:21.169 11.093 - 11.141: 99.0988% ( 1) 00:13:21.169 11.425 - 11.473: 99.1066% ( 1) 00:13:21.169 11.662 - 11.710: 99.1144% ( 1) 00:13:21.169 11.899 - 11.947: 99.1221% ( 1) 00:13:21.169 11.947 - 11.994: 99.1299% ( 1) 00:13:21.169 12.326 - 12.421: 99.1377% ( 1) 00:13:21.169 12.421 - 12.516: 99.1454% ( 1) 00:13:21.169 12.516 - 12.610: 99.1532% ( 1) 00:13:21.169 13.274 - 13.369: 99.1610% ( 1) 00:13:21.169 16.972 - 17.067: 99.1687% ( 1) 00:13:21.169 17.067 - 17.161: 99.1765% ( 1) 00:13:21.169 17.161 - 17.256: 99.1843% ( 1) 00:13:21.169 17.256 - 17.351: 99.1998% ( 2) 00:13:21.169 17.351 - 17.446: 99.2387% ( 5) 00:13:21.169 17.446 - 17.541: 99.2542% ( 2) 00:13:21.169 17.541 - 17.636: 99.2930% ( 5) 00:13:21.169 17.636 - 17.730: 99.3241% ( 4) 00:13:21.169 17.730 - 17.825: 99.3940% ( 9) 00:13:21.169 17.825 - 17.920: 99.4562% ( 8) 00:13:21.169 17.920 - 18.015: 99.5183% ( 8) 00:13:21.169 18.015 - 18.110: 99.5727% ( 7) 00:13:21.169 18.110 - 18.204: 99.6426% ( 9) 00:13:21.169 18.204 - 18.299: 99.6892% ( 6) 00:13:21.169 18.299 - 18.394: 99.7514% ( 8) 00:13:21.169 18.394 - 18.489: 99.7902% ( 5) 00:13:21.169 18.489 - 18.584: 99.8369% ( 6) 00:13:21.169 18.584 - 18.679: 99.8602% ( 3) 00:13:21.169 18.679 - 18.773: 99.8757% ( 2) 00:13:21.169 18.963 - 19.058: 99.9068% ( 4) 00:13:21.169 19.058 - 19.153: 99.9223% ( 2) 00:13:21.169 20.385 - 20.480: 99.9301% ( 1) 00:13:21.169 25.031 - 25.221: 99.9378% ( 1) 00:13:21.169 3980.705 - 4004.978: 100.0000% ( 8) 00:13:21.169 00:13:21.169 Complete histogram 00:13:21.169 ================== 00:13:21.169 Range in us Cumulative Count 00:13:21.169 2.050 - 2.062: 0.1088% ( 14) 00:13:21.169 2.062 - 2.074: 24.4795% ( 3137) 00:13:21.169 2.074 - 2.086: 43.6995% ( 2474) 00:13:21.169 2.086 - 2.098: 45.5485% ( 238) 
00:13:21.169 2.098 - 2.110: 58.0329% ( 1607) 00:13:21.169 2.110 - 2.121: 62.0416% ( 516) 00:13:21.169 2.121 - 2.133: 64.5665% ( 325) 00:13:21.169 2.133 - 2.145: 78.0298% ( 1733) 00:13:21.169 2.145 - 2.157: 82.2405% ( 542) 00:13:21.169 2.157 - 2.169: 84.1750% ( 249) 00:13:21.169 2.169 - 2.181: 88.2303% ( 522) 00:13:21.169 2.181 - 2.193: 89.5898% ( 175) 00:13:21.169 2.193 - 2.204: 90.2735% ( 88) 00:13:21.169 2.204 - 2.216: 91.3844% ( 143) 00:13:21.169 2.216 - 2.228: 92.8294% ( 186) 00:13:21.169 2.228 - 2.240: 94.3210% ( 192) 00:13:21.170 2.240 - 2.252: 94.9037% ( 75) 00:13:21.170 2.252 - 2.264: 95.0124% ( 14) 00:13:21.170 2.264 - 2.276: 95.1212% ( 14) 00:13:21.170 2.276 - 2.287: 95.2688% ( 19) 00:13:21.170 2.287 - 2.299: 95.5019% ( 30) 00:13:21.170 2.299 - 2.311: 95.7738% ( 35) 00:13:21.170 2.311 - 2.323: 95.8825% ( 14) 00:13:21.170 2.323 - 2.335: 95.8981% ( 2) 00:13:21.170 2.335 - 2.347: 95.9214% ( 3) 00:13:21.170 2.347 - 2.359: 95.9602% ( 5) 00:13:21.170 2.359 - 2.370: 96.0690% ( 14) 00:13:21.170 2.370 - 2.382: 96.2321% ( 21) 00:13:21.170 2.382 - 2.394: 96.5429% ( 40) 00:13:21.170 2.394 - 2.406: 96.8303% ( 37) 00:13:21.170 2.406 - 2.418: 97.1566% ( 42) 00:13:21.170 2.418 - 2.430: 97.3975% ( 31) 00:13:21.170 2.430 - 2.441: 97.5761% ( 23) 00:13:21.170 2.441 - 2.453: 97.7626% ( 24) 00:13:21.170 2.453 - 2.465: 97.8791% ( 15) 00:13:21.170 2.465 - 2.477: 98.0267% ( 19) 00:13:21.170 2.477 - 2.489: 98.1122% ( 11) 00:13:21.170 2.489 - 2.501: 98.2287% ( 15) 00:13:21.170 2.501 - 2.513: 98.2831% ( 7) 00:13:21.170 2.513 - 2.524: 98.3064% ( 3) 00:13:21.170 2.524 - 2.536: 98.3142% ( 1) 00:13:21.170 2.536 - 2.548: 98.3452% ( 4) 00:13:21.170 2.548 - 2.560: 98.3608% ( 2) 00:13:21.170 2.560 - 2.572: 98.3841% ( 3) 00:13:21.170 2.572 - 2.584: 98.3919% ( 1) 00:13:21.170 2.584 - 2.596: 98.3996% ( 1) 00:13:21.170 2.631 - 2.643: 98.4152% ( 2) 00:13:21.170 2.809 - 2.821: 98.4229% ( 1) 00:13:21.170 2.833 - 2.844: 98.4307% ( 1) 00:13:21.170 2.868 - 2.880: 98.4385% ( 1) 00:13:21.170 3.366 - 
3.390: 98.4462% ( 1) 00:13:21.170 3.390 - 3.413: 98.4540% ( 1) 00:13:21.170 3.461 - 3.484: 98.4695% ( 2) 00:13:21.170 3.508 - 3.532: 98.5006% ( 4) 00:13:21.170 3.556 - 3.579: 98.5084% ( 1) 00:13:21.170 3.579 - 3.603: 98.5317% ( 3) 00:13:21.170 3.627 - 3.650: 98.5472% ( 2) 00:13:21.170 3.698 - 3.721: 98.5550% ( 1) 00:13:21.170 3.721 - 3.745: 98.5628% ( 1) 00:13:21.170 3.745 - 3.769: 98.5783% ( 2) 00:13:21.170 3.769 - 3.793: 98.5938% ( 2) 00:13:21.170 3.793 - 3.816: 98.6016% ( 1) 00:13:21.170 3.816 - 3.840: 98.6172% ( 2) 00:13:21.170 3.864 - 3.887: 98.6405% ( 3) 00:13:21.170 3.911 - 3.935: 98.6638% ( 3) 00:13:21.170 4.124 - 4.148: 98.6793% ( 2) 00:13:21.170 4.148 - 4.172: 98.6948% ( 2) 00:13:21.170 4.196 - 4.219: 98.7026% ( 1) 00:13:21.170 4.527 - 4.551: 98.7104% ( 1) 00:13:21.170 4.717 - 4.741: 98.7181% ( 1) 00:13:21.170 5.310 - 5.333: 98.7259% ( 1) 00:13:21.170 5.665 - 5.689: 98.7337% ( 1) 00:13:21.170 5.689 - 5.713: 98.7415% ( 1) 00:13:21.170 5.736 - 5.760: 98.7492% ( 1) 00:13:21.170 5.760 - 5.784: 98.7570% ( 1) 00:13:21.170 5.973 - 5.997: 98.7648% ( 1) 00:13:21.170 6.068 - 6.116: 98.7725% ( 1) 00:13:21.170 6.163 - 6.210: 98.7803% ( 1) 00:13:21.170 6.542 - 6.590: 98.7958% ( 2) 00:13:21.170 6.590 - 6.637: 98.8036% ( 1) 00:13:21.170 6.732 - 6.779: 98.8114% ( 1) 00:13:21.170 6.827 - 6.874: 98.8191% ( 1) 00:13:21.170 7.206 - 7.253: 98.8269% ( 1) 00:13:21.170 7.680 - 7.727: 98.8347% ( 1) 00:13:21.170 7.917 - 7.964: 98.8424% ( 1) 00:13:21.170 9.055 - 9.102: 98.8502% ( 1) 00:13:21.170 15.360 - 15.455: 98.8580% ( 1) 00:13:21.170 15.455 - 15.550: 98.8658% ( 1) 00:13:21.170 15.550 - 15.644: 98.8735% ( 1) 00:13:21.170 15.644 - 15.739: 98.8891% ( 2) 00:13:21.170 15.739 - 15.834: 98.8968% ( 1) 00:13:21.170 15.834 - 15.929: 98.9124% ( 2) 00:13:21.170 15.929 - 16.024: 98.9357% ( 3) 00:13:21.170 16.024 - 16.119: 98.9823% ( 6) 00:13:21.170 16.213 - 16.308: 99.0134% ( 4) 00:13:21.170 16.308 - 16.403: 99.0522% ( 5) 00:13:21.170 16.403 - 16.498: 99.0833% ( 4) 00:13:21.170 16.498 - 
16.593: 99.1144% ( 4) 00:13:21.170 [2024-11-26 20:54:11.685388] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:21.170 16.593 - 16.687: 99.1610% ( 6) 00:13:21.170 16.687 - 16.782: 99.2387% ( 10) 00:13:21.170 16.782 - 16.877: 99.2775% ( 5) 00:13:21.170 16.877 - 16.972: 99.2853% ( 1) 00:13:21.170 16.972 - 17.067: 99.2930% ( 1) 00:13:21.170 17.067 - 17.161: 99.3163% ( 3) 00:13:21.170 17.161 - 17.256: 99.3241% ( 1) 00:13:21.170 17.351 - 17.446: 99.3397% ( 2) 00:13:21.170 17.541 - 17.636: 99.3474% ( 1) 00:13:21.170 18.110 - 18.204: 99.3552% ( 1) 00:13:21.170 18.299 - 18.394: 99.3630% ( 1) 00:13:21.170 20.101 - 20.196: 99.3707% ( 1) 00:13:21.170 20.385 - 20.480: 99.3785% ( 1) 00:13:21.170 35.271 - 35.461: 99.3863% ( 1) 00:13:21.170 3980.705 - 4004.978: 99.9145% ( 68) 00:13:21.170 4004.978 - 4029.250: 100.0000% ( 11) 00:13:21.170 00:13:21.170 20:54:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:13:21.170 20:54:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:21.170 20:54:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:13:21.170 20:54:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:13:21.170 20:54:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:21.170 [ 00:13:21.170 { 00:13:21.170 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:21.170 "subtype": "Discovery", 00:13:21.170 "listen_addresses": [], 00:13:21.170 "allow_any_host": true, 00:13:21.170 "hosts": [] 00:13:21.170 }, 00:13:21.170 { 00:13:21.170 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:21.170 
"subtype": "NVMe", 00:13:21.170 "listen_addresses": [ 00:13:21.170 { 00:13:21.170 "trtype": "VFIOUSER", 00:13:21.170 "adrfam": "IPv4", 00:13:21.170 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:21.170 "trsvcid": "0" 00:13:21.170 } 00:13:21.170 ], 00:13:21.170 "allow_any_host": true, 00:13:21.170 "hosts": [], 00:13:21.170 "serial_number": "SPDK1", 00:13:21.170 "model_number": "SPDK bdev Controller", 00:13:21.170 "max_namespaces": 32, 00:13:21.170 "min_cntlid": 1, 00:13:21.170 "max_cntlid": 65519, 00:13:21.170 "namespaces": [ 00:13:21.170 { 00:13:21.170 "nsid": 1, 00:13:21.170 "bdev_name": "Malloc1", 00:13:21.170 "name": "Malloc1", 00:13:21.170 "nguid": "E33593F8AE0449D39DB0BE04F4B52EA8", 00:13:21.170 "uuid": "e33593f8-ae04-49d3-9db0-be04f4b52ea8" 00:13:21.170 }, 00:13:21.170 { 00:13:21.170 "nsid": 2, 00:13:21.170 "bdev_name": "Malloc3", 00:13:21.170 "name": "Malloc3", 00:13:21.170 "nguid": "01504B1F14904D948F033CB40CC656E0", 00:13:21.170 "uuid": "01504b1f-1490-4d94-8f03-3cb40cc656e0" 00:13:21.170 } 00:13:21.170 ] 00:13:21.170 }, 00:13:21.170 { 00:13:21.170 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:21.170 "subtype": "NVMe", 00:13:21.170 "listen_addresses": [ 00:13:21.170 { 00:13:21.170 "trtype": "VFIOUSER", 00:13:21.170 "adrfam": "IPv4", 00:13:21.170 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:21.170 "trsvcid": "0" 00:13:21.170 } 00:13:21.170 ], 00:13:21.170 "allow_any_host": true, 00:13:21.170 "hosts": [], 00:13:21.170 "serial_number": "SPDK2", 00:13:21.170 "model_number": "SPDK bdev Controller", 00:13:21.170 "max_namespaces": 32, 00:13:21.170 "min_cntlid": 1, 00:13:21.170 "max_cntlid": 65519, 00:13:21.170 "namespaces": [ 00:13:21.170 { 00:13:21.170 "nsid": 1, 00:13:21.170 "bdev_name": "Malloc2", 00:13:21.170 "name": "Malloc2", 00:13:21.170 "nguid": "EAA88A5FF01E4297987A7B05F96FA2F8", 00:13:21.170 "uuid": "eaa88a5f-f01e-4297-987a-7b05f96fa2f8" 00:13:21.170 } 00:13:21.170 ] 00:13:21.170 } 00:13:21.170 ] 00:13:21.170 20:54:11 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:21.170 20:54:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3950797 00:13:21.170 20:54:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:13:21.170 20:54:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:21.170 20:54:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:13:21.170 20:54:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:21.170 20:54:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:13:21.170 20:54:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:13:21.170 20:54:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:21.170 20:54:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:13:21.429 [2024-11-26 20:54:12.175226] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:21.429 Malloc4 00:13:21.429 20:54:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:13:21.687 [2024-11-26 20:54:12.570184] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:21.687 20:54:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:21.687 Asynchronous Event Request test 00:13:21.687 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:21.687 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:21.687 Registering asynchronous event callbacks... 00:13:21.688 Starting namespace attribute notice tests for all controllers... 00:13:21.688 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:21.688 aer_cb - Changed Namespace 00:13:21.688 Cleaning up... 
00:13:21.945 [ 00:13:21.945 { 00:13:21.945 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:21.945 "subtype": "Discovery", 00:13:21.945 "listen_addresses": [], 00:13:21.945 "allow_any_host": true, 00:13:21.945 "hosts": [] 00:13:21.945 }, 00:13:21.945 { 00:13:21.945 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:21.945 "subtype": "NVMe", 00:13:21.945 "listen_addresses": [ 00:13:21.945 { 00:13:21.945 "trtype": "VFIOUSER", 00:13:21.945 "adrfam": "IPv4", 00:13:21.945 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:21.945 "trsvcid": "0" 00:13:21.945 } 00:13:21.945 ], 00:13:21.945 "allow_any_host": true, 00:13:21.945 "hosts": [], 00:13:21.945 "serial_number": "SPDK1", 00:13:21.945 "model_number": "SPDK bdev Controller", 00:13:21.945 "max_namespaces": 32, 00:13:21.945 "min_cntlid": 1, 00:13:21.945 "max_cntlid": 65519, 00:13:21.945 "namespaces": [ 00:13:21.945 { 00:13:21.945 "nsid": 1, 00:13:21.945 "bdev_name": "Malloc1", 00:13:21.945 "name": "Malloc1", 00:13:21.945 "nguid": "E33593F8AE0449D39DB0BE04F4B52EA8", 00:13:21.946 "uuid": "e33593f8-ae04-49d3-9db0-be04f4b52ea8" 00:13:21.946 }, 00:13:21.946 { 00:13:21.946 "nsid": 2, 00:13:21.946 "bdev_name": "Malloc3", 00:13:21.946 "name": "Malloc3", 00:13:21.946 "nguid": "01504B1F14904D948F033CB40CC656E0", 00:13:21.946 "uuid": "01504b1f-1490-4d94-8f03-3cb40cc656e0" 00:13:21.946 } 00:13:21.946 ] 00:13:21.946 }, 00:13:21.946 { 00:13:21.946 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:21.946 "subtype": "NVMe", 00:13:21.946 "listen_addresses": [ 00:13:21.946 { 00:13:21.946 "trtype": "VFIOUSER", 00:13:21.946 "adrfam": "IPv4", 00:13:21.946 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:21.946 "trsvcid": "0" 00:13:21.946 } 00:13:21.946 ], 00:13:21.946 "allow_any_host": true, 00:13:21.946 "hosts": [], 00:13:21.946 "serial_number": "SPDK2", 00:13:21.946 "model_number": "SPDK bdev Controller", 00:13:21.946 "max_namespaces": 32, 00:13:21.946 "min_cntlid": 1, 00:13:21.946 "max_cntlid": 65519, 00:13:21.946 "namespaces": [ 
00:13:21.946 { 00:13:21.946 "nsid": 1, 00:13:21.946 "bdev_name": "Malloc2", 00:13:21.946 "name": "Malloc2", 00:13:21.946 "nguid": "EAA88A5FF01E4297987A7B05F96FA2F8", 00:13:21.946 "uuid": "eaa88a5f-f01e-4297-987a-7b05f96fa2f8" 00:13:21.946 }, 00:13:21.946 { 00:13:21.946 "nsid": 2, 00:13:21.946 "bdev_name": "Malloc4", 00:13:21.946 "name": "Malloc4", 00:13:21.946 "nguid": "4E3BE0677D6546C68436655814E54A88", 00:13:21.946 "uuid": "4e3be067-7d65-46c6-8436-655814e54a88" 00:13:21.946 } 00:13:21.946 ] 00:13:21.946 } 00:13:21.946 ] 00:13:21.946 20:54:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3950797 00:13:21.946 20:54:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:13:21.946 20:54:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3944577 00:13:21.946 20:54:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 3944577 ']' 00:13:21.946 20:54:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 3944577 00:13:21.946 20:54:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:13:21.946 20:54:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:21.946 20:54:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3944577 00:13:22.205 20:54:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:22.205 20:54:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:22.205 20:54:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3944577' 00:13:22.205 killing process with pid 3944577 00:13:22.205 20:54:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@973 -- # kill 3944577 00:13:22.205 20:54:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 3944577 00:13:22.464 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:22.464 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:22.464 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:13:22.464 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:13:22.464 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:13:22.464 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3950945 00:13:22.464 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:13:22.464 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3950945' 00:13:22.464 Process pid: 3950945 00:13:22.464 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:22.464 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3950945 00:13:22.464 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 3950945 ']' 00:13:22.464 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:22.464 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:22.464 
20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:22.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:22.464 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:22.464 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:22.464 [2024-11-26 20:54:13.277596] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:13:22.464 [2024-11-26 20:54:13.278606] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:13:22.464 [2024-11-26 20:54:13.278665] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:22.464 [2024-11-26 20:54:13.352301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:22.724 [2024-11-26 20:54:13.416615] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:22.724 [2024-11-26 20:54:13.416674] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:22.724 [2024-11-26 20:54:13.416702] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:22.724 [2024-11-26 20:54:13.416717] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:22.724 [2024-11-26 20:54:13.416728] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:22.724 [2024-11-26 20:54:13.418354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:22.724 [2024-11-26 20:54:13.418423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:22.724 [2024-11-26 20:54:13.418471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:22.724 [2024-11-26 20:54:13.418475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:22.724 [2024-11-26 20:54:13.516893] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:13:22.724 [2024-11-26 20:54:13.517056] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:13:22.724 [2024-11-26 20:54:13.517369] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:13:22.724 [2024-11-26 20:54:13.517999] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:13:22.724 [2024-11-26 20:54:13.518218] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:13:22.724 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:22.724 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:13:22.724 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:23.663 20:54:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:13:24.231 20:54:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:24.231 20:54:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:24.231 20:54:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:24.231 20:54:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:24.231 20:54:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:24.489 Malloc1 00:13:24.489 20:54:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:24.747 20:54:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:25.005 20:54:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:13:25.262 20:54:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:25.262 20:54:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:25.262 20:54:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:25.520 Malloc2 00:13:25.520 20:54:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:25.778 20:54:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:26.035 20:54:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:26.294 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:13:26.294 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3950945 00:13:26.294 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 3950945 ']' 00:13:26.294 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 3950945 00:13:26.294 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:13:26.294 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:26.294 20:54:17 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3950945 00:13:26.294 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:26.294 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:26.294 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3950945' 00:13:26.294 killing process with pid 3950945 00:13:26.294 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 3950945 00:13:26.294 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 3950945 00:13:26.860 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:26.860 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:26.860 00:13:26.860 real 0m53.497s 00:13:26.860 user 3m26.324s 00:13:26.860 sys 0m3.913s 00:13:26.860 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:26.861 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:26.861 ************************************ 00:13:26.861 END TEST nvmf_vfio_user 00:13:26.861 ************************************ 00:13:26.861 20:54:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:26.861 20:54:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:26.861 20:54:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:26.861 20:54:17 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:13:26.861 ************************************ 00:13:26.861 START TEST nvmf_vfio_user_nvme_compliance 00:13:26.861 ************************************ 00:13:26.861 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:26.861 * Looking for test storage... 00:13:26.861 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:13:26.861 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:26.861 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:13:26.861 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:26.861 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:26.861 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:26.861 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:26.861 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:26.861 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:13:26.861 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:13:26.861 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:13:26.861 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:13:26.861 20:54:17 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:13:26.861 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:13:26.861 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:13:26.861 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:26.861 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:13:26.861 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:13:26.861 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:26.861 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:26.861 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:13:26.861 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:13:26.861 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:26.861 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:13:26.861 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:13:26.861 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:13:26.861 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:13:26.861 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:26.861 20:54:17 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:13:26.861 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:13:26.861 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:26.861 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:26.861 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:13:26.861 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:26.861 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:26.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:26.861 --rc genhtml_branch_coverage=1 00:13:26.861 --rc genhtml_function_coverage=1 00:13:26.861 --rc genhtml_legend=1 00:13:26.861 --rc geninfo_all_blocks=1 00:13:26.861 --rc geninfo_unexecuted_blocks=1 00:13:26.861 00:13:26.861 ' 00:13:26.861 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:26.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:26.861 --rc genhtml_branch_coverage=1 00:13:26.861 --rc genhtml_function_coverage=1 00:13:26.861 --rc genhtml_legend=1 00:13:26.861 --rc geninfo_all_blocks=1 00:13:26.861 --rc geninfo_unexecuted_blocks=1 00:13:26.861 00:13:26.861 ' 00:13:26.861 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:26.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:26.861 --rc genhtml_branch_coverage=1 00:13:26.861 --rc genhtml_function_coverage=1 00:13:26.861 --rc 
genhtml_legend=1 00:13:26.861 --rc geninfo_all_blocks=1 00:13:26.861 --rc geninfo_unexecuted_blocks=1 00:13:26.861 00:13:26.861 ' 00:13:26.861 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:26.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:26.861 --rc genhtml_branch_coverage=1 00:13:26.861 --rc genhtml_function_coverage=1 00:13:26.861 --rc genhtml_legend=1 00:13:26.861 --rc geninfo_all_blocks=1 00:13:26.861 --rc geninfo_unexecuted_blocks=1 00:13:26.861 00:13:26.861 ' 00:13:26.861 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:26.861 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:13:26.861 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:26.861 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:26.861 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:26.861 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:26.861 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:26.861 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:26.861 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:26.861 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:26.861 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:26.861 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:26.861 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:26.861 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:26.861 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:26.861 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:26.861 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:26.861 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:26.861 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:26.861 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:13:26.861 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:26.861 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:26.861 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:26.861 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.861 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.861 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.862 20:54:17 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:13:26.862 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.862 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:13:26.862 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:26.862 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:26.862 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:26.862 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:26.862 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:26.862 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:26.862 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:26.862 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:26.862 20:54:17 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:26.862 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:26.862 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:26.862 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:26.862 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:13:26.862 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:13:26.862 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:13:26.862 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=3951552 00:13:26.862 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:26.862 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3951552' 00:13:26.862 Process pid: 3951552 00:13:26.862 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:26.862 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3951552 00:13:26.862 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 3951552 ']' 00:13:26.862 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:26.862 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:26.862 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:26.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:26.862 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:26.862 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:27.121 [2024-11-26 20:54:17.802953] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:13:27.121 [2024-11-26 20:54:17.803030] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:27.121 [2024-11-26 20:54:17.878215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:27.121 [2024-11-26 20:54:17.943472] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:27.121 [2024-11-26 20:54:17.943549] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:27.121 [2024-11-26 20:54:17.943565] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:27.121 [2024-11-26 20:54:17.943596] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:27.121 [2024-11-26 20:54:17.943608] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:27.121 [2024-11-26 20:54:17.945260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:27.121 [2024-11-26 20:54:17.945313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:27.121 [2024-11-26 20:54:17.945317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.381 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:27.381 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:13:27.381 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:13:28.318 20:54:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:28.318 20:54:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:13:28.318 20:54:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:28.318 20:54:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.318 20:54:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:28.318 20:54:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.318 20:54:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:13:28.318 20:54:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:28.318 20:54:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.318 20:54:19 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:28.318 malloc0 00:13:28.318 20:54:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.318 20:54:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:13:28.318 20:54:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.318 20:54:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:28.318 20:54:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.318 20:54:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:28.318 20:54:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.318 20:54:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:28.318 20:54:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.318 20:54:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:28.318 20:54:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.318 20:54:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:28.318 20:54:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:13:28.318 20:54:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:13:28.577 00:13:28.577 00:13:28.577 CUnit - A unit testing framework for C - Version 2.1-3 00:13:28.577 http://cunit.sourceforge.net/ 00:13:28.577 00:13:28.577 00:13:28.577 Suite: nvme_compliance 00:13:28.577 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-26 20:54:19.324208] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:28.577 [2024-11-26 20:54:19.325722] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:13:28.577 [2024-11-26 20:54:19.325747] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:13:28.577 [2024-11-26 20:54:19.325758] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:13:28.577 [2024-11-26 20:54:19.327232] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:28.577 passed 00:13:28.578 Test: admin_identify_ctrlr_verify_fused ...[2024-11-26 20:54:19.412794] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:28.578 [2024-11-26 20:54:19.415814] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:28.578 passed 00:13:28.578 Test: admin_identify_ns ...[2024-11-26 20:54:19.501182] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:28.834 [2024-11-26 20:54:19.560721] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:13:28.834 [2024-11-26 20:54:19.568706] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:13:28.834 [2024-11-26 20:54:19.589842] vfio_user.c:2802:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:13:28.834 passed 00:13:28.834 Test: admin_get_features_mandatory_features ...[2024-11-26 20:54:19.672915] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:28.834 [2024-11-26 20:54:19.675939] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:28.834 passed 00:13:28.834 Test: admin_get_features_optional_features ...[2024-11-26 20:54:19.759476] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:28.834 [2024-11-26 20:54:19.762495] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:29.091 passed 00:13:29.091 Test: admin_set_features_number_of_queues ...[2024-11-26 20:54:19.846184] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:29.091 [2024-11-26 20:54:19.951800] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:29.091 passed 00:13:29.349 Test: admin_get_log_page_mandatory_logs ...[2024-11-26 20:54:20.037345] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:29.349 [2024-11-26 20:54:20.040373] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:29.349 passed 00:13:29.349 Test: admin_get_log_page_with_lpo ...[2024-11-26 20:54:20.126465] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:29.349 [2024-11-26 20:54:20.194725] ctrlr.c:2699:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:13:29.349 [2024-11-26 20:54:20.207783] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:29.349 passed 00:13:29.609 Test: fabric_property_get ...[2024-11-26 20:54:20.290167] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:29.609 [2024-11-26 20:54:20.291445] vfio_user.c:5604:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:13:29.609 [2024-11-26 20:54:20.293193] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:29.609 passed 00:13:29.609 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-26 20:54:20.381841] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:29.609 [2024-11-26 20:54:20.383174] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:13:29.609 [2024-11-26 20:54:20.384862] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:29.609 passed 00:13:29.609 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-26 20:54:20.467469] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:29.868 [2024-11-26 20:54:20.550708] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:29.868 [2024-11-26 20:54:20.566710] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:29.868 [2024-11-26 20:54:20.571913] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:29.868 passed 00:13:29.868 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-26 20:54:20.655882] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:29.868 [2024-11-26 20:54:20.657222] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:13:29.868 [2024-11-26 20:54:20.658903] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:29.868 passed 00:13:29.868 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-26 20:54:20.742021] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:30.128 [2024-11-26 20:54:20.817700] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:30.129 [2024-11-26 
20:54:20.841700] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:30.129 [2024-11-26 20:54:20.846800] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:30.129 passed 00:13:30.129 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-26 20:54:20.929955] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:30.129 [2024-11-26 20:54:20.931300] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:13:30.129 [2024-11-26 20:54:20.931346] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:13:30.129 [2024-11-26 20:54:20.932992] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:30.129 passed 00:13:30.129 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-26 20:54:21.018600] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:30.388 [2024-11-26 20:54:21.115694] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:13:30.388 [2024-11-26 20:54:21.123697] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:13:30.388 [2024-11-26 20:54:21.131700] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:13:30.388 [2024-11-26 20:54:21.139694] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:13:30.388 [2024-11-26 20:54:21.168789] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:30.388 passed 00:13:30.388 Test: admin_create_io_sq_verify_pc ...[2024-11-26 20:54:21.252318] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:30.388 [2024-11-26 20:54:21.268714] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:13:30.388 [2024-11-26 20:54:21.286762] 
vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:30.388 passed 00:13:30.646 Test: admin_create_io_qp_max_qps ...[2024-11-26 20:54:21.368296] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:31.588 [2024-11-26 20:54:22.468719] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:13:32.155 [2024-11-26 20:54:22.848164] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:32.155 passed 00:13:32.155 Test: admin_create_io_sq_shared_cq ...[2024-11-26 20:54:22.930175] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:32.155 [2024-11-26 20:54:23.062709] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:32.416 [2024-11-26 20:54:23.099785] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:32.416 passed 00:13:32.416 00:13:32.416 Run Summary: Type Total Ran Passed Failed Inactive 00:13:32.416 suites 1 1 n/a 0 0 00:13:32.416 tests 18 18 18 0 0 00:13:32.416 asserts 360 360 360 0 n/a 00:13:32.416 00:13:32.416 Elapsed time = 1.564 seconds 00:13:32.416 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3951552 00:13:32.416 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 3951552 ']' 00:13:32.416 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 3951552 00:13:32.416 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:13:32.416 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:32.416 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3951552 00:13:32.416 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:32.416 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:32.416 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3951552' 00:13:32.416 killing process with pid 3951552 00:13:32.416 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 3951552 00:13:32.416 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 3951552 00:13:32.675 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:13:32.675 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:13:32.675 00:13:32.675 real 0m5.874s 00:13:32.675 user 0m16.387s 00:13:32.675 sys 0m0.564s 00:13:32.675 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:32.675 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:32.675 ************************************ 00:13:32.675 END TEST nvmf_vfio_user_nvme_compliance 00:13:32.675 ************************************ 00:13:32.675 20:54:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:32.675 20:54:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:32.675 20:54:23 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:13:32.675 20:54:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:32.675 ************************************ 00:13:32.675 START TEST nvmf_vfio_user_fuzz 00:13:32.675 ************************************ 00:13:32.675 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:32.675 * Looking for test storage... 00:13:32.675 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:32.675 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:32.675 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:13:32.675 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:32.934 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:32.934 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:32.934 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:32.934 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:32.934 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:13:32.934 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:13:32.934 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:13:32.934 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:13:32.934 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:13:32.934 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:13:32.934 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:13:32.934 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:32.934 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:13:32.934 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:13:32.934 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:32.934 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:32.935 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:13:32.935 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:13:32.935 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:32.935 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:13:32.935 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:13:32.935 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:13:32.935 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:13:32.935 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:32.935 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:13:32.935 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:13:32.935 20:54:23 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:32.935 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:32.935 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:13:32.935 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:32.935 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:32.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:32.935 --rc genhtml_branch_coverage=1 00:13:32.935 --rc genhtml_function_coverage=1 00:13:32.935 --rc genhtml_legend=1 00:13:32.935 --rc geninfo_all_blocks=1 00:13:32.935 --rc geninfo_unexecuted_blocks=1 00:13:32.935 00:13:32.935 ' 00:13:32.935 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:32.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:32.935 --rc genhtml_branch_coverage=1 00:13:32.935 --rc genhtml_function_coverage=1 00:13:32.935 --rc genhtml_legend=1 00:13:32.935 --rc geninfo_all_blocks=1 00:13:32.935 --rc geninfo_unexecuted_blocks=1 00:13:32.935 00:13:32.935 ' 00:13:32.935 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:32.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:32.935 --rc genhtml_branch_coverage=1 00:13:32.935 --rc genhtml_function_coverage=1 00:13:32.935 --rc genhtml_legend=1 00:13:32.935 --rc geninfo_all_blocks=1 00:13:32.935 --rc geninfo_unexecuted_blocks=1 00:13:32.935 00:13:32.935 ' 00:13:32.935 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:32.935 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:13:32.935 --rc genhtml_branch_coverage=1 00:13:32.935 --rc genhtml_function_coverage=1 00:13:32.935 --rc genhtml_legend=1 00:13:32.935 --rc geninfo_all_blocks=1 00:13:32.935 --rc geninfo_unexecuted_blocks=1 00:13:32.935 00:13:32.935 ' 00:13:32.935 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:32.935 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:13:32.935 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:32.935 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:32.935 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:32.935 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:32.935 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:32.935 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:32.935 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:32.935 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:32.935 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:32.935 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:32.935 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:32.935 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:32.935 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:32.935 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:32.935 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:32.935 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:32.935 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:32.935 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:13:32.935 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:32.935 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:32.935 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:32.935 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.935 20:54:23 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.935 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.935 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:13:32.935 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.935 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:13:32.935 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:32.935 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:32.935 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:32.935 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:32.935 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:32.935 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:32.935 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:32.935 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:32.935 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:32.935 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:32.935 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:13:32.935 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:32.935 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:32.935 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:13:32.935 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:32.935 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:32.935 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:13:32.935 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3952398 00:13:32.935 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:32.935 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3952398' 00:13:32.935 Process pid: 3952398 00:13:32.935 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:32.935 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3952398 00:13:32.935 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 3952398 ']' 00:13:32.935 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:32.936 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:32.936 20:54:23 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:32.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:32.936 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:32.936 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:33.195 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:33.195 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:13:33.195 20:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:13:34.133 20:54:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:34.133 20:54:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.133 20:54:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:34.133 20:54:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.133 20:54:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:13:34.133 20:54:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:34.134 20:54:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.134 20:54:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:34.134 malloc0 00:13:34.134 20:54:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.134 20:54:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:13:34.134 20:54:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.134 20:54:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:34.134 20:54:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.134 20:54:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:34.134 20:54:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.134 20:54:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:34.134 20:54:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.134 20:54:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:34.134 20:54:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.134 20:54:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:34.134 20:54:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.134 20:54:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:13:34.134 20:54:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:14:06.282 Fuzzing completed. Shutting down the fuzz application 00:14:06.282 00:14:06.282 Dumping successful admin opcodes: 00:14:06.282 9, 10, 00:14:06.282 Dumping successful io opcodes: 00:14:06.282 0, 00:14:06.282 NS: 0x20000081ef00 I/O qp, Total commands completed: 741627, total successful commands: 2871, random_seed: 2483634944 00:14:06.282 NS: 0x20000081ef00 admin qp, Total commands completed: 100944, total successful commands: 24, random_seed: 1699630528 00:14:06.282 20:54:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:14:06.283 20:54:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.283 20:54:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:06.283 20:54:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.283 20:54:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3952398 00:14:06.283 20:54:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 3952398 ']' 00:14:06.283 20:54:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 3952398 00:14:06.283 20:54:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:14:06.283 20:54:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:06.283 20:54:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3952398 00:14:06.283 20:54:55 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:06.283 20:54:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:06.283 20:54:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3952398' 00:14:06.283 killing process with pid 3952398 00:14:06.283 20:54:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 3952398 00:14:06.283 20:54:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 3952398 00:14:06.283 20:54:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:14:06.283 20:54:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:14:06.283 00:14:06.283 real 0m32.321s 00:14:06.283 user 0m34.340s 00:14:06.283 sys 0m26.868s 00:14:06.283 20:54:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:06.283 20:54:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:06.283 ************************************ 00:14:06.283 END TEST nvmf_vfio_user_fuzz 00:14:06.283 ************************************ 00:14:06.283 20:54:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:06.283 20:54:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:06.283 20:54:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:14:06.283 20:54:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:06.283 ************************************ 00:14:06.283 START TEST nvmf_auth_target 00:14:06.283 ************************************ 00:14:06.283 20:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:06.283 * Looking for test storage... 00:14:06.283 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:06.283 20:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:06.283 20:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:14:06.283 20:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:06.283 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:06.283 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:06.283 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:06.283 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:06.283 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:14:06.283 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:14:06.283 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:14:06.283 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:14:06.283 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:14:06.283 20:54:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:14:06.283 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:14:06.283 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:06.283 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:14:06.283 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:14:06.283 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:06.283 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:06.283 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:14:06.283 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:14:06.283 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:06.283 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:14:06.283 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:14:06.283 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:14:06.283 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:14:06.283 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:06.283 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:14:06.283 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:14:06.283 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:06.283 20:54:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:06.283 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:14:06.283 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:06.283 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:06.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.283 --rc genhtml_branch_coverage=1 00:14:06.283 --rc genhtml_function_coverage=1 00:14:06.283 --rc genhtml_legend=1 00:14:06.283 --rc geninfo_all_blocks=1 00:14:06.283 --rc geninfo_unexecuted_blocks=1 00:14:06.283 00:14:06.283 ' 00:14:06.283 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:06.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.283 --rc genhtml_branch_coverage=1 00:14:06.283 --rc genhtml_function_coverage=1 00:14:06.283 --rc genhtml_legend=1 00:14:06.283 --rc geninfo_all_blocks=1 00:14:06.283 --rc geninfo_unexecuted_blocks=1 00:14:06.283 00:14:06.283 ' 00:14:06.283 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:06.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.283 --rc genhtml_branch_coverage=1 00:14:06.283 --rc genhtml_function_coverage=1 00:14:06.283 --rc genhtml_legend=1 00:14:06.283 --rc geninfo_all_blocks=1 00:14:06.283 --rc geninfo_unexecuted_blocks=1 00:14:06.283 00:14:06.283 ' 00:14:06.283 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:06.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.283 --rc genhtml_branch_coverage=1 00:14:06.283 --rc genhtml_function_coverage=1 00:14:06.283 --rc genhtml_legend=1 00:14:06.283 
--rc geninfo_all_blocks=1 00:14:06.283 --rc geninfo_unexecuted_blocks=1 00:14:06.283 00:14:06.283 ' 00:14:06.283 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:06.283 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:14:06.283 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:06.283 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:06.283 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:06.283 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:06.283 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:06.283 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:06.283 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:06.283 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:06.283 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:06.283 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:06.283 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:06.283 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:06.283 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:06.283 
20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:06.283 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:06.283 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:06.283 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:06.283 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:14:06.283 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:06.284 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:06.284 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:06.284 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.284 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.284 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.284 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:14:06.284 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.284 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:14:06.284 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:06.284 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:06.284 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:06.284 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:06.284 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:06.284 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:06.284 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:06.284 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:06.284 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:06.284 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:06.284 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:14:06.284 20:54:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:14:06.284 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:14:06.284 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:06.284 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:14:06.284 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:14:06.284 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:14:06.284 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:14:06.284 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:06.284 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:06.284 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:06.284 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:06.284 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:06.284 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:06.284 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:06.284 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:06.284 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:06.284 20:54:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:06.284 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:14:06.284 20:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.224 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:07.224 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:14:07.224 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:07.224 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:07.224 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:07.224 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:07.224 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:07.224 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:14:07.224 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:07.224 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:14:07.224 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:14:07.224 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:14:07.224 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:14:07.224 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:14:07.224 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:14:07.224 20:54:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:07.224 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:07.224 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:07.224 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:07.224 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:07.224 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:07.224 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:07.224 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:07.224 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:07.224 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:07.224 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:07.224 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:07.224 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:07.224 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:07.224 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:07.224 20:54:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:07.224 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:07.224 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:07.224 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:07.224 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:07.224 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:07.224 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:07.224 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:07.224 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:07.224 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:07.224 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:07.224 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:07.224 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:07.224 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:07.224 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:07.224 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:07.224 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:07.224 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:07.224 
20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:07.224 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:07.224 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:07.224 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:07.224 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:07.224 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:07.224 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:07.224 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:07.224 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:07.224 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:07.225 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:07.225 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:07.225 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:07.225 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:07.225 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:07.225 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:07.225 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:07.225 
20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:07.225 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:07.225 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:07.225 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:07.225 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:07.225 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:07.225 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:07.225 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:07.225 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:14:07.225 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:07.225 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:07.225 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:07.225 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:07.225 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:07.225 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:07.225 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:07.225 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:07.225 20:54:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:07.225 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:07.225 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:07.225 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:07.225 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:07.225 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:07.225 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:07.225 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:07.225 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:07.225 20:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:07.225 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:07.225 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:07.225 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:07.225 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:07.225 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:07.225 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:07.225 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:07.225 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:07.225 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:07.225 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:14:07.225 00:14:07.225 --- 10.0.0.2 ping statistics --- 00:14:07.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:07.225 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:14:07.225 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:07.225 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:07.225 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:14:07.225 00:14:07.225 --- 10.0.0.1 ping statistics --- 00:14:07.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:07.225 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:14:07.225 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:07.225 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:14:07.225 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:07.225 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:07.225 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:07.225 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:07.225 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:07.225 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:07.225 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:07.225 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:14:07.225 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:07.225 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:07.225 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.225 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3957733 00:14:07.225 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:14:07.225 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3957733 00:14:07.225 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3957733 ']' 00:14:07.225 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:07.225 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:07.225 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:07.225 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:07.225 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.797 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:07.797 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:07.797 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:07.797 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:07.797 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.797 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:07.797 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=3957753 00:14:07.797 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:14:07.797 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:14:07.797 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:14:07.797 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:07.797 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:07.797 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:07.797 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@754 -- # digest=null 00:14:07.797 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:14:07.797 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:07.797 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=e01a9fd5237f1182ed0372dace317b40a3e7789a7aa33189 00:14:07.797 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:14:07.797 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Hpq 00:14:07.797 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key e01a9fd5237f1182ed0372dace317b40a3e7789a7aa33189 0 00:14:07.797 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 e01a9fd5237f1182ed0372dace317b40a3e7789a7aa33189 0 00:14:07.797 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:07.797 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:07.797 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=e01a9fd5237f1182ed0372dace317b40a3e7789a7aa33189 00:14:07.797 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:14:07.797 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:07.797 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Hpq 00:14:07.797 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Hpq 00:14:07.797 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.Hpq 00:14:07.797 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:14:07.797 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:07.797 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:07.797 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:07.797 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:14:07.797 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:14:07.797 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:07.797 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=771686c6c720b45019d74ef171cb4714d633571ac9765a878adc43f715307302 00:14:07.797 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:14:07.797 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.kTT 00:14:07.797 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 771686c6c720b45019d74ef171cb4714d633571ac9765a878adc43f715307302 3 00:14:07.797 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 771686c6c720b45019d74ef171cb4714d633571ac9765a878adc43f715307302 3 00:14:07.797 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:07.797 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:07.797 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=771686c6c720b45019d74ef171cb4714d633571ac9765a878adc43f715307302 00:14:07.797 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3 00:14:07.797 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:07.797 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.kTT 00:14:07.797 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.kTT 00:14:07.798 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.kTT 00:14:07.798 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:14:07.798 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:07.798 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:07.798 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:07.798 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:14:07.798 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:14:07.798 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:07.798 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=9b1d69782d983ee723c2b6d327c44d5c 00:14:07.798 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:14:07.798 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.aEs 00:14:07.798 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 9b1d69782d983ee723c2b6d327c44d5c 1 00:14:07.798 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
9b1d69782d983ee723c2b6d327c44d5c 1 00:14:07.798 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:07.798 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:07.798 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=9b1d69782d983ee723c2b6d327c44d5c 00:14:07.798 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:14:07.798 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:07.798 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.aEs 00:14:07.798 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.aEs 00:14:07.798 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.aEs 00:14:07.798 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:14:07.798 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:07.798 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:07.798 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:07.798 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:14:07.798 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:14:07.798 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:07.798 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=3158dff07152063ec5a561bf7bb39aa142d861e6b681b6bd 00:14:07.798 20:54:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:14:07.798 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.5DG 00:14:07.798 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 3158dff07152063ec5a561bf7bb39aa142d861e6b681b6bd 2 00:14:07.798 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 3158dff07152063ec5a561bf7bb39aa142d861e6b681b6bd 2 00:14:07.798 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:07.798 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:07.798 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=3158dff07152063ec5a561bf7bb39aa142d861e6b681b6bd 00:14:07.798 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:14:07.798 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:07.798 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.5DG 00:14:07.798 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.5DG 00:14:07.798 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.5DG 00:14:07.798 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:14:07.798 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:07.798 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:07.798 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A 
digests 00:14:07.798 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:14:07.798 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:14:07.798 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:07.798 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=c22b15c930cf59d46d37f8dc608ac84b5e6bf01035b346a5 00:14:07.798 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:14:07.798 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.i9Z 00:14:07.798 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key c22b15c930cf59d46d37f8dc608ac84b5e6bf01035b346a5 2 00:14:07.798 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 c22b15c930cf59d46d37f8dc608ac84b5e6bf01035b346a5 2 00:14:07.798 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:07.798 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:07.798 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=c22b15c930cf59d46d37f8dc608ac84b5e6bf01035b346a5 00:14:07.798 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:14:07.798 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:08.057 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.i9Z 00:14:08.057 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.i9Z 00:14:08.057 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.i9Z 00:14:08.057 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:14:08.057 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:08.057 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:08.057 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:08.058 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:14:08.058 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:14:08.058 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:08.058 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a2819839a1f84589cdacbd4d6fce24ba 00:14:08.058 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:14:08.058 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.IJN 00:14:08.058 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a2819839a1f84589cdacbd4d6fce24ba 1 00:14:08.058 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a2819839a1f84589cdacbd4d6fce24ba 1 00:14:08.058 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:08.058 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:08.058 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a2819839a1f84589cdacbd4d6fce24ba 00:14:08.058 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 
00:14:08.058 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:08.058 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.IJN 00:14:08.058 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.IJN 00:14:08.058 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.IJN 00:14:08.058 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:14:08.058 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:08.058 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:08.058 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:08.058 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:14:08.058 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:14:08.058 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:08.058 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=022f2e7689ca640f5f396cabfb2329e40681e08fa68d17fb94694203bb9ada62 00:14:08.058 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:14:08.058 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.2fO 00:14:08.058 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 022f2e7689ca640f5f396cabfb2329e40681e08fa68d17fb94694203bb9ada62 3 00:14:08.058 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # 
format_key DHHC-1 022f2e7689ca640f5f396cabfb2329e40681e08fa68d17fb94694203bb9ada62 3 00:14:08.058 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:08.058 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:08.058 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=022f2e7689ca640f5f396cabfb2329e40681e08fa68d17fb94694203bb9ada62 00:14:08.058 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:14:08.058 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:08.058 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.2fO 00:14:08.058 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.2fO 00:14:08.058 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.2fO 00:14:08.058 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:14:08.058 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 3957733 00:14:08.058 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3957733 ']' 00:14:08.058 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:08.058 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:08.058 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:08.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
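The `gen_dhchap_key`/`format_dhchap_key` steps traced above read hex from `/dev/urandom` with `xxd`, then run a `python -` heredoc to wrap the key as `DHHC-1:<digest-id>:<base64>:`. A minimal sketch of that wrapping step, assuming (modeled on nvme-cli's `gen-dhchap-key`, not confirmed from the trace) that the base64 payload is the key bytes followed by a little-endian CRC32 of them:

```python
import base64
import zlib

def format_dhchap_key(key: str, digest: int, prefix: str = "DHHC-1") -> str:
    """Sketch of the 'python -' heredoc above: base64(key bytes + little-endian
    CRC32 of those bytes), prefixed with DHHC-1:<two-digit digest id>: and
    terminated with ':'. The CRC32 trailer is an assumption."""
    raw = key.encode()  # the hex string itself is used as the secret bytes
    crc = zlib.crc32(raw).to_bytes(4, "little")
    return "%s:%02x:%s:" % (prefix, digest, base64.b64encode(raw + crc).decode())

# The sha256 ckey generated above: digest id 1, 32-char hex key
print(format_dhchap_key("a2819839a1f84589cdacbd4d6fce24ba", 1))
```

If the CRC assumption holds, this reproduces the `DHHC-1:01:YTI4MTk4...` value passed as `--dhchap-ctrl-secret` later in the trace.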
00:14:08.058 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:08.058 20:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.316 20:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:08.316 20:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:08.316 20:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 3957753 /var/tmp/host.sock 00:14:08.316 20:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3957753 ']' 00:14:08.316 20:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:14:08.316 20:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:08.316 20:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:08.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:14:08.316 20:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:08.316 20:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.574 20:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:08.574 20:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:08.574 20:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:14:08.574 20:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.574 20:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.574 20:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.574 20:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:08.574 20:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Hpq 00:14:08.574 20:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.574 20:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.574 20:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.574 20:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Hpq 00:14:08.574 20:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Hpq 00:14:08.833 20:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.kTT ]] 00:14:08.833 20:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.kTT 00:14:08.833 20:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.833 20:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.833 20:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.833 20:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.kTT 00:14:08.833 20:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.kTT 00:14:09.091 20:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:09.091 20:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.aEs 00:14:09.091 20:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.091 20:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.091 20:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.091 20:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.aEs 00:14:09.091 20:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.aEs 00:14:09.350 20:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.5DG ]] 00:14:09.350 20:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.5DG 00:14:09.350 20:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.350 20:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.350 20:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.350 20:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.5DG 00:14:09.350 20:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.5DG 00:14:09.610 20:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:09.610 20:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.i9Z 00:14:09.610 20:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.610 20:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.869 20:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.869 20:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.i9Z 00:14:09.869 20:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.i9Z 00:14:10.128 20:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.IJN ]] 00:14:10.128 20:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.IJN 00:14:10.128 20:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.128 20:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.128 20:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.128 20:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.IJN 00:14:10.128 20:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.IJN 00:14:10.387 20:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:10.387 20:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.2fO 00:14:10.387 20:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.387 20:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.387 20:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.387 20:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.2fO 00:14:10.387 20:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.2fO 00:14:10.645 20:55:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:14:10.645 20:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:14:10.645 20:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:10.645 20:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:10.645 20:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:10.645 20:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:10.904 20:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:14:10.904 20:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:10.904 20:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:10.904 20:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:10.904 20:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:10.904 20:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:10.904 20:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:10.904 20:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.904 20:55:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.904 20:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.904 20:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:10.904 20:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:10.904 20:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:11.163 00:14:11.421 20:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:11.421 20:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:11.421 20:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:11.680 20:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:11.680 20:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:11.680 20:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.680 20:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:11.680 20:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.680 20:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:11.680 { 00:14:11.680 "cntlid": 1, 00:14:11.680 "qid": 0, 00:14:11.680 "state": "enabled", 00:14:11.680 "thread": "nvmf_tgt_poll_group_000", 00:14:11.680 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:11.680 "listen_address": { 00:14:11.680 "trtype": "TCP", 00:14:11.680 "adrfam": "IPv4", 00:14:11.680 "traddr": "10.0.0.2", 00:14:11.680 "trsvcid": "4420" 00:14:11.680 }, 00:14:11.680 "peer_address": { 00:14:11.680 "trtype": "TCP", 00:14:11.680 "adrfam": "IPv4", 00:14:11.680 "traddr": "10.0.0.1", 00:14:11.680 "trsvcid": "55598" 00:14:11.680 }, 00:14:11.680 "auth": { 00:14:11.680 "state": "completed", 00:14:11.680 "digest": "sha256", 00:14:11.680 "dhgroup": "null" 00:14:11.680 } 00:14:11.680 } 00:14:11.680 ]' 00:14:11.680 20:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:11.680 20:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:11.680 20:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:11.680 20:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:11.680 20:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:11.680 20:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:11.680 20:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:11.680 20:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:11.939 20:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTAxYTlmZDUyMzdmMTE4MmVkMDM3MmRhY2UzMTdiNDBhM2U3Nzg5YTdhYTMzMTg5pxnC4w==: --dhchap-ctrl-secret DHHC-1:03:NzcxNjg2YzZjNzIwYjQ1MDE5ZDc0ZWYxNzFjYjQ3MTRkNjMzNTcxYWM5NzY1YTg3OGFkYzQzZjcxNTMwNzMwMh0q/go=: 00:14:11.939 20:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZTAxYTlmZDUyMzdmMTE4MmVkMDM3MmRhY2UzMTdiNDBhM2U3Nzg5YTdhYTMzMTg5pxnC4w==: --dhchap-ctrl-secret DHHC-1:03:NzcxNjg2YzZjNzIwYjQ1MDE5ZDc0ZWYxNzFjYjQ3MTRkNjMzNTcxYWM5NzY1YTg3OGFkYzQzZjcxNTMwNzMwMh0q/go=: 00:14:12.873 20:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:12.873 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:12.873 20:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:12.873 20:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.873 20:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.873 20:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.873 20:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:12.873 20:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:14:12.873 20:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:13.131 20:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:14:13.131 20:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:13.131 20:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:13.131 20:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:13.131 20:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:13.131 20:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:13.131 20:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:13.131 20:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.131 20:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.131 20:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.131 20:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:13.131 20:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:13.131 20:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:13.698 00:14:13.698 20:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:13.698 20:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:13.698 20:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:13.956 20:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:13.956 20:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:13.956 20:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.956 20:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.956 20:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.956 20:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:13.956 { 00:14:13.956 "cntlid": 3, 00:14:13.956 "qid": 0, 00:14:13.956 "state": "enabled", 00:14:13.956 "thread": "nvmf_tgt_poll_group_000", 00:14:13.956 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:13.956 "listen_address": { 00:14:13.956 "trtype": "TCP", 00:14:13.956 "adrfam": "IPv4", 00:14:13.956 
"traddr": "10.0.0.2", 00:14:13.956 "trsvcid": "4420" 00:14:13.956 }, 00:14:13.956 "peer_address": { 00:14:13.956 "trtype": "TCP", 00:14:13.956 "adrfam": "IPv4", 00:14:13.956 "traddr": "10.0.0.1", 00:14:13.956 "trsvcid": "49420" 00:14:13.956 }, 00:14:13.956 "auth": { 00:14:13.956 "state": "completed", 00:14:13.956 "digest": "sha256", 00:14:13.956 "dhgroup": "null" 00:14:13.956 } 00:14:13.956 } 00:14:13.956 ]' 00:14:13.956 20:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:13.956 20:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:13.956 20:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:13.956 20:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:13.956 20:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:13.956 20:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:13.956 20:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:13.956 20:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:14.215 20:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWIxZDY5NzgyZDk4M2VlNzIzYzJiNmQzMjdjNDRkNWP7QUfO: --dhchap-ctrl-secret DHHC-1:02:MzE1OGRmZjA3MTUyMDYzZWM1YTU2MWJmN2JiMzlhYTE0MmQ4NjFlNmI2ODFiNmJkiwp7Tg==: 00:14:14.215 20:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OWIxZDY5NzgyZDk4M2VlNzIzYzJiNmQzMjdjNDRkNWP7QUfO: --dhchap-ctrl-secret DHHC-1:02:MzE1OGRmZjA3MTUyMDYzZWM1YTU2MWJmN2JiMzlhYTE0MmQ4NjFlNmI2ODFiNmJkiwp7Tg==: 00:14:15.151 20:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:15.151 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:15.151 20:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:15.151 20:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.151 20:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.409 20:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.409 20:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:15.409 20:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:15.409 20:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:15.668 20:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:14:15.668 20:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:15.668 20:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:15.668 20:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:14:15.668 20:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:15.668 20:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:15.668 20:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:15.668 20:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.668 20:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.668 20:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.668 20:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:15.668 20:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:15.668 20:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:15.926 00:14:15.926 20:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:15.926 20:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:15.926 
20:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:16.184 20:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:16.184 20:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:16.184 20:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.184 20:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.184 20:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.184 20:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:16.184 { 00:14:16.184 "cntlid": 5, 00:14:16.184 "qid": 0, 00:14:16.184 "state": "enabled", 00:14:16.184 "thread": "nvmf_tgt_poll_group_000", 00:14:16.184 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:16.184 "listen_address": { 00:14:16.184 "trtype": "TCP", 00:14:16.184 "adrfam": "IPv4", 00:14:16.184 "traddr": "10.0.0.2", 00:14:16.184 "trsvcid": "4420" 00:14:16.184 }, 00:14:16.184 "peer_address": { 00:14:16.184 "trtype": "TCP", 00:14:16.184 "adrfam": "IPv4", 00:14:16.184 "traddr": "10.0.0.1", 00:14:16.184 "trsvcid": "49452" 00:14:16.184 }, 00:14:16.184 "auth": { 00:14:16.184 "state": "completed", 00:14:16.184 "digest": "sha256", 00:14:16.184 "dhgroup": "null" 00:14:16.184 } 00:14:16.184 } 00:14:16.184 ]' 00:14:16.184 20:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:16.442 20:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:16.442 20:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:14:16.442 20:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:16.442 20:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:16.442 20:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:16.442 20:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:16.442 20:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:16.700 20:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzIyYjE1YzkzMGNmNTlkNDZkMzdmOGRjNjA4YWM4NGI1ZTZiZjAxMDM1YjM0NmE1uNn0sA==: --dhchap-ctrl-secret DHHC-1:01:YTI4MTk4MzlhMWY4NDU4OWNkYWNiZDRkNmZjZTI0YmEagh6r: 00:14:16.700 20:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YzIyYjE1YzkzMGNmNTlkNDZkMzdmOGRjNjA4YWM4NGI1ZTZiZjAxMDM1YjM0NmE1uNn0sA==: --dhchap-ctrl-secret DHHC-1:01:YTI4MTk4MzlhMWY4NDU4OWNkYWNiZDRkNmZjZTI0YmEagh6r: 00:14:17.637 20:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:17.637 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:17.637 20:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:17.637 20:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.637 20:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.637 20:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.637 20:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:17.637 20:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:17.637 20:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:17.896 20:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:14:17.896 20:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:17.896 20:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:17.896 20:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:17.896 20:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:17.896 20:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:17.896 20:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:17.896 20:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.896 20:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:14:17.896 20:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.896 20:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:17.896 20:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:17.896 20:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:18.463 00:14:18.463 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:18.463 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:18.463 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:18.721 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:18.721 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:18.721 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.721 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.721 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.721 
20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:18.721 { 00:14:18.721 "cntlid": 7, 00:14:18.721 "qid": 0, 00:14:18.721 "state": "enabled", 00:14:18.721 "thread": "nvmf_tgt_poll_group_000", 00:14:18.721 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:18.721 "listen_address": { 00:14:18.721 "trtype": "TCP", 00:14:18.721 "adrfam": "IPv4", 00:14:18.721 "traddr": "10.0.0.2", 00:14:18.721 "trsvcid": "4420" 00:14:18.721 }, 00:14:18.721 "peer_address": { 00:14:18.721 "trtype": "TCP", 00:14:18.721 "adrfam": "IPv4", 00:14:18.721 "traddr": "10.0.0.1", 00:14:18.721 "trsvcid": "49480" 00:14:18.721 }, 00:14:18.721 "auth": { 00:14:18.721 "state": "completed", 00:14:18.721 "digest": "sha256", 00:14:18.721 "dhgroup": "null" 00:14:18.721 } 00:14:18.721 } 00:14:18.721 ]' 00:14:18.721 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:18.721 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:18.721 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:18.721 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:18.721 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:18.721 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:18.721 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:18.721 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:18.980 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDIyZjJlNzY4OWNhNjQwZjVmMzk2Y2FiZmIyMzI5ZTQwNjgxZTA4ZmE2OGQxN2ZiOTQ2OTQyMDNiYjlhZGE2MgIZE5s=: 00:14:18.980 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MDIyZjJlNzY4OWNhNjQwZjVmMzk2Y2FiZmIyMzI5ZTQwNjgxZTA4ZmE2OGQxN2ZiOTQ2OTQyMDNiYjlhZGE2MgIZE5s=: 00:14:19.921 20:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:20.180 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:20.180 20:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:20.180 20:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.180 20:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.180 20:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.180 20:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:20.180 20:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:20.180 20:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:20.180 20:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:14:20.439 20:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:14:20.439 20:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:20.439 20:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:20.439 20:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:20.439 20:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:20.439 20:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:20.439 20:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:20.439 20:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.439 20:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.439 20:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.439 20:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:20.439 20:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:20.439 20:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:20.698 00:14:20.698 20:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:20.698 20:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:20.698 20:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:20.956 20:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:20.956 20:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:20.956 20:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.956 20:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.956 20:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.956 20:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:20.956 { 00:14:20.956 "cntlid": 9, 00:14:20.956 "qid": 0, 00:14:20.956 "state": "enabled", 00:14:20.956 "thread": "nvmf_tgt_poll_group_000", 00:14:20.956 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:20.956 "listen_address": { 00:14:20.956 "trtype": "TCP", 00:14:20.956 "adrfam": "IPv4", 00:14:20.956 "traddr": "10.0.0.2", 00:14:20.956 "trsvcid": "4420" 00:14:20.956 }, 00:14:20.956 "peer_address": { 00:14:20.956 "trtype": "TCP", 00:14:20.956 "adrfam": "IPv4", 00:14:20.956 "traddr": "10.0.0.1", 00:14:20.956 "trsvcid": "49508" 00:14:20.956 
}, 00:14:20.956 "auth": { 00:14:20.956 "state": "completed", 00:14:20.956 "digest": "sha256", 00:14:20.956 "dhgroup": "ffdhe2048" 00:14:20.956 } 00:14:20.956 } 00:14:20.956 ]' 00:14:20.956 20:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:21.214 20:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:21.215 20:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:21.215 20:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:21.215 20:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:21.215 20:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:21.215 20:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:21.215 20:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:21.473 20:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTAxYTlmZDUyMzdmMTE4MmVkMDM3MmRhY2UzMTdiNDBhM2U3Nzg5YTdhYTMzMTg5pxnC4w==: --dhchap-ctrl-secret DHHC-1:03:NzcxNjg2YzZjNzIwYjQ1MDE5ZDc0ZWYxNzFjYjQ3MTRkNjMzNTcxYWM5NzY1YTg3OGFkYzQzZjcxNTMwNzMwMh0q/go=: 00:14:21.473 20:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZTAxYTlmZDUyMzdmMTE4MmVkMDM3MmRhY2UzMTdiNDBhM2U3Nzg5YTdhYTMzMTg5pxnC4w==: --dhchap-ctrl-secret 
DHHC-1:03:NzcxNjg2YzZjNzIwYjQ1MDE5ZDc0ZWYxNzFjYjQ3MTRkNjMzNTcxYWM5NzY1YTg3OGFkYzQzZjcxNTMwNzMwMh0q/go=: 00:14:22.406 20:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:22.406 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:22.406 20:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:22.406 20:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.406 20:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.406 20:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.406 20:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:22.406 20:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:22.406 20:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:22.662 20:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:14:22.662 20:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:22.662 20:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:22.662 20:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:22.662 20:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:14:22.662 20:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:22.662 20:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:22.662 20:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.662 20:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.662 20:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.662 20:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:22.662 20:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:22.662 20:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:23.225 00:14:23.225 20:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:23.225 20:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:23.225 20:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:23.481 20:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:23.481 20:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:23.482 20:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.482 20:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.482 20:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.482 20:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:23.482 { 00:14:23.482 "cntlid": 11, 00:14:23.482 "qid": 0, 00:14:23.482 "state": "enabled", 00:14:23.482 "thread": "nvmf_tgt_poll_group_000", 00:14:23.482 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:23.482 "listen_address": { 00:14:23.482 "trtype": "TCP", 00:14:23.482 "adrfam": "IPv4", 00:14:23.482 "traddr": "10.0.0.2", 00:14:23.482 "trsvcid": "4420" 00:14:23.482 }, 00:14:23.482 "peer_address": { 00:14:23.482 "trtype": "TCP", 00:14:23.482 "adrfam": "IPv4", 00:14:23.482 "traddr": "10.0.0.1", 00:14:23.482 "trsvcid": "47846" 00:14:23.482 }, 00:14:23.482 "auth": { 00:14:23.482 "state": "completed", 00:14:23.482 "digest": "sha256", 00:14:23.482 "dhgroup": "ffdhe2048" 00:14:23.482 } 00:14:23.482 } 00:14:23.482 ]' 00:14:23.482 20:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:23.482 20:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:23.482 20:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:23.482 20:55:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:23.482 20:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:23.482 20:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:23.482 20:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:23.482 20:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:23.739 20:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWIxZDY5NzgyZDk4M2VlNzIzYzJiNmQzMjdjNDRkNWP7QUfO: --dhchap-ctrl-secret DHHC-1:02:MzE1OGRmZjA3MTUyMDYzZWM1YTU2MWJmN2JiMzlhYTE0MmQ4NjFlNmI2ODFiNmJkiwp7Tg==: 00:14:23.739 20:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OWIxZDY5NzgyZDk4M2VlNzIzYzJiNmQzMjdjNDRkNWP7QUfO: --dhchap-ctrl-secret DHHC-1:02:MzE1OGRmZjA3MTUyMDYzZWM1YTU2MWJmN2JiMzlhYTE0MmQ4NjFlNmI2ODFiNmJkiwp7Tg==: 00:14:24.672 20:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:24.672 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:24.672 20:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:24.672 20:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:24.672 20:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.672 20:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.672 20:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:24.672 20:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:24.672 20:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:25.238 20:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:14:25.238 20:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:25.238 20:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:25.238 20:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:25.238 20:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:25.238 20:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:25.238 20:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:25.238 20:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.238 20:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:14:25.238 20:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.238 20:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:25.238 20:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:25.238 20:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:25.496 00:14:25.496 20:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:25.496 20:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:25.496 20:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:25.754 20:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:25.754 20:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:25.754 20:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.754 20:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.754 20:55:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.754 20:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:25.754 { 00:14:25.754 "cntlid": 13, 00:14:25.754 "qid": 0, 00:14:25.754 "state": "enabled", 00:14:25.754 "thread": "nvmf_tgt_poll_group_000", 00:14:25.754 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:25.754 "listen_address": { 00:14:25.754 "trtype": "TCP", 00:14:25.754 "adrfam": "IPv4", 00:14:25.754 "traddr": "10.0.0.2", 00:14:25.754 "trsvcid": "4420" 00:14:25.754 }, 00:14:25.754 "peer_address": { 00:14:25.754 "trtype": "TCP", 00:14:25.754 "adrfam": "IPv4", 00:14:25.754 "traddr": "10.0.0.1", 00:14:25.754 "trsvcid": "47884" 00:14:25.754 }, 00:14:25.754 "auth": { 00:14:25.754 "state": "completed", 00:14:25.754 "digest": "sha256", 00:14:25.754 "dhgroup": "ffdhe2048" 00:14:25.754 } 00:14:25.754 } 00:14:25.754 ]' 00:14:25.754 20:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:25.754 20:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:25.754 20:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:25.754 20:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:25.754 20:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:25.754 20:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:25.754 20:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:25.754 20:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:26.012 20:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzIyYjE1YzkzMGNmNTlkNDZkMzdmOGRjNjA4YWM4NGI1ZTZiZjAxMDM1YjM0NmE1uNn0sA==: --dhchap-ctrl-secret DHHC-1:01:YTI4MTk4MzlhMWY4NDU4OWNkYWNiZDRkNmZjZTI0YmEagh6r: 00:14:26.012 20:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YzIyYjE1YzkzMGNmNTlkNDZkMzdmOGRjNjA4YWM4NGI1ZTZiZjAxMDM1YjM0NmE1uNn0sA==: --dhchap-ctrl-secret DHHC-1:01:YTI4MTk4MzlhMWY4NDU4OWNkYWNiZDRkNmZjZTI0YmEagh6r: 00:14:27.385 20:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:27.385 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:27.385 20:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:27.385 20:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.385 20:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.385 20:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.385 20:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:27.385 20:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:27.385 20:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:27.385 20:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:14:27.385 20:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:27.385 20:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:27.385 20:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:27.385 20:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:27.385 20:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:27.385 20:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:27.385 20:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.385 20:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.385 20:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.385 20:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:27.385 20:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:27.386 20:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:27.953 00:14:27.953 20:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:27.953 20:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:27.953 20:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:27.953 20:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:27.953 20:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:27.953 20:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.953 20:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.953 20:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.953 20:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:27.953 { 00:14:27.953 "cntlid": 15, 00:14:27.953 "qid": 0, 00:14:27.953 "state": "enabled", 00:14:27.953 "thread": "nvmf_tgt_poll_group_000", 00:14:27.953 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:27.953 "listen_address": { 00:14:27.953 "trtype": "TCP", 00:14:27.953 "adrfam": "IPv4", 00:14:27.953 "traddr": "10.0.0.2", 00:14:27.953 "trsvcid": "4420" 00:14:27.953 }, 00:14:27.953 "peer_address": { 00:14:27.953 "trtype": "TCP", 00:14:27.953 "adrfam": "IPv4", 00:14:27.953 "traddr": "10.0.0.1", 
00:14:27.953 "trsvcid": "47898" 00:14:27.953 }, 00:14:27.953 "auth": { 00:14:27.953 "state": "completed", 00:14:27.953 "digest": "sha256", 00:14:27.953 "dhgroup": "ffdhe2048" 00:14:27.953 } 00:14:27.953 } 00:14:27.953 ]' 00:14:27.953 20:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:28.211 20:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:28.211 20:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:28.211 20:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:28.211 20:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:28.211 20:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:28.211 20:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:28.211 20:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:28.468 20:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDIyZjJlNzY4OWNhNjQwZjVmMzk2Y2FiZmIyMzI5ZTQwNjgxZTA4ZmE2OGQxN2ZiOTQ2OTQyMDNiYjlhZGE2MgIZE5s=: 00:14:28.468 20:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MDIyZjJlNzY4OWNhNjQwZjVmMzk2Y2FiZmIyMzI5ZTQwNjgxZTA4ZmE2OGQxN2ZiOTQ2OTQyMDNiYjlhZGE2MgIZE5s=: 00:14:29.402 20:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:29.402 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:29.402 20:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:29.402 20:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.402 20:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.402 20:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.402 20:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:29.402 20:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:29.402 20:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:29.402 20:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:29.968 20:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:14:29.968 20:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:29.968 20:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:29.968 20:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:29.968 20:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:29.968 20:55:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:29.968 20:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:29.968 20:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.968 20:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.968 20:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.968 20:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:29.968 20:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:29.968 20:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:30.226 00:14:30.226 20:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:30.226 20:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:30.226 20:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:30.485 20:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:30.485 20:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:30.485 20:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.485 20:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.485 20:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.485 20:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:30.485 { 00:14:30.485 "cntlid": 17, 00:14:30.485 "qid": 0, 00:14:30.485 "state": "enabled", 00:14:30.485 "thread": "nvmf_tgt_poll_group_000", 00:14:30.485 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:30.485 "listen_address": { 00:14:30.485 "trtype": "TCP", 00:14:30.485 "adrfam": "IPv4", 00:14:30.485 "traddr": "10.0.0.2", 00:14:30.485 "trsvcid": "4420" 00:14:30.485 }, 00:14:30.485 "peer_address": { 00:14:30.485 "trtype": "TCP", 00:14:30.485 "adrfam": "IPv4", 00:14:30.485 "traddr": "10.0.0.1", 00:14:30.485 "trsvcid": "47938" 00:14:30.485 }, 00:14:30.485 "auth": { 00:14:30.485 "state": "completed", 00:14:30.485 "digest": "sha256", 00:14:30.485 "dhgroup": "ffdhe3072" 00:14:30.485 } 00:14:30.485 } 00:14:30.485 ]' 00:14:30.485 20:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:30.485 20:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:30.485 20:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:30.485 20:55:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:30.485 20:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:30.485 20:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:30.485 20:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:30.485 20:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:31.051 20:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTAxYTlmZDUyMzdmMTE4MmVkMDM3MmRhY2UzMTdiNDBhM2U3Nzg5YTdhYTMzMTg5pxnC4w==: --dhchap-ctrl-secret DHHC-1:03:NzcxNjg2YzZjNzIwYjQ1MDE5ZDc0ZWYxNzFjYjQ3MTRkNjMzNTcxYWM5NzY1YTg3OGFkYzQzZjcxNTMwNzMwMh0q/go=: 00:14:31.051 20:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZTAxYTlmZDUyMzdmMTE4MmVkMDM3MmRhY2UzMTdiNDBhM2U3Nzg5YTdhYTMzMTg5pxnC4w==: --dhchap-ctrl-secret DHHC-1:03:NzcxNjg2YzZjNzIwYjQ1MDE5ZDc0ZWYxNzFjYjQ3MTRkNjMzNTcxYWM5NzY1YTg3OGFkYzQzZjcxNTMwNzMwMh0q/go=: 00:14:31.985 20:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:31.985 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:31.985 20:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:31.985 20:55:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.985 20:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.985 20:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.985 20:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:31.985 20:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:31.985 20:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:32.243 20:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:14:32.243 20:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:32.243 20:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:32.243 20:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:32.243 20:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:32.243 20:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:32.243 20:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:32.243 20:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.243 20:55:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.243 20:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.243 20:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:32.243 20:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:32.243 20:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:32.502 00:14:32.502 20:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:32.502 20:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:32.502 20:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:32.760 20:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:32.760 20:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:32.760 20:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.760 20:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:32.760 20:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.760 20:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:32.760 { 00:14:32.760 "cntlid": 19, 00:14:32.760 "qid": 0, 00:14:32.760 "state": "enabled", 00:14:32.760 "thread": "nvmf_tgt_poll_group_000", 00:14:32.760 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:32.760 "listen_address": { 00:14:32.760 "trtype": "TCP", 00:14:32.760 "adrfam": "IPv4", 00:14:32.760 "traddr": "10.0.0.2", 00:14:32.760 "trsvcid": "4420" 00:14:32.760 }, 00:14:32.760 "peer_address": { 00:14:32.760 "trtype": "TCP", 00:14:32.760 "adrfam": "IPv4", 00:14:32.760 "traddr": "10.0.0.1", 00:14:32.760 "trsvcid": "34348" 00:14:32.760 }, 00:14:32.760 "auth": { 00:14:32.760 "state": "completed", 00:14:32.760 "digest": "sha256", 00:14:32.760 "dhgroup": "ffdhe3072" 00:14:32.760 } 00:14:32.760 } 00:14:32.760 ]' 00:14:32.760 20:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:32.760 20:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:32.760 20:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:32.760 20:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:32.760 20:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:33.018 20:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:33.018 20:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:33.018 20:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:33.277 20:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWIxZDY5NzgyZDk4M2VlNzIzYzJiNmQzMjdjNDRkNWP7QUfO: --dhchap-ctrl-secret DHHC-1:02:MzE1OGRmZjA3MTUyMDYzZWM1YTU2MWJmN2JiMzlhYTE0MmQ4NjFlNmI2ODFiNmJkiwp7Tg==: 00:14:33.277 20:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OWIxZDY5NzgyZDk4M2VlNzIzYzJiNmQzMjdjNDRkNWP7QUfO: --dhchap-ctrl-secret DHHC-1:02:MzE1OGRmZjA3MTUyMDYzZWM1YTU2MWJmN2JiMzlhYTE0MmQ4NjFlNmI2ODFiNmJkiwp7Tg==: 00:14:34.211 20:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:34.211 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:34.211 20:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:34.211 20:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.211 20:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.211 20:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.211 20:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:34.211 20:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:34.211 20:55:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:34.469 20:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:14:34.469 20:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:34.469 20:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:34.469 20:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:34.469 20:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:34.469 20:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:34.469 20:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:34.469 20:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.469 20:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.469 20:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.469 20:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:34.469 20:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:34.469 20:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:34.727 00:14:34.727 20:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:34.727 20:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:34.727 20:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:35.006 20:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:35.006 20:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:35.006 20:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.006 20:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.006 20:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.006 20:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:35.007 { 00:14:35.007 "cntlid": 21, 00:14:35.007 "qid": 0, 00:14:35.007 "state": "enabled", 00:14:35.007 "thread": "nvmf_tgt_poll_group_000", 00:14:35.007 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:35.007 "listen_address": { 00:14:35.007 "trtype": "TCP", 00:14:35.007 "adrfam": "IPv4", 00:14:35.007 "traddr": "10.0.0.2", 00:14:35.007 
"trsvcid": "4420" 00:14:35.007 }, 00:14:35.007 "peer_address": { 00:14:35.007 "trtype": "TCP", 00:14:35.007 "adrfam": "IPv4", 00:14:35.007 "traddr": "10.0.0.1", 00:14:35.007 "trsvcid": "34380" 00:14:35.007 }, 00:14:35.007 "auth": { 00:14:35.007 "state": "completed", 00:14:35.007 "digest": "sha256", 00:14:35.007 "dhgroup": "ffdhe3072" 00:14:35.007 } 00:14:35.007 } 00:14:35.007 ]' 00:14:35.007 20:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:35.007 20:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:35.335 20:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:35.335 20:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:35.335 20:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:35.335 20:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:35.335 20:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:35.335 20:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:35.619 20:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzIyYjE1YzkzMGNmNTlkNDZkMzdmOGRjNjA4YWM4NGI1ZTZiZjAxMDM1YjM0NmE1uNn0sA==: --dhchap-ctrl-secret DHHC-1:01:YTI4MTk4MzlhMWY4NDU4OWNkYWNiZDRkNmZjZTI0YmEagh6r: 00:14:35.619 20:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YzIyYjE1YzkzMGNmNTlkNDZkMzdmOGRjNjA4YWM4NGI1ZTZiZjAxMDM1YjM0NmE1uNn0sA==: --dhchap-ctrl-secret DHHC-1:01:YTI4MTk4MzlhMWY4NDU4OWNkYWNiZDRkNmZjZTI0YmEagh6r: 00:14:36.553 20:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:36.553 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:36.553 20:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:36.553 20:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.553 20:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.553 20:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.553 20:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:36.553 20:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:36.553 20:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:36.811 20:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:14:36.811 20:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:36.811 20:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:36.811 20:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:36.811 20:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:36.811 20:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:36.811 20:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:36.811 20:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.811 20:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.811 20:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.811 20:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:36.811 20:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:36.811 20:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:37.069 00:14:37.069 20:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:37.069 20:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:37.069 20:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:37.327 20:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:37.327 20:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:37.327 20:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.327 20:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.327 20:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.327 20:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:37.327 { 00:14:37.327 "cntlid": 23, 00:14:37.327 "qid": 0, 00:14:37.327 "state": "enabled", 00:14:37.327 "thread": "nvmf_tgt_poll_group_000", 00:14:37.327 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:37.327 "listen_address": { 00:14:37.327 "trtype": "TCP", 00:14:37.327 "adrfam": "IPv4", 00:14:37.327 "traddr": "10.0.0.2", 00:14:37.327 "trsvcid": "4420" 00:14:37.327 }, 00:14:37.327 "peer_address": { 00:14:37.327 "trtype": "TCP", 00:14:37.327 "adrfam": "IPv4", 00:14:37.327 "traddr": "10.0.0.1", 00:14:37.327 "trsvcid": "34402" 00:14:37.327 }, 00:14:37.327 "auth": { 00:14:37.327 "state": "completed", 00:14:37.327 "digest": "sha256", 00:14:37.327 "dhgroup": "ffdhe3072" 00:14:37.327 } 00:14:37.327 } 00:14:37.327 ]' 00:14:37.327 20:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:37.327 20:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:37.327 20:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:37.585 20:55:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:37.585 20:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:37.585 20:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:37.585 20:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:37.585 20:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:37.843 20:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDIyZjJlNzY4OWNhNjQwZjVmMzk2Y2FiZmIyMzI5ZTQwNjgxZTA4ZmE2OGQxN2ZiOTQ2OTQyMDNiYjlhZGE2MgIZE5s=: 00:14:37.843 20:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MDIyZjJlNzY4OWNhNjQwZjVmMzk2Y2FiZmIyMzI5ZTQwNjgxZTA4ZmE2OGQxN2ZiOTQ2OTQyMDNiYjlhZGE2MgIZE5s=: 00:14:38.780 20:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:38.780 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:38.780 20:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:38.780 20:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.780 20:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:14:38.780 20:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.780 20:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:38.780 20:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:38.780 20:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:38.780 20:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:39.039 20:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:14:39.039 20:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:39.039 20:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:39.039 20:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:39.039 20:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:39.039 20:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:39.039 20:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:39.039 20:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.039 20:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:14:39.039 20:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.039 20:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:39.039 20:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:39.039 20:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:39.605 00:14:39.605 20:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:39.605 20:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:39.605 20:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:39.863 20:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:39.863 20:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:39.863 20:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.863 20:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.863 20:55:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.864 20:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:39.864 { 00:14:39.864 "cntlid": 25, 00:14:39.864 "qid": 0, 00:14:39.864 "state": "enabled", 00:14:39.864 "thread": "nvmf_tgt_poll_group_000", 00:14:39.864 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:39.864 "listen_address": { 00:14:39.864 "trtype": "TCP", 00:14:39.864 "adrfam": "IPv4", 00:14:39.864 "traddr": "10.0.0.2", 00:14:39.864 "trsvcid": "4420" 00:14:39.864 }, 00:14:39.864 "peer_address": { 00:14:39.864 "trtype": "TCP", 00:14:39.864 "adrfam": "IPv4", 00:14:39.864 "traddr": "10.0.0.1", 00:14:39.864 "trsvcid": "34436" 00:14:39.864 }, 00:14:39.864 "auth": { 00:14:39.864 "state": "completed", 00:14:39.864 "digest": "sha256", 00:14:39.864 "dhgroup": "ffdhe4096" 00:14:39.864 } 00:14:39.864 } 00:14:39.864 ]' 00:14:39.864 20:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:39.864 20:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:39.864 20:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:39.864 20:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:39.864 20:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:40.122 20:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:40.122 20:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:40.122 20:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:40.380 20:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTAxYTlmZDUyMzdmMTE4MmVkMDM3MmRhY2UzMTdiNDBhM2U3Nzg5YTdhYTMzMTg5pxnC4w==: --dhchap-ctrl-secret DHHC-1:03:NzcxNjg2YzZjNzIwYjQ1MDE5ZDc0ZWYxNzFjYjQ3MTRkNjMzNTcxYWM5NzY1YTg3OGFkYzQzZjcxNTMwNzMwMh0q/go=: 00:14:40.380 20:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZTAxYTlmZDUyMzdmMTE4MmVkMDM3MmRhY2UzMTdiNDBhM2U3Nzg5YTdhYTMzMTg5pxnC4w==: --dhchap-ctrl-secret DHHC-1:03:NzcxNjg2YzZjNzIwYjQ1MDE5ZDc0ZWYxNzFjYjQ3MTRkNjMzNTcxYWM5NzY1YTg3OGFkYzQzZjcxNTMwNzMwMh0q/go=: 00:14:41.313 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:41.314 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:41.314 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:41.314 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.314 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.314 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.314 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:41.314 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:41.314 20:55:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:41.572 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:14:41.572 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:41.572 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:41.572 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:41.572 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:41.572 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:41.572 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:41.572 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.572 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.572 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.572 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:41.572 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:41.572 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:42.138 00:14:42.138 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:42.139 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:42.139 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:42.397 20:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:42.397 20:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:42.397 20:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.397 20:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.397 20:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.397 20:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:42.397 { 00:14:42.397 "cntlid": 27, 00:14:42.397 "qid": 0, 00:14:42.397 "state": "enabled", 00:14:42.397 "thread": "nvmf_tgt_poll_group_000", 00:14:42.397 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:42.397 "listen_address": { 00:14:42.397 "trtype": "TCP", 00:14:42.397 "adrfam": "IPv4", 00:14:42.397 "traddr": "10.0.0.2", 00:14:42.397 
"trsvcid": "4420" 00:14:42.397 }, 00:14:42.397 "peer_address": { 00:14:42.397 "trtype": "TCP", 00:14:42.397 "adrfam": "IPv4", 00:14:42.397 "traddr": "10.0.0.1", 00:14:42.397 "trsvcid": "34450" 00:14:42.397 }, 00:14:42.397 "auth": { 00:14:42.397 "state": "completed", 00:14:42.397 "digest": "sha256", 00:14:42.397 "dhgroup": "ffdhe4096" 00:14:42.397 } 00:14:42.397 } 00:14:42.397 ]' 00:14:42.397 20:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:42.397 20:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:42.397 20:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:42.397 20:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:42.397 20:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:42.397 20:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:42.397 20:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:42.397 20:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:42.655 20:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWIxZDY5NzgyZDk4M2VlNzIzYzJiNmQzMjdjNDRkNWP7QUfO: --dhchap-ctrl-secret DHHC-1:02:MzE1OGRmZjA3MTUyMDYzZWM1YTU2MWJmN2JiMzlhYTE0MmQ4NjFlNmI2ODFiNmJkiwp7Tg==: 00:14:42.655 20:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OWIxZDY5NzgyZDk4M2VlNzIzYzJiNmQzMjdjNDRkNWP7QUfO: --dhchap-ctrl-secret DHHC-1:02:MzE1OGRmZjA3MTUyMDYzZWM1YTU2MWJmN2JiMzlhYTE0MmQ4NjFlNmI2ODFiNmJkiwp7Tg==: 00:14:43.588 20:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:43.588 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:43.588 20:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:43.588 20:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.588 20:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.588 20:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.588 20:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:43.588 20:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:43.588 20:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:44.155 20:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:14:44.155 20:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:44.155 20:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:44.155 20:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:44.155 20:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:44.155 20:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:44.155 20:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:44.155 20:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.155 20:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.155 20:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.155 20:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:44.155 20:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:44.155 20:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:44.413 00:14:44.413 20:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:44.413 20:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:14:44.413 20:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:44.671 20:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:44.671 20:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:44.671 20:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.671 20:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.671 20:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.671 20:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:44.671 { 00:14:44.671 "cntlid": 29, 00:14:44.671 "qid": 0, 00:14:44.671 "state": "enabled", 00:14:44.671 "thread": "nvmf_tgt_poll_group_000", 00:14:44.671 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:44.671 "listen_address": { 00:14:44.671 "trtype": "TCP", 00:14:44.671 "adrfam": "IPv4", 00:14:44.671 "traddr": "10.0.0.2", 00:14:44.671 "trsvcid": "4420" 00:14:44.671 }, 00:14:44.671 "peer_address": { 00:14:44.671 "trtype": "TCP", 00:14:44.671 "adrfam": "IPv4", 00:14:44.671 "traddr": "10.0.0.1", 00:14:44.671 "trsvcid": "41670" 00:14:44.671 }, 00:14:44.671 "auth": { 00:14:44.671 "state": "completed", 00:14:44.671 "digest": "sha256", 00:14:44.671 "dhgroup": "ffdhe4096" 00:14:44.671 } 00:14:44.671 } 00:14:44.671 ]' 00:14:44.671 20:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:44.929 20:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:44.929 20:55:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:44.929 20:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:44.929 20:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:44.929 20:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:44.929 20:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:44.929 20:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:45.187 20:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzIyYjE1YzkzMGNmNTlkNDZkMzdmOGRjNjA4YWM4NGI1ZTZiZjAxMDM1YjM0NmE1uNn0sA==: --dhchap-ctrl-secret DHHC-1:01:YTI4MTk4MzlhMWY4NDU4OWNkYWNiZDRkNmZjZTI0YmEagh6r: 00:14:45.187 20:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YzIyYjE1YzkzMGNmNTlkNDZkMzdmOGRjNjA4YWM4NGI1ZTZiZjAxMDM1YjM0NmE1uNn0sA==: --dhchap-ctrl-secret DHHC-1:01:YTI4MTk4MzlhMWY4NDU4OWNkYWNiZDRkNmZjZTI0YmEagh6r: 00:14:46.121 20:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:46.121 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:46.121 20:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:46.121 20:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.121 20:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.121 20:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.121 20:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:46.121 20:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:46.121 20:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:46.379 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:14:46.379 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:46.379 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:46.379 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:46.379 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:46.379 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:46.379 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:46.379 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.379 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.379 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.379 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:46.379 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:46.379 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:46.944 00:14:46.944 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:46.945 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:46.945 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:47.202 20:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:47.202 20:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:47.202 20:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.202 20:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:47.202 20:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.202 20:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:47.202 { 00:14:47.202 "cntlid": 31, 00:14:47.202 "qid": 0, 00:14:47.202 "state": "enabled", 00:14:47.202 "thread": "nvmf_tgt_poll_group_000", 00:14:47.202 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:47.202 "listen_address": { 00:14:47.202 "trtype": "TCP", 00:14:47.202 "adrfam": "IPv4", 00:14:47.202 "traddr": "10.0.0.2", 00:14:47.202 "trsvcid": "4420" 00:14:47.202 }, 00:14:47.202 "peer_address": { 00:14:47.202 "trtype": "TCP", 00:14:47.202 "adrfam": "IPv4", 00:14:47.202 "traddr": "10.0.0.1", 00:14:47.202 "trsvcid": "41686" 00:14:47.202 }, 00:14:47.202 "auth": { 00:14:47.202 "state": "completed", 00:14:47.202 "digest": "sha256", 00:14:47.202 "dhgroup": "ffdhe4096" 00:14:47.202 } 00:14:47.202 } 00:14:47.202 ]' 00:14:47.202 20:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:47.202 20:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:47.202 20:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:47.202 20:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:47.202 20:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:47.461 20:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:47.461 20:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:47.461 20:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:47.718 20:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDIyZjJlNzY4OWNhNjQwZjVmMzk2Y2FiZmIyMzI5ZTQwNjgxZTA4ZmE2OGQxN2ZiOTQ2OTQyMDNiYjlhZGE2MgIZE5s=: 00:14:47.719 20:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MDIyZjJlNzY4OWNhNjQwZjVmMzk2Y2FiZmIyMzI5ZTQwNjgxZTA4ZmE2OGQxN2ZiOTQ2OTQyMDNiYjlhZGE2MgIZE5s=: 00:14:48.651 20:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:48.651 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:48.651 20:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:48.651 20:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.651 20:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.651 20:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.651 20:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:48.651 20:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:48.651 20:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:48.651 20:55:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:48.909 20:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:14:48.909 20:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:48.909 20:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:48.909 20:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:48.909 20:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:48.909 20:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:48.909 20:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:48.909 20:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.909 20:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.909 20:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.909 20:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:48.909 20:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:48.909 20:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:49.476 00:14:49.476 20:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:49.476 20:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:49.476 20:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:49.734 20:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:49.734 20:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:49.734 20:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.734 20:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.992 20:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.992 20:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:49.992 { 00:14:49.992 "cntlid": 33, 00:14:49.992 "qid": 0, 00:14:49.992 "state": "enabled", 00:14:49.992 "thread": "nvmf_tgt_poll_group_000", 00:14:49.992 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:49.992 "listen_address": { 00:14:49.992 "trtype": "TCP", 00:14:49.992 "adrfam": "IPv4", 00:14:49.992 "traddr": "10.0.0.2", 00:14:49.992 
"trsvcid": "4420" 00:14:49.992 }, 00:14:49.992 "peer_address": { 00:14:49.992 "trtype": "TCP", 00:14:49.992 "adrfam": "IPv4", 00:14:49.992 "traddr": "10.0.0.1", 00:14:49.992 "trsvcid": "41722" 00:14:49.992 }, 00:14:49.992 "auth": { 00:14:49.992 "state": "completed", 00:14:49.992 "digest": "sha256", 00:14:49.992 "dhgroup": "ffdhe6144" 00:14:49.992 } 00:14:49.992 } 00:14:49.992 ]' 00:14:49.992 20:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:49.992 20:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:49.992 20:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:49.992 20:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:49.992 20:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:49.992 20:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:49.992 20:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:49.992 20:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:50.250 20:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTAxYTlmZDUyMzdmMTE4MmVkMDM3MmRhY2UzMTdiNDBhM2U3Nzg5YTdhYTMzMTg5pxnC4w==: --dhchap-ctrl-secret DHHC-1:03:NzcxNjg2YzZjNzIwYjQ1MDE5ZDc0ZWYxNzFjYjQ3MTRkNjMzNTcxYWM5NzY1YTg3OGFkYzQzZjcxNTMwNzMwMh0q/go=: 00:14:50.250 20:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZTAxYTlmZDUyMzdmMTE4MmVkMDM3MmRhY2UzMTdiNDBhM2U3Nzg5YTdhYTMzMTg5pxnC4w==: --dhchap-ctrl-secret DHHC-1:03:NzcxNjg2YzZjNzIwYjQ1MDE5ZDc0ZWYxNzFjYjQ3MTRkNjMzNTcxYWM5NzY1YTg3OGFkYzQzZjcxNTMwNzMwMh0q/go=: 00:14:51.183 20:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:51.183 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:51.183 20:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:51.183 20:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.183 20:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.183 20:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.183 20:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:51.183 20:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:51.183 20:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:51.748 20:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:14:51.748 20:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:51.748 20:55:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:51.748 20:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:51.748 20:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:51.748 20:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:51.748 20:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:51.748 20:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.748 20:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.749 20:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.749 20:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:51.749 20:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:51.749 20:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:52.315 00:14:52.315 20:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:52.315 20:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:52.315 20:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:52.315 20:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:52.315 20:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:52.315 20:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.315 20:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.572 20:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.572 20:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:52.572 { 00:14:52.572 "cntlid": 35, 00:14:52.572 "qid": 0, 00:14:52.572 "state": "enabled", 00:14:52.572 "thread": "nvmf_tgt_poll_group_000", 00:14:52.572 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:52.572 "listen_address": { 00:14:52.572 "trtype": "TCP", 00:14:52.572 "adrfam": "IPv4", 00:14:52.572 "traddr": "10.0.0.2", 00:14:52.572 "trsvcid": "4420" 00:14:52.572 }, 00:14:52.572 "peer_address": { 00:14:52.572 "trtype": "TCP", 00:14:52.572 "adrfam": "IPv4", 00:14:52.572 "traddr": "10.0.0.1", 00:14:52.572 "trsvcid": "41754" 00:14:52.572 }, 00:14:52.572 "auth": { 00:14:52.572 "state": "completed", 00:14:52.572 "digest": "sha256", 00:14:52.572 "dhgroup": "ffdhe6144" 00:14:52.572 } 00:14:52.572 } 00:14:52.572 ]' 00:14:52.572 20:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:52.572 20:55:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:52.572 20:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:52.572 20:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:52.572 20:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:52.572 20:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:52.572 20:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:52.572 20:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:52.829 20:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWIxZDY5NzgyZDk4M2VlNzIzYzJiNmQzMjdjNDRkNWP7QUfO: --dhchap-ctrl-secret DHHC-1:02:MzE1OGRmZjA3MTUyMDYzZWM1YTU2MWJmN2JiMzlhYTE0MmQ4NjFlNmI2ODFiNmJkiwp7Tg==: 00:14:52.829 20:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OWIxZDY5NzgyZDk4M2VlNzIzYzJiNmQzMjdjNDRkNWP7QUfO: --dhchap-ctrl-secret DHHC-1:02:MzE1OGRmZjA3MTUyMDYzZWM1YTU2MWJmN2JiMzlhYTE0MmQ4NjFlNmI2ODFiNmJkiwp7Tg==: 00:14:53.762 20:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:53.762 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:53.762 20:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:53.762 20:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.762 20:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.762 20:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.762 20:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:53.762 20:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:53.762 20:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:54.326 20:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:14:54.326 20:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:54.326 20:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:54.326 20:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:54.326 20:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:54.326 20:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:54.326 20:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:14:54.326 20:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.326 20:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.326 20:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.326 20:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:54.326 20:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:54.326 20:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:54.892 00:14:54.892 20:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:54.892 20:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:54.892 20:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:55.150 20:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:55.150 20:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:55.150 20:55:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.150 20:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.150 20:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.150 20:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:55.150 { 00:14:55.150 "cntlid": 37, 00:14:55.150 "qid": 0, 00:14:55.150 "state": "enabled", 00:14:55.150 "thread": "nvmf_tgt_poll_group_000", 00:14:55.150 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:55.150 "listen_address": { 00:14:55.150 "trtype": "TCP", 00:14:55.150 "adrfam": "IPv4", 00:14:55.150 "traddr": "10.0.0.2", 00:14:55.150 "trsvcid": "4420" 00:14:55.150 }, 00:14:55.150 "peer_address": { 00:14:55.150 "trtype": "TCP", 00:14:55.150 "adrfam": "IPv4", 00:14:55.150 "traddr": "10.0.0.1", 00:14:55.150 "trsvcid": "49446" 00:14:55.150 }, 00:14:55.150 "auth": { 00:14:55.150 "state": "completed", 00:14:55.150 "digest": "sha256", 00:14:55.150 "dhgroup": "ffdhe6144" 00:14:55.150 } 00:14:55.150 } 00:14:55.150 ]' 00:14:55.150 20:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:55.150 20:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:55.150 20:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:55.150 20:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:55.150 20:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:55.150 20:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:55.150 20:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:55.150 20:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:55.408 20:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzIyYjE1YzkzMGNmNTlkNDZkMzdmOGRjNjA4YWM4NGI1ZTZiZjAxMDM1YjM0NmE1uNn0sA==: --dhchap-ctrl-secret DHHC-1:01:YTI4MTk4MzlhMWY4NDU4OWNkYWNiZDRkNmZjZTI0YmEagh6r: 00:14:55.408 20:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YzIyYjE1YzkzMGNmNTlkNDZkMzdmOGRjNjA4YWM4NGI1ZTZiZjAxMDM1YjM0NmE1uNn0sA==: --dhchap-ctrl-secret DHHC-1:01:YTI4MTk4MzlhMWY4NDU4OWNkYWNiZDRkNmZjZTI0YmEagh6r: 00:14:56.341 20:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:56.599 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:56.599 20:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:56.599 20:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.599 20:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.599 20:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.599 20:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:56.599 20:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:56.599 20:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:56.857 20:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:14:56.857 20:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:56.857 20:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:56.857 20:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:56.857 20:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:56.857 20:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:56.857 20:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:56.857 20:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.858 20:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.858 20:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.858 20:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:56.858 20:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:56.858 20:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:57.423 00:14:57.423 20:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:57.423 20:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:57.423 20:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:57.680 20:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:57.680 20:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:57.680 20:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.680 20:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.680 20:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.680 20:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:57.680 { 00:14:57.680 "cntlid": 39, 00:14:57.680 "qid": 0, 00:14:57.680 "state": "enabled", 00:14:57.680 "thread": "nvmf_tgt_poll_group_000", 00:14:57.680 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:57.680 "listen_address": { 00:14:57.680 "trtype": "TCP", 00:14:57.680 "adrfam": 
"IPv4", 00:14:57.680 "traddr": "10.0.0.2", 00:14:57.680 "trsvcid": "4420" 00:14:57.680 }, 00:14:57.680 "peer_address": { 00:14:57.680 "trtype": "TCP", 00:14:57.680 "adrfam": "IPv4", 00:14:57.681 "traddr": "10.0.0.1", 00:14:57.681 "trsvcid": "49472" 00:14:57.681 }, 00:14:57.681 "auth": { 00:14:57.681 "state": "completed", 00:14:57.681 "digest": "sha256", 00:14:57.681 "dhgroup": "ffdhe6144" 00:14:57.681 } 00:14:57.681 } 00:14:57.681 ]' 00:14:57.681 20:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:57.681 20:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:57.681 20:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:57.938 20:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:57.938 20:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:57.938 20:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:57.938 20:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:57.938 20:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:58.196 20:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDIyZjJlNzY4OWNhNjQwZjVmMzk2Y2FiZmIyMzI5ZTQwNjgxZTA4ZmE2OGQxN2ZiOTQ2OTQyMDNiYjlhZGE2MgIZE5s=: 00:14:58.196 20:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MDIyZjJlNzY4OWNhNjQwZjVmMzk2Y2FiZmIyMzI5ZTQwNjgxZTA4ZmE2OGQxN2ZiOTQ2OTQyMDNiYjlhZGE2MgIZE5s=: 00:14:59.128 20:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:59.128 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:59.128 20:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:59.128 20:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.128 20:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.128 20:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.128 20:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:59.128 20:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:59.128 20:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:59.128 20:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:59.386 20:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:14:59.386 20:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:59.386 20:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:59.386 
20:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:59.386 20:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:59.386 20:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:59.386 20:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:59.386 20:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.386 20:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.386 20:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.386 20:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:59.386 20:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:59.386 20:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:00.319 00:15:00.319 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:00.319 20:55:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:00.319 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:00.577 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:00.577 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:00.577 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.577 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.577 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.577 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:00.577 { 00:15:00.577 "cntlid": 41, 00:15:00.577 "qid": 0, 00:15:00.577 "state": "enabled", 00:15:00.577 "thread": "nvmf_tgt_poll_group_000", 00:15:00.577 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:00.577 "listen_address": { 00:15:00.577 "trtype": "TCP", 00:15:00.577 "adrfam": "IPv4", 00:15:00.577 "traddr": "10.0.0.2", 00:15:00.577 "trsvcid": "4420" 00:15:00.577 }, 00:15:00.577 "peer_address": { 00:15:00.577 "trtype": "TCP", 00:15:00.577 "adrfam": "IPv4", 00:15:00.577 "traddr": "10.0.0.1", 00:15:00.577 "trsvcid": "49504" 00:15:00.577 }, 00:15:00.577 "auth": { 00:15:00.577 "state": "completed", 00:15:00.577 "digest": "sha256", 00:15:00.577 "dhgroup": "ffdhe8192" 00:15:00.577 } 00:15:00.577 } 00:15:00.577 ]' 00:15:00.577 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:00.577 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:15:00.577 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:00.577 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:00.577 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:00.577 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:00.577 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:00.577 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:00.837 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTAxYTlmZDUyMzdmMTE4MmVkMDM3MmRhY2UzMTdiNDBhM2U3Nzg5YTdhYTMzMTg5pxnC4w==: --dhchap-ctrl-secret DHHC-1:03:NzcxNjg2YzZjNzIwYjQ1MDE5ZDc0ZWYxNzFjYjQ3MTRkNjMzNTcxYWM5NzY1YTg3OGFkYzQzZjcxNTMwNzMwMh0q/go=: 00:15:00.837 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZTAxYTlmZDUyMzdmMTE4MmVkMDM3MmRhY2UzMTdiNDBhM2U3Nzg5YTdhYTMzMTg5pxnC4w==: --dhchap-ctrl-secret DHHC-1:03:NzcxNjg2YzZjNzIwYjQ1MDE5ZDc0ZWYxNzFjYjQ3MTRkNjMzNTcxYWM5NzY1YTg3OGFkYzQzZjcxNTMwNzMwMh0q/go=: 00:15:02.211 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:02.211 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:02.211 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:02.211 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.211 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.211 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.211 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:02.211 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:02.211 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:02.211 20:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:15:02.211 20:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:02.211 20:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:02.211 20:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:02.211 20:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:02.211 20:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:02.211 20:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:15:02.211 20:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.211 20:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.211 20:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.211 20:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:02.211 20:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:02.211 20:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:03.145 00:15:03.145 20:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:03.145 20:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:03.145 20:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:03.403 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:03.403 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:03.403 20:55:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.403 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.403 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.403 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:03.403 { 00:15:03.403 "cntlid": 43, 00:15:03.403 "qid": 0, 00:15:03.403 "state": "enabled", 00:15:03.403 "thread": "nvmf_tgt_poll_group_000", 00:15:03.403 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:03.403 "listen_address": { 00:15:03.403 "trtype": "TCP", 00:15:03.403 "adrfam": "IPv4", 00:15:03.403 "traddr": "10.0.0.2", 00:15:03.403 "trsvcid": "4420" 00:15:03.403 }, 00:15:03.403 "peer_address": { 00:15:03.403 "trtype": "TCP", 00:15:03.403 "adrfam": "IPv4", 00:15:03.403 "traddr": "10.0.0.1", 00:15:03.403 "trsvcid": "35116" 00:15:03.403 }, 00:15:03.403 "auth": { 00:15:03.403 "state": "completed", 00:15:03.403 "digest": "sha256", 00:15:03.403 "dhgroup": "ffdhe8192" 00:15:03.403 } 00:15:03.403 } 00:15:03.403 ]' 00:15:03.403 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:03.403 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:03.403 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:03.403 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:03.403 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:03.660 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:03.660 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:03.660 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:03.918 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWIxZDY5NzgyZDk4M2VlNzIzYzJiNmQzMjdjNDRkNWP7QUfO: --dhchap-ctrl-secret DHHC-1:02:MzE1OGRmZjA3MTUyMDYzZWM1YTU2MWJmN2JiMzlhYTE0MmQ4NjFlNmI2ODFiNmJkiwp7Tg==: 00:15:03.918 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OWIxZDY5NzgyZDk4M2VlNzIzYzJiNmQzMjdjNDRkNWP7QUfO: --dhchap-ctrl-secret DHHC-1:02:MzE1OGRmZjA3MTUyMDYzZWM1YTU2MWJmN2JiMzlhYTE0MmQ4NjFlNmI2ODFiNmJkiwp7Tg==: 00:15:04.852 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:04.852 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:04.852 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:04.852 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.852 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.852 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.852 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:04.852 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:04.852 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:05.110 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:15:05.110 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:05.110 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:05.110 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:05.110 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:05.110 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:05.110 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:05.110 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.110 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.110 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.110 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:05.110 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:05.110 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:06.107 00:15:06.107 20:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:06.107 20:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:06.107 20:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:06.365 20:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:06.365 20:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:06.365 20:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.365 20:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.365 20:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.365 20:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:06.365 { 00:15:06.365 "cntlid": 45, 00:15:06.365 "qid": 0, 00:15:06.365 "state": "enabled", 00:15:06.365 "thread": "nvmf_tgt_poll_group_000", 00:15:06.365 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:06.365 
"listen_address": { 00:15:06.365 "trtype": "TCP", 00:15:06.365 "adrfam": "IPv4", 00:15:06.365 "traddr": "10.0.0.2", 00:15:06.365 "trsvcid": "4420" 00:15:06.365 }, 00:15:06.365 "peer_address": { 00:15:06.365 "trtype": "TCP", 00:15:06.365 "adrfam": "IPv4", 00:15:06.365 "traddr": "10.0.0.1", 00:15:06.365 "trsvcid": "35144" 00:15:06.365 }, 00:15:06.365 "auth": { 00:15:06.365 "state": "completed", 00:15:06.365 "digest": "sha256", 00:15:06.365 "dhgroup": "ffdhe8192" 00:15:06.365 } 00:15:06.365 } 00:15:06.365 ]' 00:15:06.365 20:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:06.365 20:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:06.365 20:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:06.365 20:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:06.365 20:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:06.365 20:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:06.365 20:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:06.365 20:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:06.931 20:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzIyYjE1YzkzMGNmNTlkNDZkMzdmOGRjNjA4YWM4NGI1ZTZiZjAxMDM1YjM0NmE1uNn0sA==: --dhchap-ctrl-secret DHHC-1:01:YTI4MTk4MzlhMWY4NDU4OWNkYWNiZDRkNmZjZTI0YmEagh6r: 00:15:06.931 20:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YzIyYjE1YzkzMGNmNTlkNDZkMzdmOGRjNjA4YWM4NGI1ZTZiZjAxMDM1YjM0NmE1uNn0sA==: --dhchap-ctrl-secret DHHC-1:01:YTI4MTk4MzlhMWY4NDU4OWNkYWNiZDRkNmZjZTI0YmEagh6r: 00:15:07.865 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:07.865 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:07.865 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:07.865 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.865 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.865 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.865 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:07.865 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:07.865 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:08.124 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:15:08.124 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:08.124 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:15:08.124 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:08.124 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:08.124 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:08.124 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:08.124 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.124 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.124 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.124 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:08.124 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:08.124 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:09.059 00:15:09.059 20:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:09.059 20:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:09.059 20:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:09.059 20:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:09.059 20:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:09.059 20:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.059 20:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.059 20:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.059 20:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:09.059 { 00:15:09.059 "cntlid": 47, 00:15:09.059 "qid": 0, 00:15:09.059 "state": "enabled", 00:15:09.059 "thread": "nvmf_tgt_poll_group_000", 00:15:09.059 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:09.059 "listen_address": { 00:15:09.059 "trtype": "TCP", 00:15:09.059 "adrfam": "IPv4", 00:15:09.059 "traddr": "10.0.0.2", 00:15:09.059 "trsvcid": "4420" 00:15:09.059 }, 00:15:09.059 "peer_address": { 00:15:09.059 "trtype": "TCP", 00:15:09.059 "adrfam": "IPv4", 00:15:09.059 "traddr": "10.0.0.1", 00:15:09.059 "trsvcid": "35168" 00:15:09.059 }, 00:15:09.059 "auth": { 00:15:09.059 "state": "completed", 00:15:09.059 "digest": "sha256", 00:15:09.059 "dhgroup": "ffdhe8192" 00:15:09.059 } 00:15:09.059 } 00:15:09.059 ]' 00:15:09.059 20:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:09.059 20:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:09.059 20:55:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:09.318 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:09.318 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:09.318 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:09.318 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:09.318 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:09.576 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDIyZjJlNzY4OWNhNjQwZjVmMzk2Y2FiZmIyMzI5ZTQwNjgxZTA4ZmE2OGQxN2ZiOTQ2OTQyMDNiYjlhZGE2MgIZE5s=: 00:15:09.576 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MDIyZjJlNzY4OWNhNjQwZjVmMzk2Y2FiZmIyMzI5ZTQwNjgxZTA4ZmE2OGQxN2ZiOTQ2OTQyMDNiYjlhZGE2MgIZE5s=: 00:15:10.510 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:10.510 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:10.510 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:10.510 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:10.510 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.510 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.510 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:10.510 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:10.510 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:10.510 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:10.510 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:11.076 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:15:11.076 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:11.076 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:11.076 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:11.076 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:11.076 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:11.076 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:11.076 
20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.076 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.076 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.076 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:11.076 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:11.076 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:11.334 00:15:11.334 20:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:11.335 20:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:11.335 20:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:11.593 20:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:11.593 20:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:11.593 20:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.593 20:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.593 20:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.593 20:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:11.593 { 00:15:11.593 "cntlid": 49, 00:15:11.593 "qid": 0, 00:15:11.593 "state": "enabled", 00:15:11.593 "thread": "nvmf_tgt_poll_group_000", 00:15:11.593 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:11.593 "listen_address": { 00:15:11.593 "trtype": "TCP", 00:15:11.593 "adrfam": "IPv4", 00:15:11.593 "traddr": "10.0.0.2", 00:15:11.593 "trsvcid": "4420" 00:15:11.593 }, 00:15:11.593 "peer_address": { 00:15:11.593 "trtype": "TCP", 00:15:11.593 "adrfam": "IPv4", 00:15:11.593 "traddr": "10.0.0.1", 00:15:11.593 "trsvcid": "35202" 00:15:11.593 }, 00:15:11.593 "auth": { 00:15:11.593 "state": "completed", 00:15:11.593 "digest": "sha384", 00:15:11.593 "dhgroup": "null" 00:15:11.593 } 00:15:11.593 } 00:15:11.593 ]' 00:15:11.593 20:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:11.593 20:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:11.593 20:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:11.593 20:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:11.593 20:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:11.593 20:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:11.593 20:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:15:11.593 20:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:11.853 20:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTAxYTlmZDUyMzdmMTE4MmVkMDM3MmRhY2UzMTdiNDBhM2U3Nzg5YTdhYTMzMTg5pxnC4w==: --dhchap-ctrl-secret DHHC-1:03:NzcxNjg2YzZjNzIwYjQ1MDE5ZDc0ZWYxNzFjYjQ3MTRkNjMzNTcxYWM5NzY1YTg3OGFkYzQzZjcxNTMwNzMwMh0q/go=: 00:15:11.853 20:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZTAxYTlmZDUyMzdmMTE4MmVkMDM3MmRhY2UzMTdiNDBhM2U3Nzg5YTdhYTMzMTg5pxnC4w==: --dhchap-ctrl-secret DHHC-1:03:NzcxNjg2YzZjNzIwYjQ1MDE5ZDc0ZWYxNzFjYjQ3MTRkNjMzNTcxYWM5NzY1YTg3OGFkYzQzZjcxNTMwNzMwMh0q/go=: 00:15:13.227 20:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:13.227 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:13.227 20:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:13.227 20:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.227 20:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.227 20:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.227 20:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:13.227 20:56:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:13.227 20:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:13.227 20:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:15:13.227 20:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:13.227 20:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:13.227 20:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:13.227 20:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:13.227 20:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:13.227 20:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:13.227 20:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.227 20:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.227 20:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.227 20:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:13.227 20:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:13.227 20:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:13.793 00:15:13.793 20:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:13.793 20:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:13.793 20:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:13.793 20:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:13.793 20:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:13.793 20:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.793 20:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.793 20:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.793 20:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:13.793 { 00:15:13.793 "cntlid": 51, 00:15:13.793 "qid": 0, 00:15:13.793 "state": "enabled", 00:15:13.793 "thread": "nvmf_tgt_poll_group_000", 00:15:13.793 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:13.793 "listen_address": { 00:15:13.793 "trtype": "TCP", 00:15:13.793 "adrfam": "IPv4", 00:15:13.793 "traddr": "10.0.0.2", 00:15:13.793 "trsvcid": "4420" 00:15:13.793 }, 00:15:13.793 "peer_address": { 00:15:13.793 "trtype": "TCP", 00:15:13.793 "adrfam": "IPv4", 00:15:13.793 "traddr": "10.0.0.1", 00:15:13.793 "trsvcid": "42818" 00:15:13.793 }, 00:15:13.793 "auth": { 00:15:13.793 "state": "completed", 00:15:13.793 "digest": "sha384", 00:15:13.793 "dhgroup": "null" 00:15:13.793 } 00:15:13.793 } 00:15:13.793 ]' 00:15:14.050 20:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:14.050 20:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:14.050 20:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:14.050 20:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:14.050 20:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:14.050 20:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:14.050 20:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:14.050 20:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:14.309 20:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWIxZDY5NzgyZDk4M2VlNzIzYzJiNmQzMjdjNDRkNWP7QUfO: --dhchap-ctrl-secret DHHC-1:02:MzE1OGRmZjA3MTUyMDYzZWM1YTU2MWJmN2JiMzlhYTE0MmQ4NjFlNmI2ODFiNmJkiwp7Tg==: 00:15:14.309 20:56:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OWIxZDY5NzgyZDk4M2VlNzIzYzJiNmQzMjdjNDRkNWP7QUfO: --dhchap-ctrl-secret DHHC-1:02:MzE1OGRmZjA3MTUyMDYzZWM1YTU2MWJmN2JiMzlhYTE0MmQ4NjFlNmI2ODFiNmJkiwp7Tg==: 00:15:15.243 20:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:15.243 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:15.243 20:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:15.243 20:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.243 20:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.243 20:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.243 20:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:15.243 20:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:15.243 20:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:15.811 20:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:15:15.811 20:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:15:15.811 20:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:15.811 20:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:15.811 20:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:15.811 20:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:15.811 20:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:15.811 20:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.811 20:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.811 20:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.811 20:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:15.811 20:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:15.811 20:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:16.069 00:15:16.069 20:56:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:16.069 20:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:16.069 20:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:16.327 20:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:16.327 20:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:16.327 20:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.327 20:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.327 20:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.327 20:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:16.327 { 00:15:16.327 "cntlid": 53, 00:15:16.327 "qid": 0, 00:15:16.327 "state": "enabled", 00:15:16.327 "thread": "nvmf_tgt_poll_group_000", 00:15:16.327 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:16.327 "listen_address": { 00:15:16.327 "trtype": "TCP", 00:15:16.327 "adrfam": "IPv4", 00:15:16.327 "traddr": "10.0.0.2", 00:15:16.327 "trsvcid": "4420" 00:15:16.327 }, 00:15:16.327 "peer_address": { 00:15:16.327 "trtype": "TCP", 00:15:16.327 "adrfam": "IPv4", 00:15:16.327 "traddr": "10.0.0.1", 00:15:16.327 "trsvcid": "42852" 00:15:16.327 }, 00:15:16.327 "auth": { 00:15:16.327 "state": "completed", 00:15:16.327 "digest": "sha384", 00:15:16.327 "dhgroup": "null" 00:15:16.327 } 00:15:16.327 } 00:15:16.327 ]' 00:15:16.327 20:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:15:16.327 20:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:16.327 20:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:16.327 20:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:16.327 20:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:16.327 20:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:16.327 20:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:16.327 20:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:16.892 20:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzIyYjE1YzkzMGNmNTlkNDZkMzdmOGRjNjA4YWM4NGI1ZTZiZjAxMDM1YjM0NmE1uNn0sA==: --dhchap-ctrl-secret DHHC-1:01:YTI4MTk4MzlhMWY4NDU4OWNkYWNiZDRkNmZjZTI0YmEagh6r: 00:15:16.892 20:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YzIyYjE1YzkzMGNmNTlkNDZkMzdmOGRjNjA4YWM4NGI1ZTZiZjAxMDM1YjM0NmE1uNn0sA==: --dhchap-ctrl-secret DHHC-1:01:YTI4MTk4MzlhMWY4NDU4OWNkYWNiZDRkNmZjZTI0YmEagh6r: 00:15:17.824 20:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:17.824 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:17.824 20:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:17.824 20:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.824 20:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.824 20:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.824 20:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:17.824 20:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:17.824 20:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:18.081 20:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:15:18.081 20:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:18.081 20:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:18.081 20:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:18.081 20:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:18.081 20:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:18.081 20:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:18.081 
20:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.081 20:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.081 20:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.081 20:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:18.081 20:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:18.081 20:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:18.338 00:15:18.338 20:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:18.338 20:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:18.338 20:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:18.595 20:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:18.595 20:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:18.595 20:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.595 20:56:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.595 20:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.595 20:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:18.595 { 00:15:18.595 "cntlid": 55, 00:15:18.595 "qid": 0, 00:15:18.595 "state": "enabled", 00:15:18.595 "thread": "nvmf_tgt_poll_group_000", 00:15:18.595 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:18.595 "listen_address": { 00:15:18.595 "trtype": "TCP", 00:15:18.595 "adrfam": "IPv4", 00:15:18.595 "traddr": "10.0.0.2", 00:15:18.595 "trsvcid": "4420" 00:15:18.595 }, 00:15:18.595 "peer_address": { 00:15:18.595 "trtype": "TCP", 00:15:18.595 "adrfam": "IPv4", 00:15:18.595 "traddr": "10.0.0.1", 00:15:18.595 "trsvcid": "42882" 00:15:18.595 }, 00:15:18.595 "auth": { 00:15:18.595 "state": "completed", 00:15:18.595 "digest": "sha384", 00:15:18.595 "dhgroup": "null" 00:15:18.595 } 00:15:18.595 } 00:15:18.595 ]' 00:15:18.595 20:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:18.595 20:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:18.595 20:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:18.595 20:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:18.595 20:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:18.892 20:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:18.892 20:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:18.892 20:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:19.150 20:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDIyZjJlNzY4OWNhNjQwZjVmMzk2Y2FiZmIyMzI5ZTQwNjgxZTA4ZmE2OGQxN2ZiOTQ2OTQyMDNiYjlhZGE2MgIZE5s=: 00:15:19.150 20:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MDIyZjJlNzY4OWNhNjQwZjVmMzk2Y2FiZmIyMzI5ZTQwNjgxZTA4ZmE2OGQxN2ZiOTQ2OTQyMDNiYjlhZGE2MgIZE5s=: 00:15:20.080 20:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:20.080 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:20.080 20:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:20.080 20:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.080 20:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.080 20:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.080 20:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:20.080 20:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:20.080 20:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:20.080 20:56:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:20.337 20:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:15:20.337 20:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:20.337 20:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:20.337 20:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:20.337 20:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:20.337 20:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:20.337 20:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:20.337 20:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.337 20:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.337 20:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.337 20:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:20.337 20:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:20.337 20:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:20.595 00:15:20.595 20:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:20.595 20:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:20.595 20:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:20.852 20:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:20.852 20:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:20.852 20:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.852 20:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.852 20:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.852 20:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:20.852 { 00:15:20.852 "cntlid": 57, 00:15:20.852 "qid": 0, 00:15:20.852 "state": "enabled", 00:15:20.852 "thread": "nvmf_tgt_poll_group_000", 00:15:20.852 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:20.852 "listen_address": { 00:15:20.852 "trtype": "TCP", 00:15:20.852 "adrfam": "IPv4", 00:15:20.852 "traddr": "10.0.0.2", 00:15:20.852 
"trsvcid": "4420" 00:15:20.852 }, 00:15:20.852 "peer_address": { 00:15:20.852 "trtype": "TCP", 00:15:20.852 "adrfam": "IPv4", 00:15:20.852 "traddr": "10.0.0.1", 00:15:20.852 "trsvcid": "42908" 00:15:20.852 }, 00:15:20.852 "auth": { 00:15:20.852 "state": "completed", 00:15:20.852 "digest": "sha384", 00:15:20.852 "dhgroup": "ffdhe2048" 00:15:20.852 } 00:15:20.852 } 00:15:20.852 ]' 00:15:20.852 20:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:20.852 20:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:20.852 20:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:21.111 20:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:21.111 20:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:21.111 20:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:21.111 20:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:21.111 20:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:21.369 20:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTAxYTlmZDUyMzdmMTE4MmVkMDM3MmRhY2UzMTdiNDBhM2U3Nzg5YTdhYTMzMTg5pxnC4w==: --dhchap-ctrl-secret DHHC-1:03:NzcxNjg2YzZjNzIwYjQ1MDE5ZDc0ZWYxNzFjYjQ3MTRkNjMzNTcxYWM5NzY1YTg3OGFkYzQzZjcxNTMwNzMwMh0q/go=: 00:15:21.369 20:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZTAxYTlmZDUyMzdmMTE4MmVkMDM3MmRhY2UzMTdiNDBhM2U3Nzg5YTdhYTMzMTg5pxnC4w==: --dhchap-ctrl-secret DHHC-1:03:NzcxNjg2YzZjNzIwYjQ1MDE5ZDc0ZWYxNzFjYjQ3MTRkNjMzNTcxYWM5NzY1YTg3OGFkYzQzZjcxNTMwNzMwMh0q/go=: 00:15:22.302 20:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:22.302 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:22.302 20:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:22.302 20:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.302 20:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.302 20:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.302 20:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:22.302 20:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:22.302 20:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:22.560 20:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:15:22.560 20:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:22.560 20:56:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:22.560 20:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:22.560 20:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:22.560 20:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:22.560 20:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:22.560 20:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.560 20:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.560 20:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.560 20:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:22.560 20:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:22.560 20:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:23.125 00:15:23.125 20:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:23.125 20:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:23.125 20:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:23.384 20:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:23.384 20:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:23.384 20:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.384 20:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.384 20:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.384 20:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:23.384 { 00:15:23.384 "cntlid": 59, 00:15:23.384 "qid": 0, 00:15:23.384 "state": "enabled", 00:15:23.384 "thread": "nvmf_tgt_poll_group_000", 00:15:23.384 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:23.384 "listen_address": { 00:15:23.384 "trtype": "TCP", 00:15:23.384 "adrfam": "IPv4", 00:15:23.384 "traddr": "10.0.0.2", 00:15:23.384 "trsvcid": "4420" 00:15:23.384 }, 00:15:23.384 "peer_address": { 00:15:23.384 "trtype": "TCP", 00:15:23.384 "adrfam": "IPv4", 00:15:23.384 "traddr": "10.0.0.1", 00:15:23.384 "trsvcid": "36496" 00:15:23.384 }, 00:15:23.384 "auth": { 00:15:23.384 "state": "completed", 00:15:23.384 "digest": "sha384", 00:15:23.384 "dhgroup": "ffdhe2048" 00:15:23.384 } 00:15:23.384 } 00:15:23.384 ]' 00:15:23.384 20:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:23.384 20:56:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:23.384 20:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:23.384 20:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:23.384 20:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:23.384 20:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:23.384 20:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:23.384 20:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:23.643 20:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWIxZDY5NzgyZDk4M2VlNzIzYzJiNmQzMjdjNDRkNWP7QUfO: --dhchap-ctrl-secret DHHC-1:02:MzE1OGRmZjA3MTUyMDYzZWM1YTU2MWJmN2JiMzlhYTE0MmQ4NjFlNmI2ODFiNmJkiwp7Tg==: 00:15:23.643 20:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OWIxZDY5NzgyZDk4M2VlNzIzYzJiNmQzMjdjNDRkNWP7QUfO: --dhchap-ctrl-secret DHHC-1:02:MzE1OGRmZjA3MTUyMDYzZWM1YTU2MWJmN2JiMzlhYTE0MmQ4NjFlNmI2ODFiNmJkiwp7Tg==: 00:15:24.575 20:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:24.575 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:24.575 20:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:24.575 20:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.575 20:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.576 20:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.576 20:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:24.576 20:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:24.576 20:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:25.141 20:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:15:25.141 20:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:25.141 20:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:25.141 20:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:25.141 20:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:25.141 20:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:25.141 20:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:15:25.141 20:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.141 20:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.141 20:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.141 20:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:25.141 20:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:25.141 20:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:25.399 00:15:25.399 20:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:25.399 20:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:25.399 20:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:25.657 20:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:25.657 20:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:25.657 20:56:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.657 20:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.657 20:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.657 20:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:25.657 { 00:15:25.657 "cntlid": 61, 00:15:25.657 "qid": 0, 00:15:25.657 "state": "enabled", 00:15:25.657 "thread": "nvmf_tgt_poll_group_000", 00:15:25.657 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:25.657 "listen_address": { 00:15:25.657 "trtype": "TCP", 00:15:25.657 "adrfam": "IPv4", 00:15:25.657 "traddr": "10.0.0.2", 00:15:25.657 "trsvcid": "4420" 00:15:25.657 }, 00:15:25.657 "peer_address": { 00:15:25.657 "trtype": "TCP", 00:15:25.657 "adrfam": "IPv4", 00:15:25.657 "traddr": "10.0.0.1", 00:15:25.657 "trsvcid": "36522" 00:15:25.657 }, 00:15:25.657 "auth": { 00:15:25.657 "state": "completed", 00:15:25.657 "digest": "sha384", 00:15:25.657 "dhgroup": "ffdhe2048" 00:15:25.657 } 00:15:25.657 } 00:15:25.657 ]' 00:15:25.657 20:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:25.657 20:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:25.657 20:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:25.657 20:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:25.657 20:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:25.916 20:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:25.916 20:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:25.916 20:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:26.174 20:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzIyYjE1YzkzMGNmNTlkNDZkMzdmOGRjNjA4YWM4NGI1ZTZiZjAxMDM1YjM0NmE1uNn0sA==: --dhchap-ctrl-secret DHHC-1:01:YTI4MTk4MzlhMWY4NDU4OWNkYWNiZDRkNmZjZTI0YmEagh6r: 00:15:26.174 20:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YzIyYjE1YzkzMGNmNTlkNDZkMzdmOGRjNjA4YWM4NGI1ZTZiZjAxMDM1YjM0NmE1uNn0sA==: --dhchap-ctrl-secret DHHC-1:01:YTI4MTk4MzlhMWY4NDU4OWNkYWNiZDRkNmZjZTI0YmEagh6r: 00:15:27.109 20:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:27.109 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:27.109 20:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:27.109 20:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.109 20:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.109 20:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.109 20:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:27.109 20:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:27.109 20:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:27.367 20:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:15:27.367 20:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:27.367 20:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:27.367 20:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:27.367 20:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:27.367 20:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:27.367 20:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:27.367 20:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.367 20:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.367 20:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.367 20:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:27.367 20:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:27.367 20:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:27.933 00:15:27.933 20:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:27.933 20:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:27.933 20:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:28.192 20:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:28.192 20:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:28.192 20:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.192 20:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.192 20:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.192 20:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:28.192 { 00:15:28.192 "cntlid": 63, 00:15:28.192 "qid": 0, 00:15:28.192 "state": "enabled", 00:15:28.192 "thread": "nvmf_tgt_poll_group_000", 00:15:28.192 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:28.192 "listen_address": { 00:15:28.192 "trtype": "TCP", 00:15:28.192 "adrfam": 
"IPv4", 00:15:28.192 "traddr": "10.0.0.2", 00:15:28.192 "trsvcid": "4420" 00:15:28.192 }, 00:15:28.192 "peer_address": { 00:15:28.192 "trtype": "TCP", 00:15:28.192 "adrfam": "IPv4", 00:15:28.192 "traddr": "10.0.0.1", 00:15:28.192 "trsvcid": "36558" 00:15:28.192 }, 00:15:28.192 "auth": { 00:15:28.192 "state": "completed", 00:15:28.192 "digest": "sha384", 00:15:28.192 "dhgroup": "ffdhe2048" 00:15:28.192 } 00:15:28.192 } 00:15:28.192 ]' 00:15:28.192 20:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:28.192 20:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:28.192 20:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:28.192 20:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:28.192 20:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:28.192 20:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:28.192 20:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:28.192 20:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:28.449 20:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDIyZjJlNzY4OWNhNjQwZjVmMzk2Y2FiZmIyMzI5ZTQwNjgxZTA4ZmE2OGQxN2ZiOTQ2OTQyMDNiYjlhZGE2MgIZE5s=: 00:15:28.449 20:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MDIyZjJlNzY4OWNhNjQwZjVmMzk2Y2FiZmIyMzI5ZTQwNjgxZTA4ZmE2OGQxN2ZiOTQ2OTQyMDNiYjlhZGE2MgIZE5s=: 00:15:29.824 20:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:29.824 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:29.824 20:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:29.824 20:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.824 20:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.824 20:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.824 20:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:29.824 20:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:29.824 20:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:29.824 20:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:29.824 20:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:15:29.824 20:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:29.824 20:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:29.824 
20:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:29.824 20:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:29.824 20:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:29.824 20:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:29.824 20:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.824 20:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.824 20:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.824 20:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:29.824 20:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:29.824 20:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:30.389 00:15:30.390 20:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:30.390 20:56:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:30.390 20:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:30.647 20:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:30.647 20:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:30.647 20:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.647 20:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.647 20:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.648 20:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:30.648 { 00:15:30.648 "cntlid": 65, 00:15:30.648 "qid": 0, 00:15:30.648 "state": "enabled", 00:15:30.648 "thread": "nvmf_tgt_poll_group_000", 00:15:30.648 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:30.648 "listen_address": { 00:15:30.648 "trtype": "TCP", 00:15:30.648 "adrfam": "IPv4", 00:15:30.648 "traddr": "10.0.0.2", 00:15:30.648 "trsvcid": "4420" 00:15:30.648 }, 00:15:30.648 "peer_address": { 00:15:30.648 "trtype": "TCP", 00:15:30.648 "adrfam": "IPv4", 00:15:30.648 "traddr": "10.0.0.1", 00:15:30.648 "trsvcid": "36590" 00:15:30.648 }, 00:15:30.648 "auth": { 00:15:30.648 "state": "completed", 00:15:30.648 "digest": "sha384", 00:15:30.648 "dhgroup": "ffdhe3072" 00:15:30.648 } 00:15:30.648 } 00:15:30.648 ]' 00:15:30.648 20:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:30.648 20:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:15:30.648 20:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:30.648 20:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:30.648 20:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:30.648 20:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:30.648 20:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:30.648 20:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:30.905 20:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTAxYTlmZDUyMzdmMTE4MmVkMDM3MmRhY2UzMTdiNDBhM2U3Nzg5YTdhYTMzMTg5pxnC4w==: --dhchap-ctrl-secret DHHC-1:03:NzcxNjg2YzZjNzIwYjQ1MDE5ZDc0ZWYxNzFjYjQ3MTRkNjMzNTcxYWM5NzY1YTg3OGFkYzQzZjcxNTMwNzMwMh0q/go=: 00:15:30.906 20:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZTAxYTlmZDUyMzdmMTE4MmVkMDM3MmRhY2UzMTdiNDBhM2U3Nzg5YTdhYTMzMTg5pxnC4w==: --dhchap-ctrl-secret DHHC-1:03:NzcxNjg2YzZjNzIwYjQ1MDE5ZDc0ZWYxNzFjYjQ3MTRkNjMzNTcxYWM5NzY1YTg3OGFkYzQzZjcxNTMwNzMwMh0q/go=: 00:15:31.839 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:31.839 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:31.839 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:31.839 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.839 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.839 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.839 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:31.839 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:31.839 20:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:32.097 20:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:15:32.097 20:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:32.097 20:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:32.097 20:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:32.097 20:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:32.097 20:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:32.355 20:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:15:32.355 20:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.355 20:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.355 20:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.355 20:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:32.355 20:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:32.355 20:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:32.612 00:15:32.612 20:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:32.612 20:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:32.612 20:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:32.869 20:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.869 20:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:32.869 20:56:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.869 20:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.869 20:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.869 20:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:32.869 { 00:15:32.869 "cntlid": 67, 00:15:32.869 "qid": 0, 00:15:32.869 "state": "enabled", 00:15:32.869 "thread": "nvmf_tgt_poll_group_000", 00:15:32.869 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:32.869 "listen_address": { 00:15:32.869 "trtype": "TCP", 00:15:32.869 "adrfam": "IPv4", 00:15:32.869 "traddr": "10.0.0.2", 00:15:32.869 "trsvcid": "4420" 00:15:32.869 }, 00:15:32.869 "peer_address": { 00:15:32.869 "trtype": "TCP", 00:15:32.869 "adrfam": "IPv4", 00:15:32.869 "traddr": "10.0.0.1", 00:15:32.869 "trsvcid": "54546" 00:15:32.869 }, 00:15:32.869 "auth": { 00:15:32.869 "state": "completed", 00:15:32.869 "digest": "sha384", 00:15:32.869 "dhgroup": "ffdhe3072" 00:15:32.869 } 00:15:32.869 } 00:15:32.869 ]' 00:15:32.869 20:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:32.869 20:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:32.869 20:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:32.869 20:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:32.869 20:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:33.126 20:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:33.126 20:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:33.126 20:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:33.384 20:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWIxZDY5NzgyZDk4M2VlNzIzYzJiNmQzMjdjNDRkNWP7QUfO: --dhchap-ctrl-secret DHHC-1:02:MzE1OGRmZjA3MTUyMDYzZWM1YTU2MWJmN2JiMzlhYTE0MmQ4NjFlNmI2ODFiNmJkiwp7Tg==: 00:15:33.384 20:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OWIxZDY5NzgyZDk4M2VlNzIzYzJiNmQzMjdjNDRkNWP7QUfO: --dhchap-ctrl-secret DHHC-1:02:MzE1OGRmZjA3MTUyMDYzZWM1YTU2MWJmN2JiMzlhYTE0MmQ4NjFlNmI2ODFiNmJkiwp7Tg==: 00:15:34.317 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:34.317 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:34.317 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:34.317 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.317 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.317 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.317 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:34.317 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:34.317 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:34.576 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:15:34.576 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:34.576 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:34.576 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:34.576 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:34.576 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:34.576 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:34.576 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.576 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.576 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.576 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:34.576 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:34.576 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:35.142 00:15:35.142 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:35.142 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:35.143 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:35.401 20:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.401 20:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:35.401 20:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.401 20:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.401 20:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.401 20:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:35.401 { 00:15:35.401 "cntlid": 69, 00:15:35.401 "qid": 0, 00:15:35.401 "state": "enabled", 00:15:35.401 "thread": "nvmf_tgt_poll_group_000", 00:15:35.401 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:35.401 
"listen_address": { 00:15:35.401 "trtype": "TCP", 00:15:35.401 "adrfam": "IPv4", 00:15:35.401 "traddr": "10.0.0.2", 00:15:35.401 "trsvcid": "4420" 00:15:35.401 }, 00:15:35.401 "peer_address": { 00:15:35.401 "trtype": "TCP", 00:15:35.401 "adrfam": "IPv4", 00:15:35.401 "traddr": "10.0.0.1", 00:15:35.401 "trsvcid": "54574" 00:15:35.401 }, 00:15:35.401 "auth": { 00:15:35.401 "state": "completed", 00:15:35.401 "digest": "sha384", 00:15:35.401 "dhgroup": "ffdhe3072" 00:15:35.401 } 00:15:35.401 } 00:15:35.401 ]' 00:15:35.401 20:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:35.401 20:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:35.401 20:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:35.401 20:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:35.401 20:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:35.401 20:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:35.401 20:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:35.401 20:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:35.661 20:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzIyYjE1YzkzMGNmNTlkNDZkMzdmOGRjNjA4YWM4NGI1ZTZiZjAxMDM1YjM0NmE1uNn0sA==: --dhchap-ctrl-secret DHHC-1:01:YTI4MTk4MzlhMWY4NDU4OWNkYWNiZDRkNmZjZTI0YmEagh6r: 00:15:35.661 20:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YzIyYjE1YzkzMGNmNTlkNDZkMzdmOGRjNjA4YWM4NGI1ZTZiZjAxMDM1YjM0NmE1uNn0sA==: --dhchap-ctrl-secret DHHC-1:01:YTI4MTk4MzlhMWY4NDU4OWNkYWNiZDRkNmZjZTI0YmEagh6r: 00:15:36.640 20:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:36.640 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:36.640 20:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:36.640 20:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.640 20:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.640 20:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.640 20:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:36.640 20:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:36.640 20:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:36.898 20:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:15:36.898 20:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:36.898 20:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:15:36.898 20:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:36.898 20:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:36.898 20:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:36.898 20:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:36.898 20:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.898 20:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.898 20:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.898 20:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:36.898 20:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:36.899 20:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:37.465 00:15:37.465 20:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:37.465 20:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:37.465 20:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:37.723 20:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.723 20:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:37.723 20:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.723 20:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.723 20:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.723 20:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:37.723 { 00:15:37.723 "cntlid": 71, 00:15:37.723 "qid": 0, 00:15:37.723 "state": "enabled", 00:15:37.723 "thread": "nvmf_tgt_poll_group_000", 00:15:37.723 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:37.723 "listen_address": { 00:15:37.723 "trtype": "TCP", 00:15:37.723 "adrfam": "IPv4", 00:15:37.723 "traddr": "10.0.0.2", 00:15:37.723 "trsvcid": "4420" 00:15:37.723 }, 00:15:37.723 "peer_address": { 00:15:37.723 "trtype": "TCP", 00:15:37.723 "adrfam": "IPv4", 00:15:37.723 "traddr": "10.0.0.1", 00:15:37.723 "trsvcid": "54614" 00:15:37.723 }, 00:15:37.723 "auth": { 00:15:37.723 "state": "completed", 00:15:37.723 "digest": "sha384", 00:15:37.723 "dhgroup": "ffdhe3072" 00:15:37.723 } 00:15:37.723 } 00:15:37.723 ]' 00:15:37.723 20:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:37.723 20:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:37.723 20:56:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:37.723 20:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:37.723 20:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:37.723 20:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:37.723 20:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:37.723 20:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:37.982 20:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDIyZjJlNzY4OWNhNjQwZjVmMzk2Y2FiZmIyMzI5ZTQwNjgxZTA4ZmE2OGQxN2ZiOTQ2OTQyMDNiYjlhZGE2MgIZE5s=: 00:15:37.982 20:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MDIyZjJlNzY4OWNhNjQwZjVmMzk2Y2FiZmIyMzI5ZTQwNjgxZTA4ZmE2OGQxN2ZiOTQ2OTQyMDNiYjlhZGE2MgIZE5s=: 00:15:38.915 20:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:38.915 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:38.915 20:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:38.915 20:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:38.915 20:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.915 20:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.915 20:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:38.915 20:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:38.915 20:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:38.915 20:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:39.481 20:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:15:39.481 20:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:39.481 20:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:39.481 20:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:39.481 20:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:39.481 20:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:39.481 20:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:39.481 20:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:39.482 20:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.482 20:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.482 20:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:39.482 20:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:39.482 20:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:39.739 00:15:39.739 20:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:39.740 20:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.740 20:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:39.998 20:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.998 20:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:39.998 20:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.998 20:56:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.998 20:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.998 20:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:39.998 { 00:15:39.998 "cntlid": 73, 00:15:39.998 "qid": 0, 00:15:39.998 "state": "enabled", 00:15:39.998 "thread": "nvmf_tgt_poll_group_000", 00:15:39.998 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:39.998 "listen_address": { 00:15:39.998 "trtype": "TCP", 00:15:39.998 "adrfam": "IPv4", 00:15:39.998 "traddr": "10.0.0.2", 00:15:39.998 "trsvcid": "4420" 00:15:39.998 }, 00:15:39.998 "peer_address": { 00:15:39.998 "trtype": "TCP", 00:15:39.998 "adrfam": "IPv4", 00:15:39.998 "traddr": "10.0.0.1", 00:15:39.998 "trsvcid": "54658" 00:15:39.998 }, 00:15:39.998 "auth": { 00:15:39.998 "state": "completed", 00:15:39.998 "digest": "sha384", 00:15:39.998 "dhgroup": "ffdhe4096" 00:15:39.998 } 00:15:39.998 } 00:15:39.998 ]' 00:15:39.998 20:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:40.256 20:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:40.256 20:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:40.256 20:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:40.256 20:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:40.256 20:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:40.256 20:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:40.256 20:56:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:40.515 20:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTAxYTlmZDUyMzdmMTE4MmVkMDM3MmRhY2UzMTdiNDBhM2U3Nzg5YTdhYTMzMTg5pxnC4w==: --dhchap-ctrl-secret DHHC-1:03:NzcxNjg2YzZjNzIwYjQ1MDE5ZDc0ZWYxNzFjYjQ3MTRkNjMzNTcxYWM5NzY1YTg3OGFkYzQzZjcxNTMwNzMwMh0q/go=: 00:15:40.515 20:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZTAxYTlmZDUyMzdmMTE4MmVkMDM3MmRhY2UzMTdiNDBhM2U3Nzg5YTdhYTMzMTg5pxnC4w==: --dhchap-ctrl-secret DHHC-1:03:NzcxNjg2YzZjNzIwYjQ1MDE5ZDc0ZWYxNzFjYjQ3MTRkNjMzNTcxYWM5NzY1YTg3OGFkYzQzZjcxNTMwNzMwMh0q/go=: 00:15:41.449 20:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:41.449 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:41.449 20:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:41.449 20:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.449 20:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.449 20:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.449 20:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:41.449 20:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:41.449 20:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:42.015 20:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:15:42.015 20:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:42.016 20:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:42.016 20:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:42.016 20:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:42.016 20:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:42.016 20:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:42.016 20:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.016 20:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.016 20:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.016 20:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:42.016 20:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:42.016 20:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:42.274 00:15:42.274 20:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:42.274 20:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:42.274 20:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.532 20:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.532 20:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:42.532 20:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.532 20:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.532 20:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.532 20:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:42.532 { 00:15:42.532 "cntlid": 75, 00:15:42.532 "qid": 0, 00:15:42.532 "state": "enabled", 00:15:42.532 "thread": "nvmf_tgt_poll_group_000", 00:15:42.532 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:42.532 
"listen_address": { 00:15:42.532 "trtype": "TCP", 00:15:42.532 "adrfam": "IPv4", 00:15:42.532 "traddr": "10.0.0.2", 00:15:42.532 "trsvcid": "4420" 00:15:42.532 }, 00:15:42.532 "peer_address": { 00:15:42.532 "trtype": "TCP", 00:15:42.532 "adrfam": "IPv4", 00:15:42.532 "traddr": "10.0.0.1", 00:15:42.532 "trsvcid": "54674" 00:15:42.532 }, 00:15:42.532 "auth": { 00:15:42.532 "state": "completed", 00:15:42.532 "digest": "sha384", 00:15:42.532 "dhgroup": "ffdhe4096" 00:15:42.532 } 00:15:42.532 } 00:15:42.532 ]' 00:15:42.532 20:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:42.532 20:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:42.532 20:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:42.532 20:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:42.532 20:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:42.790 20:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:42.790 20:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:42.790 20:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:43.048 20:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWIxZDY5NzgyZDk4M2VlNzIzYzJiNmQzMjdjNDRkNWP7QUfO: --dhchap-ctrl-secret DHHC-1:02:MzE1OGRmZjA3MTUyMDYzZWM1YTU2MWJmN2JiMzlhYTE0MmQ4NjFlNmI2ODFiNmJkiwp7Tg==: 00:15:43.048 20:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OWIxZDY5NzgyZDk4M2VlNzIzYzJiNmQzMjdjNDRkNWP7QUfO: --dhchap-ctrl-secret DHHC-1:02:MzE1OGRmZjA3MTUyMDYzZWM1YTU2MWJmN2JiMzlhYTE0MmQ4NjFlNmI2ODFiNmJkiwp7Tg==: 00:15:43.981 20:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:43.981 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:43.981 20:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:43.981 20:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.981 20:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.981 20:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.981 20:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:43.981 20:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:43.981 20:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:44.239 20:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:15:44.239 20:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:44.239 20:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:15:44.239 20:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:44.239 20:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:44.239 20:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:44.239 20:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:44.239 20:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.239 20:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.239 20:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.239 20:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:44.239 20:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:44.239 20:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:44.497 00:15:44.497 20:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:15:44.497 20:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:44.497 20:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:44.755 20:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.755 20:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:44.755 20:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.755 20:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.755 20:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.755 20:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:44.755 { 00:15:44.755 "cntlid": 77, 00:15:44.755 "qid": 0, 00:15:44.755 "state": "enabled", 00:15:44.755 "thread": "nvmf_tgt_poll_group_000", 00:15:44.755 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:44.755 "listen_address": { 00:15:44.755 "trtype": "TCP", 00:15:44.755 "adrfam": "IPv4", 00:15:44.755 "traddr": "10.0.0.2", 00:15:44.755 "trsvcid": "4420" 00:15:44.755 }, 00:15:44.755 "peer_address": { 00:15:44.755 "trtype": "TCP", 00:15:44.755 "adrfam": "IPv4", 00:15:44.755 "traddr": "10.0.0.1", 00:15:44.755 "trsvcid": "44988" 00:15:44.755 }, 00:15:44.755 "auth": { 00:15:44.755 "state": "completed", 00:15:44.755 "digest": "sha384", 00:15:44.755 "dhgroup": "ffdhe4096" 00:15:44.755 } 00:15:44.755 } 00:15:44.755 ]' 00:15:44.755 20:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:45.014 20:56:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:45.014 20:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:45.014 20:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:45.014 20:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:45.014 20:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:45.014 20:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:45.014 20:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:45.272 20:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzIyYjE1YzkzMGNmNTlkNDZkMzdmOGRjNjA4YWM4NGI1ZTZiZjAxMDM1YjM0NmE1uNn0sA==: --dhchap-ctrl-secret DHHC-1:01:YTI4MTk4MzlhMWY4NDU4OWNkYWNiZDRkNmZjZTI0YmEagh6r: 00:15:45.272 20:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YzIyYjE1YzkzMGNmNTlkNDZkMzdmOGRjNjA4YWM4NGI1ZTZiZjAxMDM1YjM0NmE1uNn0sA==: --dhchap-ctrl-secret DHHC-1:01:YTI4MTk4MzlhMWY4NDU4OWNkYWNiZDRkNmZjZTI0YmEagh6r: 00:15:46.205 20:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:46.205 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:46.205 20:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:46.205 20:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.205 20:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.205 20:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.205 20:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:46.205 20:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:46.205 20:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:46.463 20:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:15:46.463 20:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:46.463 20:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:46.463 20:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:46.463 20:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:46.463 20:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:46.463 20:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:46.463 20:56:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.463 20:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.463 20:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.463 20:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:46.463 20:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:46.463 20:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:47.029 00:15:47.029 20:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:47.029 20:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:47.029 20:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.286 20:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.286 20:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.286 20:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.286 20:56:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.286 20:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.286 20:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:47.286 { 00:15:47.286 "cntlid": 79, 00:15:47.286 "qid": 0, 00:15:47.286 "state": "enabled", 00:15:47.286 "thread": "nvmf_tgt_poll_group_000", 00:15:47.286 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:47.286 "listen_address": { 00:15:47.286 "trtype": "TCP", 00:15:47.286 "adrfam": "IPv4", 00:15:47.286 "traddr": "10.0.0.2", 00:15:47.286 "trsvcid": "4420" 00:15:47.286 }, 00:15:47.286 "peer_address": { 00:15:47.286 "trtype": "TCP", 00:15:47.286 "adrfam": "IPv4", 00:15:47.286 "traddr": "10.0.0.1", 00:15:47.286 "trsvcid": "45014" 00:15:47.286 }, 00:15:47.286 "auth": { 00:15:47.286 "state": "completed", 00:15:47.286 "digest": "sha384", 00:15:47.286 "dhgroup": "ffdhe4096" 00:15:47.286 } 00:15:47.286 } 00:15:47.286 ]' 00:15:47.286 20:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:47.286 20:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:47.286 20:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:47.286 20:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:47.286 20:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:47.286 20:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:47.286 20:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:47.286 20:56:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:47.543 20:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDIyZjJlNzY4OWNhNjQwZjVmMzk2Y2FiZmIyMzI5ZTQwNjgxZTA4ZmE2OGQxN2ZiOTQ2OTQyMDNiYjlhZGE2MgIZE5s=: 00:15:47.543 20:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MDIyZjJlNzY4OWNhNjQwZjVmMzk2Y2FiZmIyMzI5ZTQwNjgxZTA4ZmE2OGQxN2ZiOTQ2OTQyMDNiYjlhZGE2MgIZE5s=: 00:15:48.476 20:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:48.734 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:48.734 20:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:48.734 20:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.734 20:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.734 20:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.734 20:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:48.734 20:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:48.734 20:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:15:48.734 20:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:48.994 20:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:15:48.994 20:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:48.994 20:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:48.994 20:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:48.994 20:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:48.994 20:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:48.994 20:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:48.994 20:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.994 20:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.994 20:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.994 20:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:48.994 20:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:48.994 20:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:49.560 00:15:49.560 20:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:49.560 20:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:49.560 20:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:49.817 20:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.817 20:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.817 20:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.818 20:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.818 20:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.818 20:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:49.818 { 00:15:49.818 "cntlid": 81, 00:15:49.818 "qid": 0, 00:15:49.818 "state": "enabled", 00:15:49.818 "thread": "nvmf_tgt_poll_group_000", 00:15:49.818 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:49.818 "listen_address": { 
00:15:49.818 "trtype": "TCP", 00:15:49.818 "adrfam": "IPv4", 00:15:49.818 "traddr": "10.0.0.2", 00:15:49.818 "trsvcid": "4420" 00:15:49.818 }, 00:15:49.818 "peer_address": { 00:15:49.818 "trtype": "TCP", 00:15:49.818 "adrfam": "IPv4", 00:15:49.818 "traddr": "10.0.0.1", 00:15:49.818 "trsvcid": "45046" 00:15:49.818 }, 00:15:49.818 "auth": { 00:15:49.818 "state": "completed", 00:15:49.818 "digest": "sha384", 00:15:49.818 "dhgroup": "ffdhe6144" 00:15:49.818 } 00:15:49.818 } 00:15:49.818 ]' 00:15:49.818 20:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:49.818 20:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:49.818 20:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:50.075 20:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:50.075 20:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:50.075 20:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:50.075 20:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:50.075 20:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:50.333 20:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTAxYTlmZDUyMzdmMTE4MmVkMDM3MmRhY2UzMTdiNDBhM2U3Nzg5YTdhYTMzMTg5pxnC4w==: --dhchap-ctrl-secret DHHC-1:03:NzcxNjg2YzZjNzIwYjQ1MDE5ZDc0ZWYxNzFjYjQ3MTRkNjMzNTcxYWM5NzY1YTg3OGFkYzQzZjcxNTMwNzMwMh0q/go=: 00:15:50.333 20:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZTAxYTlmZDUyMzdmMTE4MmVkMDM3MmRhY2UzMTdiNDBhM2U3Nzg5YTdhYTMzMTg5pxnC4w==: --dhchap-ctrl-secret DHHC-1:03:NzcxNjg2YzZjNzIwYjQ1MDE5ZDc0ZWYxNzFjYjQ3MTRkNjMzNTcxYWM5NzY1YTg3OGFkYzQzZjcxNTMwNzMwMh0q/go=: 00:15:51.267 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:51.267 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:51.267 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:51.267 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.267 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.267 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.267 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:51.267 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:51.267 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:51.525 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:15:51.525 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:15:51.525 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:51.525 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:51.525 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:51.525 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:51.525 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.525 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.525 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.525 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.525 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.525 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.525 20:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:52.091 00:15:52.091 20:56:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:52.091 20:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:52.091 20:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:52.657 20:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.657 20:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:52.657 20:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.657 20:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.657 20:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.657 20:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:52.657 { 00:15:52.657 "cntlid": 83, 00:15:52.657 "qid": 0, 00:15:52.657 "state": "enabled", 00:15:52.657 "thread": "nvmf_tgt_poll_group_000", 00:15:52.657 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:52.657 "listen_address": { 00:15:52.657 "trtype": "TCP", 00:15:52.657 "adrfam": "IPv4", 00:15:52.657 "traddr": "10.0.0.2", 00:15:52.657 "trsvcid": "4420" 00:15:52.657 }, 00:15:52.657 "peer_address": { 00:15:52.657 "trtype": "TCP", 00:15:52.657 "adrfam": "IPv4", 00:15:52.657 "traddr": "10.0.0.1", 00:15:52.657 "trsvcid": "45072" 00:15:52.657 }, 00:15:52.657 "auth": { 00:15:52.657 "state": "completed", 00:15:52.657 "digest": "sha384", 00:15:52.657 "dhgroup": "ffdhe6144" 00:15:52.657 } 00:15:52.657 } 00:15:52.657 ]' 00:15:52.657 20:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:15:52.657 20:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:52.657 20:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:52.657 20:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:52.657 20:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:52.657 20:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:52.657 20:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:52.657 20:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:52.915 20:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWIxZDY5NzgyZDk4M2VlNzIzYzJiNmQzMjdjNDRkNWP7QUfO: --dhchap-ctrl-secret DHHC-1:02:MzE1OGRmZjA3MTUyMDYzZWM1YTU2MWJmN2JiMzlhYTE0MmQ4NjFlNmI2ODFiNmJkiwp7Tg==: 00:15:52.915 20:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OWIxZDY5NzgyZDk4M2VlNzIzYzJiNmQzMjdjNDRkNWP7QUfO: --dhchap-ctrl-secret DHHC-1:02:MzE1OGRmZjA3MTUyMDYzZWM1YTU2MWJmN2JiMzlhYTE0MmQ4NjFlNmI2ODFiNmJkiwp7Tg==: 00:15:53.849 20:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:53.849 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:53.849 20:56:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:53.849 20:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.849 20:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.849 20:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.849 20:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:53.849 20:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:53.849 20:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:54.107 20:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:15:54.107 20:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:54.107 20:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:54.107 20:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:54.107 20:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:54.107 20:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:54.107 20:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:54.107 20:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.107 20:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.107 20:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.107 20:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:54.107 20:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:54.107 20:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:54.673 00:15:54.673 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:54.673 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:54.673 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:54.931 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:54.931 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:54.931 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.931 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.931 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.931 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:54.931 { 00:15:54.931 "cntlid": 85, 00:15:54.931 "qid": 0, 00:15:54.931 "state": "enabled", 00:15:54.931 "thread": "nvmf_tgt_poll_group_000", 00:15:54.931 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:54.931 "listen_address": { 00:15:54.931 "trtype": "TCP", 00:15:54.931 "adrfam": "IPv4", 00:15:54.931 "traddr": "10.0.0.2", 00:15:54.931 "trsvcid": "4420" 00:15:54.931 }, 00:15:54.931 "peer_address": { 00:15:54.931 "trtype": "TCP", 00:15:54.931 "adrfam": "IPv4", 00:15:54.931 "traddr": "10.0.0.1", 00:15:54.931 "trsvcid": "40308" 00:15:54.931 }, 00:15:54.931 "auth": { 00:15:54.931 "state": "completed", 00:15:54.931 "digest": "sha384", 00:15:54.931 "dhgroup": "ffdhe6144" 00:15:54.931 } 00:15:54.931 } 00:15:54.931 ]' 00:15:54.931 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:55.188 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:55.188 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:55.188 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:55.188 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:55.188 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:15:55.188 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:55.188 20:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:55.445 20:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzIyYjE1YzkzMGNmNTlkNDZkMzdmOGRjNjA4YWM4NGI1ZTZiZjAxMDM1YjM0NmE1uNn0sA==: --dhchap-ctrl-secret DHHC-1:01:YTI4MTk4MzlhMWY4NDU4OWNkYWNiZDRkNmZjZTI0YmEagh6r: 00:15:55.445 20:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YzIyYjE1YzkzMGNmNTlkNDZkMzdmOGRjNjA4YWM4NGI1ZTZiZjAxMDM1YjM0NmE1uNn0sA==: --dhchap-ctrl-secret DHHC-1:01:YTI4MTk4MzlhMWY4NDU4OWNkYWNiZDRkNmZjZTI0YmEagh6r: 00:15:56.379 20:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:56.379 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:56.379 20:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:56.379 20:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.379 20:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.379 20:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.379 20:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:15:56.379 20:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:56.379 20:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:56.943 20:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:15:56.943 20:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:56.943 20:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:56.943 20:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:56.943 20:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:56.943 20:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:56.943 20:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:56.943 20:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.943 20:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.943 20:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.943 20:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:56.943 20:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:56.943 20:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:57.509 00:15:57.509 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:57.509 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:57.509 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:57.766 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:57.766 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:57.766 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.766 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.766 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.766 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:57.766 { 00:15:57.766 "cntlid": 87, 00:15:57.766 "qid": 0, 00:15:57.766 "state": "enabled", 00:15:57.766 "thread": "nvmf_tgt_poll_group_000", 00:15:57.766 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:57.766 "listen_address": { 00:15:57.766 "trtype": 
"TCP", 00:15:57.766 "adrfam": "IPv4", 00:15:57.766 "traddr": "10.0.0.2", 00:15:57.766 "trsvcid": "4420" 00:15:57.766 }, 00:15:57.766 "peer_address": { 00:15:57.766 "trtype": "TCP", 00:15:57.766 "adrfam": "IPv4", 00:15:57.766 "traddr": "10.0.0.1", 00:15:57.766 "trsvcid": "40338" 00:15:57.766 }, 00:15:57.766 "auth": { 00:15:57.766 "state": "completed", 00:15:57.766 "digest": "sha384", 00:15:57.766 "dhgroup": "ffdhe6144" 00:15:57.766 } 00:15:57.766 } 00:15:57.766 ]' 00:15:57.766 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:57.766 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:57.766 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:57.766 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:57.767 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:57.767 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:57.767 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:57.767 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:58.024 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDIyZjJlNzY4OWNhNjQwZjVmMzk2Y2FiZmIyMzI5ZTQwNjgxZTA4ZmE2OGQxN2ZiOTQ2OTQyMDNiYjlhZGE2MgIZE5s=: 00:15:58.025 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MDIyZjJlNzY4OWNhNjQwZjVmMzk2Y2FiZmIyMzI5ZTQwNjgxZTA4ZmE2OGQxN2ZiOTQ2OTQyMDNiYjlhZGE2MgIZE5s=: 00:15:58.957 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:58.957 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:58.957 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:58.957 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.958 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.958 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.958 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:58.958 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:58.958 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:58.958 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:59.522 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:15:59.522 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:59.522 20:56:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:59.522 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:59.522 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:59.522 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:59.522 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:59.522 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.522 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.522 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.522 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:59.522 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:59.522 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:00.454 00:16:00.454 20:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:00.454 20:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:00.454 20:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:00.712 20:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.713 20:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:00.713 20:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.713 20:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.713 20:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.713 20:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:00.713 { 00:16:00.713 "cntlid": 89, 00:16:00.713 "qid": 0, 00:16:00.713 "state": "enabled", 00:16:00.713 "thread": "nvmf_tgt_poll_group_000", 00:16:00.713 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:00.713 "listen_address": { 00:16:00.713 "trtype": "TCP", 00:16:00.713 "adrfam": "IPv4", 00:16:00.713 "traddr": "10.0.0.2", 00:16:00.713 "trsvcid": "4420" 00:16:00.713 }, 00:16:00.713 "peer_address": { 00:16:00.713 "trtype": "TCP", 00:16:00.713 "adrfam": "IPv4", 00:16:00.713 "traddr": "10.0.0.1", 00:16:00.713 "trsvcid": "40354" 00:16:00.713 }, 00:16:00.713 "auth": { 00:16:00.713 "state": "completed", 00:16:00.713 "digest": "sha384", 00:16:00.713 "dhgroup": "ffdhe8192" 00:16:00.713 } 00:16:00.713 } 00:16:00.713 ]' 00:16:00.713 20:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:00.713 20:56:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:00.713 20:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:00.713 20:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:00.713 20:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:00.713 20:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:00.713 20:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:00.713 20:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:00.970 20:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTAxYTlmZDUyMzdmMTE4MmVkMDM3MmRhY2UzMTdiNDBhM2U3Nzg5YTdhYTMzMTg5pxnC4w==: --dhchap-ctrl-secret DHHC-1:03:NzcxNjg2YzZjNzIwYjQ1MDE5ZDc0ZWYxNzFjYjQ3MTRkNjMzNTcxYWM5NzY1YTg3OGFkYzQzZjcxNTMwNzMwMh0q/go=: 00:16:00.970 20:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZTAxYTlmZDUyMzdmMTE4MmVkMDM3MmRhY2UzMTdiNDBhM2U3Nzg5YTdhYTMzMTg5pxnC4w==: --dhchap-ctrl-secret DHHC-1:03:NzcxNjg2YzZjNzIwYjQ1MDE5ZDc0ZWYxNzFjYjQ3MTRkNjMzNTcxYWM5NzY1YTg3OGFkYzQzZjcxNTMwNzMwMh0q/go=: 00:16:02.340 20:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:02.340 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
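The cycle above shows the test's validation pattern: after attaching a controller, auth.sh pipes `nvmf_subsystem_get_qpairs` output through `jq` and string-compares the negotiated `digest`, `dhgroup`, and auth `state` against the expected values. A minimal Python equivalent of that check (a sketch; the JSON field names are taken from the qpair record printed in this log) might be:

```python
import json

# Qpair listing in the shape emitted by nvmf_subsystem_get_qpairs,
# with values copied from the cntlid 89 record in the log above
# (unrelated fields omitted for brevity).
qpairs_json = '''
[
  {
    "cntlid": 89,
    "qid": 0,
    "state": "enabled",
    "auth": {
      "state": "completed",
      "digest": "sha384",
      "dhgroup": "ffdhe8192"
    }
  }
]
'''

def check_auth(qpairs, digest, dhgroup):
    """Mirror the jq checks: auth must have completed with the expected
    digest and DH group on the first qpair."""
    auth = json.loads(qpairs)[0]["auth"]
    return (auth["state"] == "completed"
            and auth["digest"] == digest
            and auth["dhgroup"] == dhgroup)

print(check_auth(qpairs_json, "sha384", "ffdhe8192"))  # True
```

The shell test makes the same three comparisons one at a time (`.[0].auth.digest`, `.[0].auth.dhgroup`, `.[0].auth.state`), which is why each cycle in the log contains three back-to-back `jq -r` invocations.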
00:16:02.340 20:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:02.340 20:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.340 20:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.340 20:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.340 20:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:02.340 20:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:02.340 20:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:02.340 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:16:02.340 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:02.340 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:02.340 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:02.340 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:02.340 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:02.340 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:02.340 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.340 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.340 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.340 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:02.340 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:02.340 20:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:03.271 00:16:03.271 20:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:03.271 20:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:03.271 20:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:03.528 20:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.528 20:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.528 20:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.528 20:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.528 20:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.528 20:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:03.528 { 00:16:03.528 "cntlid": 91, 00:16:03.528 "qid": 0, 00:16:03.528 "state": "enabled", 00:16:03.528 "thread": "nvmf_tgt_poll_group_000", 00:16:03.528 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:03.528 "listen_address": { 00:16:03.528 "trtype": "TCP", 00:16:03.528 "adrfam": "IPv4", 00:16:03.528 "traddr": "10.0.0.2", 00:16:03.528 "trsvcid": "4420" 00:16:03.528 }, 00:16:03.528 "peer_address": { 00:16:03.528 "trtype": "TCP", 00:16:03.528 "adrfam": "IPv4", 00:16:03.528 "traddr": "10.0.0.1", 00:16:03.528 "trsvcid": "46090" 00:16:03.528 }, 00:16:03.528 "auth": { 00:16:03.528 "state": "completed", 00:16:03.528 "digest": "sha384", 00:16:03.528 "dhgroup": "ffdhe8192" 00:16:03.528 } 00:16:03.528 } 00:16:03.528 ]' 00:16:03.528 20:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:03.528 20:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:03.528 20:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:03.528 20:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:03.528 20:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:03.785 20:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:16:03.785 20:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.785 20:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:04.042 20:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWIxZDY5NzgyZDk4M2VlNzIzYzJiNmQzMjdjNDRkNWP7QUfO: --dhchap-ctrl-secret DHHC-1:02:MzE1OGRmZjA3MTUyMDYzZWM1YTU2MWJmN2JiMzlhYTE0MmQ4NjFlNmI2ODFiNmJkiwp7Tg==: 00:16:04.042 20:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OWIxZDY5NzgyZDk4M2VlNzIzYzJiNmQzMjdjNDRkNWP7QUfO: --dhchap-ctrl-secret DHHC-1:02:MzE1OGRmZjA3MTUyMDYzZWM1YTU2MWJmN2JiMzlhYTE0MmQ4NjFlNmI2ODFiNmJkiwp7Tg==: 00:16:04.975 20:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:04.975 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:04.975 20:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:04.975 20:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.975 20:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.975 20:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.975 20:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
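The `--dhchap-secret` / `--dhchap-ctrl-secret` arguments passed to `nvme connect` above are NVMe DH-HMAC-CHAP keys in the `DHHC-1:<hmac id>:<base64 payload>:` format. As an assumption based on the nvme-cli key format (not stated anywhere in this log): the hmac id selects the key transformation (`00` = none, `01` = SHA-256, `02` = SHA-384, `03` = SHA-512) and the base64 payload is the raw secret followed by a 4-byte CRC-32 check value. A sketch that splits one of the secrets from the log into those fields:

```python
import base64

# One of the DH-HMAC-CHAP secrets from the nvme connect invocation above.
secret = "DHHC-1:01:OWIxZDY5NzgyZDk4M2VlNzIzYzJiNmQzMjdjNDRkNWP7QUfO:"

def parse_dhchap_secret(s):
    """Split a DHHC-1 secret into (hmac id, key bytes, trailing check bytes).

    Assumption (nvme-cli key format, not confirmed by this log): the base64
    payload is the raw secret with a 4-byte CRC-32 check value appended.
    """
    prefix, hmac_id, b64, _trailer = s.split(":")
    if prefix != "DHHC-1":
        raise ValueError("not a DHHC-1 secret")
    payload = base64.b64decode(b64)
    return hmac_id, payload[:-4], payload[-4:]

hmac_id, key, crc = parse_dhchap_secret(secret)
print(hmac_id, len(key))  # prints: 01 32
```

The three secret lengths visible in this log (32, 48, and 64 raw bytes before the check value) line up with the `:00:`/`:01:`, `:02:`, and `:03:` keys used for the different digest cycles.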
00:16:04.975 20:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:04.975 20:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:05.233 20:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:16:05.233 20:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:05.233 20:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:05.233 20:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:05.233 20:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:05.233 20:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:05.233 20:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:05.233 20:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.233 20:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.233 20:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.233 20:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:05.233 20:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:05.233 20:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:06.172 00:16:06.172 20:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:06.172 20:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:06.172 20:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:06.460 20:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.460 20:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:06.460 20:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.460 20:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.460 20:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.460 20:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:06.460 { 00:16:06.460 "cntlid": 93, 00:16:06.460 "qid": 0, 00:16:06.460 "state": "enabled", 00:16:06.460 "thread": "nvmf_tgt_poll_group_000", 00:16:06.460 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:06.460 "listen_address": { 00:16:06.460 "trtype": "TCP", 00:16:06.460 "adrfam": "IPv4", 00:16:06.460 "traddr": "10.0.0.2", 00:16:06.460 "trsvcid": "4420" 00:16:06.460 }, 00:16:06.460 "peer_address": { 00:16:06.460 "trtype": "TCP", 00:16:06.460 "adrfam": "IPv4", 00:16:06.460 "traddr": "10.0.0.1", 00:16:06.460 "trsvcid": "46118" 00:16:06.460 }, 00:16:06.460 "auth": { 00:16:06.460 "state": "completed", 00:16:06.460 "digest": "sha384", 00:16:06.460 "dhgroup": "ffdhe8192" 00:16:06.460 } 00:16:06.460 } 00:16:06.460 ]' 00:16:06.460 20:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:06.460 20:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:06.460 20:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:06.460 20:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:06.460 20:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:06.741 20:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:06.741 20:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:06.741 20:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:06.998 20:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzIyYjE1YzkzMGNmNTlkNDZkMzdmOGRjNjA4YWM4NGI1ZTZiZjAxMDM1YjM0NmE1uNn0sA==: --dhchap-ctrl-secret DHHC-1:01:YTI4MTk4MzlhMWY4NDU4OWNkYWNiZDRkNmZjZTI0YmEagh6r: 00:16:06.998 20:56:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YzIyYjE1YzkzMGNmNTlkNDZkMzdmOGRjNjA4YWM4NGI1ZTZiZjAxMDM1YjM0NmE1uNn0sA==: --dhchap-ctrl-secret DHHC-1:01:YTI4MTk4MzlhMWY4NDU4OWNkYWNiZDRkNmZjZTI0YmEagh6r: 00:16:07.932 20:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:07.932 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:07.932 20:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:07.932 20:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.932 20:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.932 20:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.932 20:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:07.932 20:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:07.932 20:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:08.190 20:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:16:08.190 20:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:16:08.190 20:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:08.190 20:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:08.190 20:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:08.190 20:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:08.190 20:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:08.190 20:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.190 20:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.190 20:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.190 20:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:08.190 20:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:08.190 20:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:09.135 00:16:09.135 20:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:16:09.135 20:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:09.135 20:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.393 20:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.393 20:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.393 20:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.393 20:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.393 20:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.393 20:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:09.393 { 00:16:09.393 "cntlid": 95, 00:16:09.393 "qid": 0, 00:16:09.393 "state": "enabled", 00:16:09.393 "thread": "nvmf_tgt_poll_group_000", 00:16:09.393 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:09.393 "listen_address": { 00:16:09.393 "trtype": "TCP", 00:16:09.393 "adrfam": "IPv4", 00:16:09.393 "traddr": "10.0.0.2", 00:16:09.393 "trsvcid": "4420" 00:16:09.393 }, 00:16:09.393 "peer_address": { 00:16:09.393 "trtype": "TCP", 00:16:09.393 "adrfam": "IPv4", 00:16:09.393 "traddr": "10.0.0.1", 00:16:09.393 "trsvcid": "46138" 00:16:09.393 }, 00:16:09.393 "auth": { 00:16:09.393 "state": "completed", 00:16:09.393 "digest": "sha384", 00:16:09.393 "dhgroup": "ffdhe8192" 00:16:09.393 } 00:16:09.393 } 00:16:09.393 ]' 00:16:09.393 20:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:09.393 20:57:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:09.393 20:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:09.393 20:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:09.393 20:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:09.393 20:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:09.393 20:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:09.393 20:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:09.651 20:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDIyZjJlNzY4OWNhNjQwZjVmMzk2Y2FiZmIyMzI5ZTQwNjgxZTA4ZmE2OGQxN2ZiOTQ2OTQyMDNiYjlhZGE2MgIZE5s=: 00:16:09.651 20:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MDIyZjJlNzY4OWNhNjQwZjVmMzk2Y2FiZmIyMzI5ZTQwNjgxZTA4ZmE2OGQxN2ZiOTQ2OTQyMDNiYjlhZGE2MgIZE5s=: 00:16:10.586 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:10.586 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:10.586 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:10.586 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.586 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.586 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.586 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:10.586 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:10.586 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:10.586 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:10.586 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:10.844 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:16:10.844 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:10.844 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:10.844 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:10.844 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:10.844 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:10.844 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.844 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.844 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.844 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.844 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.844 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.844 20:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:11.413 00:16:11.413 20:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:11.413 20:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:11.413 20:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:11.672 20:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.672 20:57:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:11.672 20:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.672 20:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.672 20:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.672 20:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:11.672 { 00:16:11.672 "cntlid": 97, 00:16:11.672 "qid": 0, 00:16:11.672 "state": "enabled", 00:16:11.672 "thread": "nvmf_tgt_poll_group_000", 00:16:11.672 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:11.672 "listen_address": { 00:16:11.672 "trtype": "TCP", 00:16:11.672 "adrfam": "IPv4", 00:16:11.672 "traddr": "10.0.0.2", 00:16:11.672 "trsvcid": "4420" 00:16:11.672 }, 00:16:11.672 "peer_address": { 00:16:11.672 "trtype": "TCP", 00:16:11.672 "adrfam": "IPv4", 00:16:11.672 "traddr": "10.0.0.1", 00:16:11.672 "trsvcid": "46168" 00:16:11.672 }, 00:16:11.672 "auth": { 00:16:11.672 "state": "completed", 00:16:11.672 "digest": "sha512", 00:16:11.672 "dhgroup": "null" 00:16:11.672 } 00:16:11.672 } 00:16:11.672 ]' 00:16:11.672 20:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:11.672 20:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:11.672 20:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:11.672 20:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:11.672 20:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:11.672 20:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:11.672 20:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:11.672 20:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:11.932 20:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTAxYTlmZDUyMzdmMTE4MmVkMDM3MmRhY2UzMTdiNDBhM2U3Nzg5YTdhYTMzMTg5pxnC4w==: --dhchap-ctrl-secret DHHC-1:03:NzcxNjg2YzZjNzIwYjQ1MDE5ZDc0ZWYxNzFjYjQ3MTRkNjMzNTcxYWM5NzY1YTg3OGFkYzQzZjcxNTMwNzMwMh0q/go=: 00:16:11.932 20:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZTAxYTlmZDUyMzdmMTE4MmVkMDM3MmRhY2UzMTdiNDBhM2U3Nzg5YTdhYTMzMTg5pxnC4w==: --dhchap-ctrl-secret DHHC-1:03:NzcxNjg2YzZjNzIwYjQ1MDE5ZDc0ZWYxNzFjYjQ3MTRkNjMzNTcxYWM5NzY1YTg3OGFkYzQzZjcxNTMwNzMwMh0q/go=: 00:16:12.920 20:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:12.920 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:12.920 20:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:12.920 20:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.920 20:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.920 20:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.920 20:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:12.920 20:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:12.920 20:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:13.179 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:16:13.179 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:13.179 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:13.179 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:13.179 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:13.179 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.179 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:13.179 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.179 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.179 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.179 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:13.179 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:13.179 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:13.748 00:16:13.748 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:13.748 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:13.748 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.007 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.007 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:14.007 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.007 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.007 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.007 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:14.007 { 00:16:14.007 "cntlid": 99, 
00:16:14.007 "qid": 0, 00:16:14.007 "state": "enabled", 00:16:14.007 "thread": "nvmf_tgt_poll_group_000", 00:16:14.007 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:14.007 "listen_address": { 00:16:14.007 "trtype": "TCP", 00:16:14.007 "adrfam": "IPv4", 00:16:14.007 "traddr": "10.0.0.2", 00:16:14.007 "trsvcid": "4420" 00:16:14.007 }, 00:16:14.007 "peer_address": { 00:16:14.007 "trtype": "TCP", 00:16:14.007 "adrfam": "IPv4", 00:16:14.007 "traddr": "10.0.0.1", 00:16:14.007 "trsvcid": "36820" 00:16:14.007 }, 00:16:14.007 "auth": { 00:16:14.007 "state": "completed", 00:16:14.007 "digest": "sha512", 00:16:14.007 "dhgroup": "null" 00:16:14.007 } 00:16:14.007 } 00:16:14.007 ]' 00:16:14.007 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:14.007 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:14.007 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:14.007 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:14.007 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:14.007 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.007 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.007 20:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.265 20:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWIxZDY5NzgyZDk4M2VlNzIzYzJiNmQzMjdjNDRkNWP7QUfO: --dhchap-ctrl-secret 
DHHC-1:02:MzE1OGRmZjA3MTUyMDYzZWM1YTU2MWJmN2JiMzlhYTE0MmQ4NjFlNmI2ODFiNmJkiwp7Tg==: 00:16:14.265 20:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OWIxZDY5NzgyZDk4M2VlNzIzYzJiNmQzMjdjNDRkNWP7QUfO: --dhchap-ctrl-secret DHHC-1:02:MzE1OGRmZjA3MTUyMDYzZWM1YTU2MWJmN2JiMzlhYTE0MmQ4NjFlNmI2ODFiNmJkiwp7Tg==: 00:16:15.201 20:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:15.201 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:15.201 20:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:15.201 20:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.201 20:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.201 20:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.201 20:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:15.201 20:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:15.201 20:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:15.460 20:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 
00:16:15.460 20:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:15.460 20:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:15.460 20:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:15.460 20:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:15.460 20:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:15.460 20:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:15.460 20:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.460 20:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.460 20:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.460 20:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:15.460 20:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:15.460 20:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.027 00:16:16.027 20:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:16.027 20:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:16.027 20:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:16.286 20:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.286 20:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:16.286 20:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.286 20:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.286 20:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.286 20:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:16.286 { 00:16:16.286 "cntlid": 101, 00:16:16.286 "qid": 0, 00:16:16.286 "state": "enabled", 00:16:16.286 "thread": "nvmf_tgt_poll_group_000", 00:16:16.286 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:16.286 "listen_address": { 00:16:16.286 "trtype": "TCP", 00:16:16.286 "adrfam": "IPv4", 00:16:16.286 "traddr": "10.0.0.2", 00:16:16.286 "trsvcid": "4420" 00:16:16.286 }, 00:16:16.286 "peer_address": { 00:16:16.286 "trtype": "TCP", 00:16:16.286 "adrfam": "IPv4", 00:16:16.286 "traddr": "10.0.0.1", 00:16:16.286 "trsvcid": "36850" 00:16:16.286 }, 00:16:16.286 "auth": { 00:16:16.286 "state": "completed", 00:16:16.286 "digest": "sha512", 00:16:16.286 "dhgroup": "null" 00:16:16.286 } 00:16:16.286 } 
00:16:16.286 ]' 00:16:16.286 20:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:16.286 20:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:16.286 20:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:16.286 20:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:16.286 20:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:16.286 20:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:16.286 20:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:16.286 20:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:16.545 20:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzIyYjE1YzkzMGNmNTlkNDZkMzdmOGRjNjA4YWM4NGI1ZTZiZjAxMDM1YjM0NmE1uNn0sA==: --dhchap-ctrl-secret DHHC-1:01:YTI4MTk4MzlhMWY4NDU4OWNkYWNiZDRkNmZjZTI0YmEagh6r: 00:16:16.545 20:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YzIyYjE1YzkzMGNmNTlkNDZkMzdmOGRjNjA4YWM4NGI1ZTZiZjAxMDM1YjM0NmE1uNn0sA==: --dhchap-ctrl-secret DHHC-1:01:YTI4MTk4MzlhMWY4NDU4OWNkYWNiZDRkNmZjZTI0YmEagh6r: 00:16:17.479 20:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:17.479 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:17.479 20:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:17.479 20:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.479 20:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.479 20:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.479 20:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:17.480 20:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:17.480 20:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:18.047 20:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:16:18.047 20:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:18.047 20:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:18.047 20:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:18.047 20:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:18.047 20:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.047 20:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:18.047 20:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.047 20:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.047 20:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.047 20:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:18.047 20:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:18.048 20:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:18.306 00:16:18.306 20:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:18.306 20:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.306 20:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:18.564 20:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.564 20:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:16:18.564 20:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.564 20:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.564 20:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.564 20:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:18.564 { 00:16:18.564 "cntlid": 103, 00:16:18.564 "qid": 0, 00:16:18.564 "state": "enabled", 00:16:18.564 "thread": "nvmf_tgt_poll_group_000", 00:16:18.564 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:18.564 "listen_address": { 00:16:18.564 "trtype": "TCP", 00:16:18.564 "adrfam": "IPv4", 00:16:18.564 "traddr": "10.0.0.2", 00:16:18.564 "trsvcid": "4420" 00:16:18.564 }, 00:16:18.564 "peer_address": { 00:16:18.564 "trtype": "TCP", 00:16:18.565 "adrfam": "IPv4", 00:16:18.565 "traddr": "10.0.0.1", 00:16:18.565 "trsvcid": "36880" 00:16:18.565 }, 00:16:18.565 "auth": { 00:16:18.565 "state": "completed", 00:16:18.565 "digest": "sha512", 00:16:18.565 "dhgroup": "null" 00:16:18.565 } 00:16:18.565 } 00:16:18.565 ]' 00:16:18.565 20:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:18.565 20:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:18.565 20:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:18.565 20:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:18.565 20:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:18.565 20:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:18.565 20:57:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:18.565 20:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:18.825 20:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDIyZjJlNzY4OWNhNjQwZjVmMzk2Y2FiZmIyMzI5ZTQwNjgxZTA4ZmE2OGQxN2ZiOTQ2OTQyMDNiYjlhZGE2MgIZE5s=: 00:16:18.825 20:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MDIyZjJlNzY4OWNhNjQwZjVmMzk2Y2FiZmIyMzI5ZTQwNjgxZTA4ZmE2OGQxN2ZiOTQ2OTQyMDNiYjlhZGE2MgIZE5s=: 00:16:20.203 20:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.203 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.203 20:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:20.203 20:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.203 20:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.203 20:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.203 20:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:20.203 20:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:20.203 20:57:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:20.203 20:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:20.203 20:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:16:20.203 20:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:20.203 20:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:20.203 20:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:20.203 20:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:20.203 20:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:20.203 20:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.203 20:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.203 20:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.203 20:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.203 20:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.203 20:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.203 20:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.460 00:16:20.460 20:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:20.460 20:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.460 20:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:20.719 20:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.719 20:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.719 20:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.719 20:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.719 20:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.719 20:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:20.719 { 00:16:20.719 "cntlid": 105, 00:16:20.719 "qid": 0, 00:16:20.719 "state": "enabled", 00:16:20.719 "thread": "nvmf_tgt_poll_group_000", 00:16:20.719 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:20.719 "listen_address": { 00:16:20.719 "trtype": "TCP", 00:16:20.719 "adrfam": "IPv4", 00:16:20.719 "traddr": "10.0.0.2", 00:16:20.719 "trsvcid": "4420" 00:16:20.719 }, 00:16:20.719 "peer_address": { 00:16:20.719 "trtype": "TCP", 00:16:20.719 "adrfam": "IPv4", 00:16:20.719 "traddr": "10.0.0.1", 00:16:20.719 "trsvcid": "36900" 00:16:20.719 }, 00:16:20.719 "auth": { 00:16:20.719 "state": "completed", 00:16:20.719 "digest": "sha512", 00:16:20.719 "dhgroup": "ffdhe2048" 00:16:20.719 } 00:16:20.719 } 00:16:20.719 ]' 00:16:20.719 20:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:20.977 20:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:20.977 20:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:20.977 20:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:20.977 20:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:20.977 20:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.977 20:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.977 20:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.237 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTAxYTlmZDUyMzdmMTE4MmVkMDM3MmRhY2UzMTdiNDBhM2U3Nzg5YTdhYTMzMTg5pxnC4w==: --dhchap-ctrl-secret 
DHHC-1:03:NzcxNjg2YzZjNzIwYjQ1MDE5ZDc0ZWYxNzFjYjQ3MTRkNjMzNTcxYWM5NzY1YTg3OGFkYzQzZjcxNTMwNzMwMh0q/go=: 00:16:21.237 20:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZTAxYTlmZDUyMzdmMTE4MmVkMDM3MmRhY2UzMTdiNDBhM2U3Nzg5YTdhYTMzMTg5pxnC4w==: --dhchap-ctrl-secret DHHC-1:03:NzcxNjg2YzZjNzIwYjQ1MDE5ZDc0ZWYxNzFjYjQ3MTRkNjMzNTcxYWM5NzY1YTg3OGFkYzQzZjcxNTMwNzMwMh0q/go=: 00:16:22.174 20:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.174 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.174 20:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:22.174 20:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.174 20:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.174 20:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.174 20:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:22.174 20:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:22.174 20:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:22.432 20:57:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:16:22.432 20:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:22.432 20:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:22.432 20:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:22.432 20:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:22.432 20:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:22.432 20:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:22.432 20:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.432 20:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.432 20:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.432 20:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:22.432 20:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:22.432 20:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.000 00:16:23.000 20:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:23.000 20:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.000 20:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:23.000 20:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.000 20:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.000 20:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.000 20:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.000 20:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.000 20:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:23.000 { 00:16:23.000 "cntlid": 107, 00:16:23.000 "qid": 0, 00:16:23.000 "state": "enabled", 00:16:23.000 "thread": "nvmf_tgt_poll_group_000", 00:16:23.000 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:23.000 "listen_address": { 00:16:23.000 "trtype": "TCP", 00:16:23.000 "adrfam": "IPv4", 00:16:23.000 "traddr": "10.0.0.2", 00:16:23.000 "trsvcid": "4420" 00:16:23.000 }, 00:16:23.000 "peer_address": { 00:16:23.000 "trtype": "TCP", 00:16:23.000 "adrfam": "IPv4", 00:16:23.000 "traddr": "10.0.0.1", 00:16:23.000 "trsvcid": "54930" 00:16:23.000 }, 00:16:23.000 "auth": { 00:16:23.000 "state": 
"completed", 00:16:23.000 "digest": "sha512", 00:16:23.000 "dhgroup": "ffdhe2048" 00:16:23.000 } 00:16:23.000 } 00:16:23.000 ]' 00:16:23.000 20:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:23.259 20:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:23.259 20:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:23.259 20:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:23.259 20:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:23.259 20:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.259 20:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.259 20:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.517 20:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWIxZDY5NzgyZDk4M2VlNzIzYzJiNmQzMjdjNDRkNWP7QUfO: --dhchap-ctrl-secret DHHC-1:02:MzE1OGRmZjA3MTUyMDYzZWM1YTU2MWJmN2JiMzlhYTE0MmQ4NjFlNmI2ODFiNmJkiwp7Tg==: 00:16:23.517 20:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OWIxZDY5NzgyZDk4M2VlNzIzYzJiNmQzMjdjNDRkNWP7QUfO: --dhchap-ctrl-secret DHHC-1:02:MzE1OGRmZjA3MTUyMDYzZWM1YTU2MWJmN2JiMzlhYTE0MmQ4NjFlNmI2ODFiNmJkiwp7Tg==: 00:16:24.456 20:57:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.456 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.456 20:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:24.456 20:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.456 20:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.456 20:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.456 20:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:24.456 20:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:24.456 20:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:24.714 20:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:16:24.714 20:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:24.714 20:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:24.714 20:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:24.714 20:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:24.714 20:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.714 20:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:24.714 20:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.714 20:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.714 20:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.714 20:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:24.714 20:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:24.714 20:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.282 00:16:25.282 20:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:25.282 20:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.282 20:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:25.282 
20:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.282 20:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.282 20:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.282 20:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.282 20:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.282 20:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:25.282 { 00:16:25.282 "cntlid": 109, 00:16:25.282 "qid": 0, 00:16:25.282 "state": "enabled", 00:16:25.282 "thread": "nvmf_tgt_poll_group_000", 00:16:25.282 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:25.282 "listen_address": { 00:16:25.282 "trtype": "TCP", 00:16:25.282 "adrfam": "IPv4", 00:16:25.282 "traddr": "10.0.0.2", 00:16:25.282 "trsvcid": "4420" 00:16:25.282 }, 00:16:25.282 "peer_address": { 00:16:25.282 "trtype": "TCP", 00:16:25.282 "adrfam": "IPv4", 00:16:25.282 "traddr": "10.0.0.1", 00:16:25.282 "trsvcid": "54946" 00:16:25.282 }, 00:16:25.282 "auth": { 00:16:25.282 "state": "completed", 00:16:25.282 "digest": "sha512", 00:16:25.282 "dhgroup": "ffdhe2048" 00:16:25.282 } 00:16:25.282 } 00:16:25.282 ]' 00:16:25.282 20:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:25.540 20:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:25.540 20:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:25.540 20:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:25.540 20:57:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:25.540 20:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.540 20:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.540 20:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.799 20:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzIyYjE1YzkzMGNmNTlkNDZkMzdmOGRjNjA4YWM4NGI1ZTZiZjAxMDM1YjM0NmE1uNn0sA==: --dhchap-ctrl-secret DHHC-1:01:YTI4MTk4MzlhMWY4NDU4OWNkYWNiZDRkNmZjZTI0YmEagh6r: 00:16:25.799 20:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YzIyYjE1YzkzMGNmNTlkNDZkMzdmOGRjNjA4YWM4NGI1ZTZiZjAxMDM1YjM0NmE1uNn0sA==: --dhchap-ctrl-secret DHHC-1:01:YTI4MTk4MzlhMWY4NDU4OWNkYWNiZDRkNmZjZTI0YmEagh6r: 00:16:26.737 20:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.737 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.737 20:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:26.737 20:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.737 20:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.737 
20:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.737 20:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:26.737 20:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:26.737 20:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:26.995 20:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:16:26.995 20:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:26.995 20:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:26.995 20:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:26.995 20:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:26.995 20:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.995 20:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:26.995 20:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.995 20:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.995 20:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.995 20:57:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:26.995 20:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:26.995 20:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:27.253 00:16:27.253 20:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:27.253 20:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:27.253 20:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.819 20:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.819 20:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:27.819 20:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.819 20:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.819 20:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.819 20:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:27.819 { 00:16:27.819 "cntlid": 111, 
00:16:27.819 "qid": 0, 00:16:27.819 "state": "enabled", 00:16:27.819 "thread": "nvmf_tgt_poll_group_000", 00:16:27.819 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:27.819 "listen_address": { 00:16:27.819 "trtype": "TCP", 00:16:27.819 "adrfam": "IPv4", 00:16:27.819 "traddr": "10.0.0.2", 00:16:27.819 "trsvcid": "4420" 00:16:27.819 }, 00:16:27.819 "peer_address": { 00:16:27.819 "trtype": "TCP", 00:16:27.819 "adrfam": "IPv4", 00:16:27.819 "traddr": "10.0.0.1", 00:16:27.819 "trsvcid": "54976" 00:16:27.819 }, 00:16:27.819 "auth": { 00:16:27.819 "state": "completed", 00:16:27.819 "digest": "sha512", 00:16:27.819 "dhgroup": "ffdhe2048" 00:16:27.819 } 00:16:27.819 } 00:16:27.819 ]' 00:16:27.819 20:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:27.819 20:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:27.819 20:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:27.819 20:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:27.819 20:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:27.819 20:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.819 20:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.819 20:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.076 20:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MDIyZjJlNzY4OWNhNjQwZjVmMzk2Y2FiZmIyMzI5ZTQwNjgxZTA4ZmE2OGQxN2ZiOTQ2OTQyMDNiYjlhZGE2MgIZE5s=: 00:16:28.076 20:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MDIyZjJlNzY4OWNhNjQwZjVmMzk2Y2FiZmIyMzI5ZTQwNjgxZTA4ZmE2OGQxN2ZiOTQ2OTQyMDNiYjlhZGE2MgIZE5s=: 00:16:29.041 20:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.041 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.041 20:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:29.041 20:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.041 20:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.041 20:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.041 20:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:29.041 20:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:29.041 20:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:29.041 20:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:29.302 20:57:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:16:29.302 20:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:29.302 20:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:29.302 20:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:29.302 20:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:29.302 20:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.302 20:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.302 20:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.302 20:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.302 20:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.302 20:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.302 20:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.302 20:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.870 00:16:29.870 20:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:29.870 20:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.870 20:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:30.129 20:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.129 20:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.129 20:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.129 20:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.129 20:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.130 20:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:30.130 { 00:16:30.130 "cntlid": 113, 00:16:30.130 "qid": 0, 00:16:30.130 "state": "enabled", 00:16:30.130 "thread": "nvmf_tgt_poll_group_000", 00:16:30.130 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:30.130 "listen_address": { 00:16:30.130 "trtype": "TCP", 00:16:30.130 "adrfam": "IPv4", 00:16:30.130 "traddr": "10.0.0.2", 00:16:30.130 "trsvcid": "4420" 00:16:30.130 }, 00:16:30.130 "peer_address": { 00:16:30.130 "trtype": "TCP", 00:16:30.130 "adrfam": "IPv4", 00:16:30.130 "traddr": "10.0.0.1", 00:16:30.130 "trsvcid": "54996" 00:16:30.130 }, 00:16:30.130 "auth": { 00:16:30.130 "state": 
"completed", 00:16:30.130 "digest": "sha512", 00:16:30.130 "dhgroup": "ffdhe3072" 00:16:30.130 } 00:16:30.130 } 00:16:30.130 ]' 00:16:30.130 20:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:30.130 20:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:30.130 20:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:30.130 20:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:30.130 20:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:30.130 20:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.130 20:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.130 20:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.390 20:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTAxYTlmZDUyMzdmMTE4MmVkMDM3MmRhY2UzMTdiNDBhM2U3Nzg5YTdhYTMzMTg5pxnC4w==: --dhchap-ctrl-secret DHHC-1:03:NzcxNjg2YzZjNzIwYjQ1MDE5ZDc0ZWYxNzFjYjQ3MTRkNjMzNTcxYWM5NzY1YTg3OGFkYzQzZjcxNTMwNzMwMh0q/go=: 00:16:30.390 20:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZTAxYTlmZDUyMzdmMTE4MmVkMDM3MmRhY2UzMTdiNDBhM2U3Nzg5YTdhYTMzMTg5pxnC4w==: --dhchap-ctrl-secret 
DHHC-1:03:NzcxNjg2YzZjNzIwYjQ1MDE5ZDc0ZWYxNzFjYjQ3MTRkNjMzNTcxYWM5NzY1YTg3OGFkYzQzZjcxNTMwNzMwMh0q/go=: 00:16:31.329 20:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.329 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.330 20:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:31.330 20:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.330 20:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.330 20:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.330 20:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:31.330 20:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:31.330 20:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:31.588 20:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:16:31.588 20:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:31.588 20:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:31.588 20:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:31.588 20:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:16:31.588 20:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.588 20:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.588 20:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.588 20:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.588 20:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.588 20:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.588 20:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.588 20:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:32.156 00:16:32.156 20:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:32.156 20:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:32.156 20:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.414 20:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.414 20:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.414 20:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.414 20:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.414 20:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.414 20:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:32.414 { 00:16:32.414 "cntlid": 115, 00:16:32.414 "qid": 0, 00:16:32.414 "state": "enabled", 00:16:32.414 "thread": "nvmf_tgt_poll_group_000", 00:16:32.414 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:32.414 "listen_address": { 00:16:32.414 "trtype": "TCP", 00:16:32.414 "adrfam": "IPv4", 00:16:32.414 "traddr": "10.0.0.2", 00:16:32.414 "trsvcid": "4420" 00:16:32.414 }, 00:16:32.414 "peer_address": { 00:16:32.414 "trtype": "TCP", 00:16:32.414 "adrfam": "IPv4", 00:16:32.414 "traddr": "10.0.0.1", 00:16:32.415 "trsvcid": "55022" 00:16:32.415 }, 00:16:32.415 "auth": { 00:16:32.415 "state": "completed", 00:16:32.415 "digest": "sha512", 00:16:32.415 "dhgroup": "ffdhe3072" 00:16:32.415 } 00:16:32.415 } 00:16:32.415 ]' 00:16:32.415 20:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:32.415 20:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:32.415 20:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:32.415 20:57:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:32.415 20:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:32.415 20:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.415 20:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.415 20:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.673 20:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWIxZDY5NzgyZDk4M2VlNzIzYzJiNmQzMjdjNDRkNWP7QUfO: --dhchap-ctrl-secret DHHC-1:02:MzE1OGRmZjA3MTUyMDYzZWM1YTU2MWJmN2JiMzlhYTE0MmQ4NjFlNmI2ODFiNmJkiwp7Tg==: 00:16:32.673 20:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OWIxZDY5NzgyZDk4M2VlNzIzYzJiNmQzMjdjNDRkNWP7QUfO: --dhchap-ctrl-secret DHHC-1:02:MzE1OGRmZjA3MTUyMDYzZWM1YTU2MWJmN2JiMzlhYTE0MmQ4NjFlNmI2ODFiNmJkiwp7Tg==: 00:16:33.610 20:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.610 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.610 20:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:33.610 20:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:33.611 20:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.611 20:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.611 20:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:33.611 20:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:33.611 20:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:33.869 20:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:16:33.869 20:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:33.869 20:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:33.869 20:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:33.869 20:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:33.869 20:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.869 20:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:33.869 20:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.869 20:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:16:34.128 20:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.128 20:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:34.128 20:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:34.128 20:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:34.386 00:16:34.386 20:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:34.386 20:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:34.386 20:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.645 20:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.645 20:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.645 20:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.645 20:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.645 20:57:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.645 20:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:34.645 { 00:16:34.645 "cntlid": 117, 00:16:34.645 "qid": 0, 00:16:34.645 "state": "enabled", 00:16:34.645 "thread": "nvmf_tgt_poll_group_000", 00:16:34.645 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:34.645 "listen_address": { 00:16:34.645 "trtype": "TCP", 00:16:34.645 "adrfam": "IPv4", 00:16:34.645 "traddr": "10.0.0.2", 00:16:34.645 "trsvcid": "4420" 00:16:34.645 }, 00:16:34.645 "peer_address": { 00:16:34.645 "trtype": "TCP", 00:16:34.645 "adrfam": "IPv4", 00:16:34.645 "traddr": "10.0.0.1", 00:16:34.645 "trsvcid": "44434" 00:16:34.645 }, 00:16:34.645 "auth": { 00:16:34.645 "state": "completed", 00:16:34.645 "digest": "sha512", 00:16:34.645 "dhgroup": "ffdhe3072" 00:16:34.645 } 00:16:34.645 } 00:16:34.645 ]' 00:16:34.645 20:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:34.645 20:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:34.645 20:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:34.645 20:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:34.645 20:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:34.645 20:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.645 20:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.645 20:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.214 20:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzIyYjE1YzkzMGNmNTlkNDZkMzdmOGRjNjA4YWM4NGI1ZTZiZjAxMDM1YjM0NmE1uNn0sA==: --dhchap-ctrl-secret DHHC-1:01:YTI4MTk4MzlhMWY4NDU4OWNkYWNiZDRkNmZjZTI0YmEagh6r: 00:16:35.214 20:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YzIyYjE1YzkzMGNmNTlkNDZkMzdmOGRjNjA4YWM4NGI1ZTZiZjAxMDM1YjM0NmE1uNn0sA==: --dhchap-ctrl-secret DHHC-1:01:YTI4MTk4MzlhMWY4NDU4OWNkYWNiZDRkNmZjZTI0YmEagh6r: 00:16:36.154 20:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.154 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.154 20:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:36.154 20:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.154 20:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.154 20:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.154 20:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:36.154 20:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:36.154 20:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:36.413 20:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:16:36.413 20:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:36.413 20:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:36.413 20:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:36.413 20:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:36.413 20:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.413 20:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:36.413 20:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.413 20:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.413 20:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.413 20:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:36.413 20:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:36.413 20:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:36.671 00:16:36.671 20:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:36.671 20:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:36.671 20:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.929 20:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.930 20:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.930 20:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.930 20:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.930 20:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.930 20:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:36.930 { 00:16:36.930 "cntlid": 119, 00:16:36.930 "qid": 0, 00:16:36.930 "state": "enabled", 00:16:36.930 "thread": "nvmf_tgt_poll_group_000", 00:16:36.930 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:36.930 "listen_address": { 00:16:36.930 "trtype": "TCP", 00:16:36.930 "adrfam": "IPv4", 00:16:36.930 "traddr": "10.0.0.2", 00:16:36.930 "trsvcid": "4420" 00:16:36.930 }, 00:16:36.930 "peer_address": { 00:16:36.930 "trtype": "TCP", 00:16:36.930 "adrfam": "IPv4", 00:16:36.930 "traddr": "10.0.0.1", 
00:16:36.930 "trsvcid": "44454" 00:16:36.930 }, 00:16:36.930 "auth": { 00:16:36.930 "state": "completed", 00:16:36.930 "digest": "sha512", 00:16:36.930 "dhgroup": "ffdhe3072" 00:16:36.930 } 00:16:36.930 } 00:16:36.930 ]' 00:16:36.930 20:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:36.930 20:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:36.930 20:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:37.187 20:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:37.188 20:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:37.188 20:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.188 20:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.188 20:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.447 20:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDIyZjJlNzY4OWNhNjQwZjVmMzk2Y2FiZmIyMzI5ZTQwNjgxZTA4ZmE2OGQxN2ZiOTQ2OTQyMDNiYjlhZGE2MgIZE5s=: 00:16:37.447 20:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MDIyZjJlNzY4OWNhNjQwZjVmMzk2Y2FiZmIyMzI5ZTQwNjgxZTA4ZmE2OGQxN2ZiOTQ2OTQyMDNiYjlhZGE2MgIZE5s=: 00:16:38.383 20:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.383 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.383 20:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:38.383 20:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.383 20:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.383 20:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.383 20:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:38.383 20:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:38.383 20:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:38.383 20:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:38.640 20:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:16:38.640 20:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:38.640 20:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:38.640 20:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:38.640 20:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:38.640 20:57:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.640 20:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.640 20:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.641 20:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.641 20:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.641 20:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.641 20:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.641 20:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.899 00:16:39.157 20:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:39.157 20:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.157 20:57:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:39.415 20:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.415 20:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.415 20:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.415 20:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.415 20:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.415 20:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:39.415 { 00:16:39.415 "cntlid": 121, 00:16:39.415 "qid": 0, 00:16:39.415 "state": "enabled", 00:16:39.415 "thread": "nvmf_tgt_poll_group_000", 00:16:39.415 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:39.415 "listen_address": { 00:16:39.415 "trtype": "TCP", 00:16:39.415 "adrfam": "IPv4", 00:16:39.415 "traddr": "10.0.0.2", 00:16:39.415 "trsvcid": "4420" 00:16:39.415 }, 00:16:39.415 "peer_address": { 00:16:39.415 "trtype": "TCP", 00:16:39.415 "adrfam": "IPv4", 00:16:39.415 "traddr": "10.0.0.1", 00:16:39.415 "trsvcid": "44476" 00:16:39.415 }, 00:16:39.415 "auth": { 00:16:39.415 "state": "completed", 00:16:39.415 "digest": "sha512", 00:16:39.415 "dhgroup": "ffdhe4096" 00:16:39.415 } 00:16:39.415 } 00:16:39.415 ]' 00:16:39.415 20:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:39.415 20:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:39.415 20:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:39.415 20:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:39.415 20:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:39.415 20:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.415 20:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.415 20:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.675 20:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTAxYTlmZDUyMzdmMTE4MmVkMDM3MmRhY2UzMTdiNDBhM2U3Nzg5YTdhYTMzMTg5pxnC4w==: --dhchap-ctrl-secret DHHC-1:03:NzcxNjg2YzZjNzIwYjQ1MDE5ZDc0ZWYxNzFjYjQ3MTRkNjMzNTcxYWM5NzY1YTg3OGFkYzQzZjcxNTMwNzMwMh0q/go=: 00:16:39.675 20:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZTAxYTlmZDUyMzdmMTE4MmVkMDM3MmRhY2UzMTdiNDBhM2U3Nzg5YTdhYTMzMTg5pxnC4w==: --dhchap-ctrl-secret DHHC-1:03:NzcxNjg2YzZjNzIwYjQ1MDE5ZDc0ZWYxNzFjYjQ3MTRkNjMzNTcxYWM5NzY1YTg3OGFkYzQzZjcxNTMwNzMwMh0q/go=: 00:16:40.609 20:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.609 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.609 20:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:40.609 20:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.609 20:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.609 20:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.609 20:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:40.609 20:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:40.609 20:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:40.867 20:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:16:40.867 20:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:40.867 20:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:40.867 20:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:40.867 20:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:40.867 20:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.867 20:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.867 20:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.867 20:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:40.867 20:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.867 20:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.867 20:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.867 20:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.434 00:16:41.434 20:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:41.434 20:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:41.434 20:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.692 20:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.692 20:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.692 20:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.692 20:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.692 
20:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.692 20:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:41.692 { 00:16:41.692 "cntlid": 123, 00:16:41.692 "qid": 0, 00:16:41.692 "state": "enabled", 00:16:41.692 "thread": "nvmf_tgt_poll_group_000", 00:16:41.692 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:41.692 "listen_address": { 00:16:41.692 "trtype": "TCP", 00:16:41.692 "adrfam": "IPv4", 00:16:41.692 "traddr": "10.0.0.2", 00:16:41.692 "trsvcid": "4420" 00:16:41.692 }, 00:16:41.692 "peer_address": { 00:16:41.692 "trtype": "TCP", 00:16:41.692 "adrfam": "IPv4", 00:16:41.692 "traddr": "10.0.0.1", 00:16:41.692 "trsvcid": "44484" 00:16:41.692 }, 00:16:41.692 "auth": { 00:16:41.692 "state": "completed", 00:16:41.692 "digest": "sha512", 00:16:41.692 "dhgroup": "ffdhe4096" 00:16:41.692 } 00:16:41.692 } 00:16:41.692 ]' 00:16:41.692 20:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:41.692 20:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:41.692 20:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:41.692 20:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:41.692 20:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:41.950 20:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.950 20:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.951 20:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.209 20:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWIxZDY5NzgyZDk4M2VlNzIzYzJiNmQzMjdjNDRkNWP7QUfO: --dhchap-ctrl-secret DHHC-1:02:MzE1OGRmZjA3MTUyMDYzZWM1YTU2MWJmN2JiMzlhYTE0MmQ4NjFlNmI2ODFiNmJkiwp7Tg==: 00:16:42.209 20:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OWIxZDY5NzgyZDk4M2VlNzIzYzJiNmQzMjdjNDRkNWP7QUfO: --dhchap-ctrl-secret DHHC-1:02:MzE1OGRmZjA3MTUyMDYzZWM1YTU2MWJmN2JiMzlhYTE0MmQ4NjFlNmI2ODFiNmJkiwp7Tg==: 00:16:43.145 20:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.145 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.145 20:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:43.145 20:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.145 20:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.146 20:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.146 20:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:43.146 20:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:43.146 20:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:43.404 20:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:16:43.404 20:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:43.404 20:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:43.404 20:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:43.404 20:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:43.404 20:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.404 20:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.404 20:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.404 20:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.404 20:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.404 20:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.405 20:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.405 20:57:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.971 00:16:43.971 20:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:43.971 20:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:43.971 20:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.230 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.230 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.230 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.230 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.230 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.230 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:44.230 { 00:16:44.230 "cntlid": 125, 00:16:44.230 "qid": 0, 00:16:44.230 "state": "enabled", 00:16:44.230 "thread": "nvmf_tgt_poll_group_000", 00:16:44.230 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:44.230 "listen_address": { 00:16:44.230 "trtype": "TCP", 00:16:44.230 "adrfam": "IPv4", 00:16:44.230 "traddr": "10.0.0.2", 00:16:44.230 "trsvcid": "4420" 00:16:44.230 }, 00:16:44.230 "peer_address": { 
00:16:44.230 "trtype": "TCP", 00:16:44.230 "adrfam": "IPv4", 00:16:44.230 "traddr": "10.0.0.1", 00:16:44.230 "trsvcid": "41556" 00:16:44.230 }, 00:16:44.230 "auth": { 00:16:44.230 "state": "completed", 00:16:44.230 "digest": "sha512", 00:16:44.230 "dhgroup": "ffdhe4096" 00:16:44.230 } 00:16:44.230 } 00:16:44.230 ]' 00:16:44.230 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:44.230 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:44.230 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:44.230 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:44.230 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:44.230 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.230 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.230 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.799 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzIyYjE1YzkzMGNmNTlkNDZkMzdmOGRjNjA4YWM4NGI1ZTZiZjAxMDM1YjM0NmE1uNn0sA==: --dhchap-ctrl-secret DHHC-1:01:YTI4MTk4MzlhMWY4NDU4OWNkYWNiZDRkNmZjZTI0YmEagh6r: 00:16:44.799 20:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret 
DHHC-1:02:YzIyYjE1YzkzMGNmNTlkNDZkMzdmOGRjNjA4YWM4NGI1ZTZiZjAxMDM1YjM0NmE1uNn0sA==: --dhchap-ctrl-secret DHHC-1:01:YTI4MTk4MzlhMWY4NDU4OWNkYWNiZDRkNmZjZTI0YmEagh6r: 00:16:45.736 20:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.736 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.736 20:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:45.736 20:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.736 20:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.736 20:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.736 20:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:45.736 20:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:45.736 20:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:45.994 20:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:16:45.994 20:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:45.994 20:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:45.994 20:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:45.994 20:57:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:45.994 20:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.994 20:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:45.994 20:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.994 20:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.994 20:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.994 20:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:45.994 20:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:45.994 20:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:46.252 00:16:46.511 20:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:46.511 20:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.511 20:57:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:46.770 20:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.770 20:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.770 20:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.770 20:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.770 20:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.770 20:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:46.770 { 00:16:46.770 "cntlid": 127, 00:16:46.770 "qid": 0, 00:16:46.770 "state": "enabled", 00:16:46.770 "thread": "nvmf_tgt_poll_group_000", 00:16:46.770 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:46.770 "listen_address": { 00:16:46.770 "trtype": "TCP", 00:16:46.770 "adrfam": "IPv4", 00:16:46.770 "traddr": "10.0.0.2", 00:16:46.770 "trsvcid": "4420" 00:16:46.770 }, 00:16:46.770 "peer_address": { 00:16:46.770 "trtype": "TCP", 00:16:46.770 "adrfam": "IPv4", 00:16:46.770 "traddr": "10.0.0.1", 00:16:46.770 "trsvcid": "41594" 00:16:46.770 }, 00:16:46.770 "auth": { 00:16:46.770 "state": "completed", 00:16:46.770 "digest": "sha512", 00:16:46.770 "dhgroup": "ffdhe4096" 00:16:46.770 } 00:16:46.770 } 00:16:46.770 ]' 00:16:46.770 20:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:46.770 20:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:46.770 20:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:46.770 20:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:46.770 20:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:46.770 20:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.770 20:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.770 20:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.030 20:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDIyZjJlNzY4OWNhNjQwZjVmMzk2Y2FiZmIyMzI5ZTQwNjgxZTA4ZmE2OGQxN2ZiOTQ2OTQyMDNiYjlhZGE2MgIZE5s=: 00:16:47.030 20:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MDIyZjJlNzY4OWNhNjQwZjVmMzk2Y2FiZmIyMzI5ZTQwNjgxZTA4ZmE2OGQxN2ZiOTQ2OTQyMDNiYjlhZGE2MgIZE5s=: 00:16:48.408 20:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.408 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.408 20:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:48.408 20:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.408 20:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.408 20:57:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.408 20:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:48.408 20:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:48.408 20:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:48.408 20:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:48.408 20:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:16:48.408 20:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:48.408 20:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:48.408 20:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:48.408 20:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:48.408 20:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.408 20:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.408 20:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.408 20:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.408 
20:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.408 20:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.408 20:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.408 20:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.976 00:16:48.976 20:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:48.976 20:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:48.976 20:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.235 20:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.235 20:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.235 20:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.235 20:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.235 20:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.235 20:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:49.235 { 00:16:49.235 "cntlid": 129, 00:16:49.235 "qid": 0, 00:16:49.235 "state": "enabled", 00:16:49.235 "thread": "nvmf_tgt_poll_group_000", 00:16:49.235 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:49.235 "listen_address": { 00:16:49.235 "trtype": "TCP", 00:16:49.235 "adrfam": "IPv4", 00:16:49.235 "traddr": "10.0.0.2", 00:16:49.235 "trsvcid": "4420" 00:16:49.235 }, 00:16:49.235 "peer_address": { 00:16:49.235 "trtype": "TCP", 00:16:49.235 "adrfam": "IPv4", 00:16:49.235 "traddr": "10.0.0.1", 00:16:49.235 "trsvcid": "41620" 00:16:49.235 }, 00:16:49.235 "auth": { 00:16:49.235 "state": "completed", 00:16:49.235 "digest": "sha512", 00:16:49.235 "dhgroup": "ffdhe6144" 00:16:49.235 } 00:16:49.235 } 00:16:49.235 ]' 00:16:49.235 20:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:49.494 20:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:49.494 20:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:49.494 20:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:49.494 20:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:49.494 20:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.494 20:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.494 20:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
00:16:49.752 20:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTAxYTlmZDUyMzdmMTE4MmVkMDM3MmRhY2UzMTdiNDBhM2U3Nzg5YTdhYTMzMTg5pxnC4w==: --dhchap-ctrl-secret DHHC-1:03:NzcxNjg2YzZjNzIwYjQ1MDE5ZDc0ZWYxNzFjYjQ3MTRkNjMzNTcxYWM5NzY1YTg3OGFkYzQzZjcxNTMwNzMwMh0q/go=: 00:16:49.752 20:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZTAxYTlmZDUyMzdmMTE4MmVkMDM3MmRhY2UzMTdiNDBhM2U3Nzg5YTdhYTMzMTg5pxnC4w==: --dhchap-ctrl-secret DHHC-1:03:NzcxNjg2YzZjNzIwYjQ1MDE5ZDc0ZWYxNzFjYjQ3MTRkNjMzNTcxYWM5NzY1YTg3OGFkYzQzZjcxNTMwNzMwMh0q/go=: 00:16:50.690 20:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.690 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.690 20:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:50.690 20:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.690 20:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.690 20:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.690 20:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:50.690 20:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:50.690 20:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:50.949 20:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:16:50.949 20:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:50.949 20:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:50.949 20:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:50.949 20:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:50.949 20:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.949 20:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.949 20:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.949 20:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.949 20:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.949 20:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.949 20:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.949 20:57:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.516 00:16:51.516 20:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:51.516 20:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.516 20:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:51.775 20:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.775 20:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.775 20:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.775 20:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.033 20:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.033 20:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:52.033 { 00:16:52.033 "cntlid": 131, 00:16:52.033 "qid": 0, 00:16:52.033 "state": "enabled", 00:16:52.033 "thread": "nvmf_tgt_poll_group_000", 00:16:52.033 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:52.033 "listen_address": { 00:16:52.033 "trtype": "TCP", 00:16:52.033 "adrfam": "IPv4", 00:16:52.033 "traddr": "10.0.0.2", 00:16:52.033 "trsvcid": "4420" 00:16:52.033 }, 00:16:52.033 "peer_address": { 
00:16:52.033 "trtype": "TCP", 00:16:52.033 "adrfam": "IPv4", 00:16:52.033 "traddr": "10.0.0.1", 00:16:52.033 "trsvcid": "41652" 00:16:52.033 }, 00:16:52.033 "auth": { 00:16:52.033 "state": "completed", 00:16:52.033 "digest": "sha512", 00:16:52.033 "dhgroup": "ffdhe6144" 00:16:52.033 } 00:16:52.033 } 00:16:52.033 ]' 00:16:52.033 20:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:52.033 20:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:52.033 20:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:52.033 20:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:52.033 20:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:52.033 20:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.033 20:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.033 20:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.291 20:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWIxZDY5NzgyZDk4M2VlNzIzYzJiNmQzMjdjNDRkNWP7QUfO: --dhchap-ctrl-secret DHHC-1:02:MzE1OGRmZjA3MTUyMDYzZWM1YTU2MWJmN2JiMzlhYTE0MmQ4NjFlNmI2ODFiNmJkiwp7Tg==: 00:16:52.291 20:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret 
DHHC-1:01:OWIxZDY5NzgyZDk4M2VlNzIzYzJiNmQzMjdjNDRkNWP7QUfO: --dhchap-ctrl-secret DHHC-1:02:MzE1OGRmZjA3MTUyMDYzZWM1YTU2MWJmN2JiMzlhYTE0MmQ4NjFlNmI2ODFiNmJkiwp7Tg==: 00:16:53.224 20:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.224 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.224 20:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:53.224 20:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.224 20:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.224 20:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.224 20:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:53.224 20:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:53.224 20:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:53.480 20:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:16:53.480 20:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:53.480 20:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:53.480 20:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:53.480 20:57:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:53.480 20:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.480 20:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.480 20:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.480 20:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.480 20:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.480 20:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.480 20:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.480 20:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:54.051 00:16:54.051 20:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:54.051 20:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.051 20:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:54.620 20:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.620 20:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.620 20:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.620 20:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.620 20:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.620 20:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:54.620 { 00:16:54.620 "cntlid": 133, 00:16:54.620 "qid": 0, 00:16:54.620 "state": "enabled", 00:16:54.620 "thread": "nvmf_tgt_poll_group_000", 00:16:54.620 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:54.620 "listen_address": { 00:16:54.620 "trtype": "TCP", 00:16:54.620 "adrfam": "IPv4", 00:16:54.620 "traddr": "10.0.0.2", 00:16:54.620 "trsvcid": "4420" 00:16:54.620 }, 00:16:54.620 "peer_address": { 00:16:54.620 "trtype": "TCP", 00:16:54.620 "adrfam": "IPv4", 00:16:54.620 "traddr": "10.0.0.1", 00:16:54.620 "trsvcid": "49708" 00:16:54.620 }, 00:16:54.620 "auth": { 00:16:54.620 "state": "completed", 00:16:54.620 "digest": "sha512", 00:16:54.620 "dhgroup": "ffdhe6144" 00:16:54.620 } 00:16:54.620 } 00:16:54.620 ]' 00:16:54.620 20:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:54.620 20:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:54.620 20:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 
00:16:54.620 20:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:54.620 20:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:54.620 20:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.620 20:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.621 20:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.915 20:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzIyYjE1YzkzMGNmNTlkNDZkMzdmOGRjNjA4YWM4NGI1ZTZiZjAxMDM1YjM0NmE1uNn0sA==: --dhchap-ctrl-secret DHHC-1:01:YTI4MTk4MzlhMWY4NDU4OWNkYWNiZDRkNmZjZTI0YmEagh6r: 00:16:54.915 20:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YzIyYjE1YzkzMGNmNTlkNDZkMzdmOGRjNjA4YWM4NGI1ZTZiZjAxMDM1YjM0NmE1uNn0sA==: --dhchap-ctrl-secret DHHC-1:01:YTI4MTk4MzlhMWY4NDU4OWNkYWNiZDRkNmZjZTI0YmEagh6r: 00:16:55.849 20:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.849 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.849 20:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:55.849 20:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.849 20:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.849 20:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.849 20:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:55.849 20:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:55.849 20:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:56.108 20:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:16:56.108 20:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:56.108 20:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:56.108 20:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:56.108 20:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:56.108 20:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.108 20:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:56.108 20:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.108 20:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:16:56.108 20:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.108 20:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:56.108 20:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:56.108 20:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:56.676 00:16:56.676 20:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:56.676 20:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:56.676 20:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.936 20:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.936 20:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.936 20:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.936 20:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.936 20:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:56.936 20:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:56.936 { 00:16:56.936 "cntlid": 135, 00:16:56.936 "qid": 0, 00:16:56.936 "state": "enabled", 00:16:56.936 "thread": "nvmf_tgt_poll_group_000", 00:16:56.936 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:56.936 "listen_address": { 00:16:56.936 "trtype": "TCP", 00:16:56.936 "adrfam": "IPv4", 00:16:56.936 "traddr": "10.0.0.2", 00:16:56.936 "trsvcid": "4420" 00:16:56.936 }, 00:16:56.936 "peer_address": { 00:16:56.936 "trtype": "TCP", 00:16:56.936 "adrfam": "IPv4", 00:16:56.936 "traddr": "10.0.0.1", 00:16:56.936 "trsvcid": "49742" 00:16:56.936 }, 00:16:56.936 "auth": { 00:16:56.936 "state": "completed", 00:16:56.936 "digest": "sha512", 00:16:56.936 "dhgroup": "ffdhe6144" 00:16:56.936 } 00:16:56.936 } 00:16:56.936 ]' 00:16:56.936 20:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:56.936 20:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:56.936 20:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:56.936 20:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:56.936 20:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:57.194 20:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.194 20:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.194 20:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.453 20:57:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDIyZjJlNzY4OWNhNjQwZjVmMzk2Y2FiZmIyMzI5ZTQwNjgxZTA4ZmE2OGQxN2ZiOTQ2OTQyMDNiYjlhZGE2MgIZE5s=: 00:16:57.453 20:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MDIyZjJlNzY4OWNhNjQwZjVmMzk2Y2FiZmIyMzI5ZTQwNjgxZTA4ZmE2OGQxN2ZiOTQ2OTQyMDNiYjlhZGE2MgIZE5s=: 00:16:58.389 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.389 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.389 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:58.389 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.389 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.389 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.389 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:58.389 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:58.389 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:58.389 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:58.648 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:16:58.648 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:58.648 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:58.648 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:58.648 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:58.648 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.648 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.648 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.648 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.648 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.648 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.648 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.648 20:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.583 00:16:59.583 20:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:59.583 20:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:59.583 20:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.842 20:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.842 20:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.842 20:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.842 20:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.842 20:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.842 20:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:59.842 { 00:16:59.842 "cntlid": 137, 00:16:59.842 "qid": 0, 00:16:59.842 "state": "enabled", 00:16:59.842 "thread": "nvmf_tgt_poll_group_000", 00:16:59.842 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:59.842 "listen_address": { 00:16:59.842 "trtype": "TCP", 00:16:59.842 "adrfam": "IPv4", 00:16:59.842 "traddr": "10.0.0.2", 00:16:59.842 "trsvcid": "4420" 00:16:59.842 }, 00:16:59.842 "peer_address": { 00:16:59.842 "trtype": "TCP", 00:16:59.842 "adrfam": "IPv4", 
00:16:59.842 "traddr": "10.0.0.1", 00:16:59.842 "trsvcid": "49776" 00:16:59.842 }, 00:16:59.842 "auth": { 00:16:59.842 "state": "completed", 00:16:59.842 "digest": "sha512", 00:16:59.842 "dhgroup": "ffdhe8192" 00:16:59.842 } 00:16:59.842 } 00:16:59.842 ]' 00:16:59.842 20:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:59.842 20:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:59.842 20:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:59.842 20:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:59.842 20:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:00.101 20:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.101 20:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.101 20:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.360 20:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTAxYTlmZDUyMzdmMTE4MmVkMDM3MmRhY2UzMTdiNDBhM2U3Nzg5YTdhYTMzMTg5pxnC4w==: --dhchap-ctrl-secret DHHC-1:03:NzcxNjg2YzZjNzIwYjQ1MDE5ZDc0ZWYxNzFjYjQ3MTRkNjMzNTcxYWM5NzY1YTg3OGFkYzQzZjcxNTMwNzMwMh0q/go=: 00:17:00.361 20:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret 
DHHC-1:00:ZTAxYTlmZDUyMzdmMTE4MmVkMDM3MmRhY2UzMTdiNDBhM2U3Nzg5YTdhYTMzMTg5pxnC4w==: --dhchap-ctrl-secret DHHC-1:03:NzcxNjg2YzZjNzIwYjQ1MDE5ZDc0ZWYxNzFjYjQ3MTRkNjMzNTcxYWM5NzY1YTg3OGFkYzQzZjcxNTMwNzMwMh0q/go=: 00:17:01.296 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.296 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.296 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:01.296 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.296 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.296 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.296 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:01.296 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:01.296 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:01.556 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:17:01.556 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:01.556 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:01.556 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:17:01.556 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:01.556 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.556 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.556 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.556 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.556 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.556 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.556 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.556 20:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.492 00:17:02.492 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:02.492 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:02.492 
20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.751 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.751 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.752 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.752 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.752 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.752 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:02.752 { 00:17:02.752 "cntlid": 139, 00:17:02.752 "qid": 0, 00:17:02.752 "state": "enabled", 00:17:02.752 "thread": "nvmf_tgt_poll_group_000", 00:17:02.752 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:02.752 "listen_address": { 00:17:02.752 "trtype": "TCP", 00:17:02.752 "adrfam": "IPv4", 00:17:02.752 "traddr": "10.0.0.2", 00:17:02.752 "trsvcid": "4420" 00:17:02.752 }, 00:17:02.752 "peer_address": { 00:17:02.752 "trtype": "TCP", 00:17:02.752 "adrfam": "IPv4", 00:17:02.752 "traddr": "10.0.0.1", 00:17:02.752 "trsvcid": "49790" 00:17:02.752 }, 00:17:02.752 "auth": { 00:17:02.752 "state": "completed", 00:17:02.752 "digest": "sha512", 00:17:02.752 "dhgroup": "ffdhe8192" 00:17:02.752 } 00:17:02.752 } 00:17:02.752 ]' 00:17:02.752 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:03.010 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:03.010 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:03.010 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:03.010 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:03.010 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.010 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.010 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.269 20:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWIxZDY5NzgyZDk4M2VlNzIzYzJiNmQzMjdjNDRkNWP7QUfO: --dhchap-ctrl-secret DHHC-1:02:MzE1OGRmZjA3MTUyMDYzZWM1YTU2MWJmN2JiMzlhYTE0MmQ4NjFlNmI2ODFiNmJkiwp7Tg==: 00:17:03.269 20:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OWIxZDY5NzgyZDk4M2VlNzIzYzJiNmQzMjdjNDRkNWP7QUfO: --dhchap-ctrl-secret DHHC-1:02:MzE1OGRmZjA3MTUyMDYzZWM1YTU2MWJmN2JiMzlhYTE0MmQ4NjFlNmI2ODFiNmJkiwp7Tg==: 00:17:04.203 20:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.203 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.203 20:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:04.203 20:57:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.203 20:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.203 20:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.203 20:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:04.203 20:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:04.203 20:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:04.462 20:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:17:04.462 20:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:04.462 20:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:04.462 20:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:04.462 20:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:04.462 20:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:04.462 20:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:04.462 20:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.462 20:57:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.462 20:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.462 20:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:04.462 20:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:04.462 20:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.398 00:17:05.398 20:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:05.398 20:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:05.398 20:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.657 20:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.657 20:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:05.657 20:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.657 20:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:05.657 20:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.657 20:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:05.657 { 00:17:05.657 "cntlid": 141, 00:17:05.657 "qid": 0, 00:17:05.657 "state": "enabled", 00:17:05.657 "thread": "nvmf_tgt_poll_group_000", 00:17:05.657 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:05.657 "listen_address": { 00:17:05.657 "trtype": "TCP", 00:17:05.657 "adrfam": "IPv4", 00:17:05.657 "traddr": "10.0.0.2", 00:17:05.657 "trsvcid": "4420" 00:17:05.657 }, 00:17:05.657 "peer_address": { 00:17:05.657 "trtype": "TCP", 00:17:05.657 "adrfam": "IPv4", 00:17:05.657 "traddr": "10.0.0.1", 00:17:05.657 "trsvcid": "55980" 00:17:05.657 }, 00:17:05.657 "auth": { 00:17:05.657 "state": "completed", 00:17:05.657 "digest": "sha512", 00:17:05.657 "dhgroup": "ffdhe8192" 00:17:05.657 } 00:17:05.657 } 00:17:05.657 ]' 00:17:05.657 20:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:05.657 20:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:05.657 20:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:05.657 20:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:05.657 20:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:05.916 20:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.916 20:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.916 20:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.174 20:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzIyYjE1YzkzMGNmNTlkNDZkMzdmOGRjNjA4YWM4NGI1ZTZiZjAxMDM1YjM0NmE1uNn0sA==: --dhchap-ctrl-secret DHHC-1:01:YTI4MTk4MzlhMWY4NDU4OWNkYWNiZDRkNmZjZTI0YmEagh6r: 00:17:06.174 20:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YzIyYjE1YzkzMGNmNTlkNDZkMzdmOGRjNjA4YWM4NGI1ZTZiZjAxMDM1YjM0NmE1uNn0sA==: --dhchap-ctrl-secret DHHC-1:01:YTI4MTk4MzlhMWY4NDU4OWNkYWNiZDRkNmZjZTI0YmEagh6r: 00:17:07.109 20:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.109 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.109 20:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:07.109 20:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.109 20:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.109 20:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.109 20:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:07.109 20:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:07.109 20:57:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:07.368 20:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:17:07.368 20:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:07.368 20:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:07.368 20:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:07.368 20:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:07.368 20:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.368 20:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:17:07.368 20:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.368 20:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.368 20:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.368 20:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:07.368 20:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:07.368 20:57:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:08.304 00:17:08.304 20:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:08.304 20:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:08.304 20:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.563 20:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.563 20:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.563 20:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.563 20:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.563 20:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.563 20:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:08.563 { 00:17:08.563 "cntlid": 143, 00:17:08.563 "qid": 0, 00:17:08.563 "state": "enabled", 00:17:08.563 "thread": "nvmf_tgt_poll_group_000", 00:17:08.563 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:08.563 "listen_address": { 00:17:08.563 "trtype": "TCP", 00:17:08.563 "adrfam": "IPv4", 00:17:08.563 "traddr": "10.0.0.2", 00:17:08.563 "trsvcid": "4420" 00:17:08.563 }, 00:17:08.563 "peer_address": { 00:17:08.563 "trtype": 
"TCP", 00:17:08.563 "adrfam": "IPv4", 00:17:08.563 "traddr": "10.0.0.1", 00:17:08.563 "trsvcid": "55996" 00:17:08.563 }, 00:17:08.563 "auth": { 00:17:08.564 "state": "completed", 00:17:08.564 "digest": "sha512", 00:17:08.564 "dhgroup": "ffdhe8192" 00:17:08.564 } 00:17:08.564 } 00:17:08.564 ]' 00:17:08.564 20:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:08.564 20:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:08.564 20:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:08.564 20:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:08.564 20:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:08.564 20:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:08.564 20:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.564 20:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.130 20:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDIyZjJlNzY4OWNhNjQwZjVmMzk2Y2FiZmIyMzI5ZTQwNjgxZTA4ZmE2OGQxN2ZiOTQ2OTQyMDNiYjlhZGE2MgIZE5s=: 00:17:09.130 20:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MDIyZjJlNzY4OWNhNjQwZjVmMzk2Y2FiZmIyMzI5ZTQwNjgxZTA4ZmE2OGQxN2ZiOTQ2OTQyMDNiYjlhZGE2MgIZE5s=: 
00:17:10.064 20:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.064 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.064 20:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:10.064 20:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.064 20:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.064 20:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.064 20:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:10.064 20:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:17:10.064 20:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:10.064 20:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:10.064 20:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:10.064 20:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:10.323 20:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:17:10.323 20:58:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:10.324 20:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:10.324 20:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:10.324 20:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:10.324 20:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.324 20:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.324 20:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.324 20:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.324 20:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.324 20:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.324 20:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.324 20:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.260 00:17:11.260 20:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:11.260 20:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:11.260 20:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.519 20:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.519 20:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.519 20:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.519 20:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.519 20:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.519 20:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:11.519 { 00:17:11.519 "cntlid": 145, 00:17:11.519 "qid": 0, 00:17:11.519 "state": "enabled", 00:17:11.519 "thread": "nvmf_tgt_poll_group_000", 00:17:11.519 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:11.519 "listen_address": { 00:17:11.519 "trtype": "TCP", 00:17:11.519 "adrfam": "IPv4", 00:17:11.519 "traddr": "10.0.0.2", 00:17:11.519 "trsvcid": "4420" 00:17:11.519 }, 00:17:11.519 "peer_address": { 00:17:11.519 "trtype": "TCP", 00:17:11.519 "adrfam": "IPv4", 00:17:11.519 "traddr": "10.0.0.1", 00:17:11.519 "trsvcid": "56016" 00:17:11.519 }, 00:17:11.519 "auth": { 00:17:11.519 "state": "completed", 00:17:11.519 "digest": "sha512", 00:17:11.519 "dhgroup": "ffdhe8192" 00:17:11.519 } 00:17:11.519 } 00:17:11.519 ]' 
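For reference, the `jq` checks that follow in the trace (auth.sh lines 75-77) verify the negotiated digest, DH group, and auth state inside the qpair descriptor printed above. A minimal Python sketch of the same validation, run against a hypothetical descriptor trimmed down from the one in the log (field names taken from the `nvmf_subsystem_get_qpairs` output above):

```python
import json

# Hypothetical sample mirroring the qpair descriptor printed by
# `rpc_cmd nvmf_subsystem_get_qpairs` in the trace above (trimmed).
qpairs = json.loads("""
[
  {
    "cntlid": 145,
    "qid": 0,
    "state": "enabled",
    "auth": { "state": "completed", "digest": "sha512", "dhgroup": "ffdhe8192" }
  }
]
""")

# The same three checks auth.sh performs with jq:
#   .[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state
auth = qpairs[0]["auth"]
assert auth["digest"] == "sha512"
assert auth["dhgroup"] == "ffdhe8192"
assert auth["state"] == "completed"
```

The test script does the equivalent with `jq -r '.[0].auth.digest'` and a bash `[[ ... ]]` comparison; the point is that a completed DH-HMAC-CHAP handshake is visible per-qpair in the target's JSON state.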
00:17:11.519 20:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:11.519 20:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:11.519 20:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:11.519 20:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:11.519 20:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:11.519 20:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.519 20:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.519 20:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.777 20:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTAxYTlmZDUyMzdmMTE4MmVkMDM3MmRhY2UzMTdiNDBhM2U3Nzg5YTdhYTMzMTg5pxnC4w==: --dhchap-ctrl-secret DHHC-1:03:NzcxNjg2YzZjNzIwYjQ1MDE5ZDc0ZWYxNzFjYjQ3MTRkNjMzNTcxYWM5NzY1YTg3OGFkYzQzZjcxNTMwNzMwMh0q/go=: 00:17:11.777 20:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZTAxYTlmZDUyMzdmMTE4MmVkMDM3MmRhY2UzMTdiNDBhM2U3Nzg5YTdhYTMzMTg5pxnC4w==: --dhchap-ctrl-secret DHHC-1:03:NzcxNjg2YzZjNzIwYjQ1MDE5ZDc0ZWYxNzFjYjQ3MTRkNjMzNTcxYWM5NzY1YTg3OGFkYzQzZjcxNTMwNzMwMh0q/go=: 00:17:12.713 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.713 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.713 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:12.713 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.713 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.713 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.713 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:17:12.713 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.713 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.972 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.972 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:17:12.972 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:12.972 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:17:12.972 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:12.972 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:12.972 20:58:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:12.972 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:12.972 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:17:12.972 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:12.972 20:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:13.909 request: 00:17:13.909 { 00:17:13.909 "name": "nvme0", 00:17:13.909 "trtype": "tcp", 00:17:13.909 "traddr": "10.0.0.2", 00:17:13.909 "adrfam": "ipv4", 00:17:13.909 "trsvcid": "4420", 00:17:13.909 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:13.909 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:13.909 "prchk_reftag": false, 00:17:13.910 "prchk_guard": false, 00:17:13.910 "hdgst": false, 00:17:13.910 "ddgst": false, 00:17:13.910 "dhchap_key": "key2", 00:17:13.910 "allow_unrecognized_csi": false, 00:17:13.910 "method": "bdev_nvme_attach_controller", 00:17:13.910 "req_id": 1 00:17:13.910 } 00:17:13.910 Got JSON-RPC error response 00:17:13.910 response: 00:17:13.910 { 00:17:13.910 "code": -5, 00:17:13.910 "message": "Input/output error" 00:17:13.910 } 00:17:13.910 20:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:13.910 20:58:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:13.910 20:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:13.910 20:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:13.910 20:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:13.910 20:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.910 20:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.910 20:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.910 20:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.910 20:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.910 20:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.910 20:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.910 20:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:13.910 20:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:13.910 20:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:13.910 20:58:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:13.910 20:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:13.910 20:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:13.910 20:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:13.910 20:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:13.910 20:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:13.910 20:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:14.477 request: 00:17:14.477 { 00:17:14.477 "name": "nvme0", 00:17:14.477 "trtype": "tcp", 00:17:14.477 "traddr": "10.0.0.2", 00:17:14.477 "adrfam": "ipv4", 00:17:14.477 "trsvcid": "4420", 00:17:14.477 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:14.477 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:14.477 "prchk_reftag": false, 00:17:14.477 "prchk_guard": false, 00:17:14.477 "hdgst": false, 00:17:14.477 "ddgst": false, 00:17:14.477 "dhchap_key": "key1", 00:17:14.477 "dhchap_ctrlr_key": "ckey2", 00:17:14.477 "allow_unrecognized_csi": false, 00:17:14.477 "method": 
"bdev_nvme_attach_controller", 00:17:14.477 "req_id": 1 00:17:14.477 } 00:17:14.477 Got JSON-RPC error response 00:17:14.477 response: 00:17:14.477 { 00:17:14.477 "code": -5, 00:17:14.477 "message": "Input/output error" 00:17:14.477 } 00:17:14.477 20:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:14.477 20:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:14.477 20:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:14.477 20:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:14.477 20:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:14.477 20:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.477 20:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.477 20:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.477 20:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:17:14.477 20:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.477 20:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.736 20:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.736 20:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:17:14.736 20:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:14.736 20:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:14.736 20:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:14.736 20:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:14.736 20:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:14.736 20:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:14.736 20:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:14.736 20:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:14.736 20:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.673 request: 00:17:15.673 { 00:17:15.673 "name": "nvme0", 00:17:15.673 "trtype": "tcp", 00:17:15.673 "traddr": "10.0.0.2", 00:17:15.673 "adrfam": "ipv4", 00:17:15.673 "trsvcid": "4420", 00:17:15.673 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:15.673 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:15.673 "prchk_reftag": false, 00:17:15.673 "prchk_guard": false, 00:17:15.673 "hdgst": false, 00:17:15.673 "ddgst": false, 00:17:15.673 "dhchap_key": "key1", 00:17:15.673 "dhchap_ctrlr_key": "ckey1", 00:17:15.673 "allow_unrecognized_csi": false, 00:17:15.673 "method": "bdev_nvme_attach_controller", 00:17:15.673 "req_id": 1 00:17:15.673 } 00:17:15.673 Got JSON-RPC error response 00:17:15.673 response: 00:17:15.673 { 00:17:15.673 "code": -5, 00:17:15.673 "message": "Input/output error" 00:17:15.673 } 00:17:15.673 20:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:15.673 20:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:15.673 20:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:15.673 20:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:15.673 20:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:15.673 20:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.673 20:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.673 20:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.674 20:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 3957733 00:17:15.674 20:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3957733 ']' 00:17:15.674 20:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3957733 00:17:15.674 20:58:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:15.674 20:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:15.674 20:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3957733 00:17:15.674 20:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:15.674 20:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:15.674 20:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3957733' 00:17:15.674 killing process with pid 3957733 00:17:15.674 20:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3957733 00:17:15.674 20:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3957733 00:17:15.674 20:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:17:15.674 20:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:15.674 20:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:15.674 20:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.674 20:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3981297 00:17:15.674 20:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:17:15.674 20:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3981297 00:17:15.674 20:58:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3981297 ']' 00:17:15.674 20:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:15.674 20:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:15.674 20:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:15.674 20:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:15.674 20:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.933 20:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:15.933 20:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:15.933 20:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:15.933 20:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:15.933 20:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.192 20:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:16.192 20:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:16.192 20:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 3981297 00:17:16.192 20:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3981297 ']' 00:17:16.192 20:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:16.192 20:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:16.192 20:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:16.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:16.192 20:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:16.192 20:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.452 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:16.452 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:16.452 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:17:16.452 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.452 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.452 null0 00:17:16.452 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.452 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:16.452 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Hpq 00:17:16.452 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.452 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.452 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.452 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.kTT ]] 00:17:16.452 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.kTT 00:17:16.452 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.452 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.452 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.452 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:16.452 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.aEs 00:17:16.452 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.452 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.452 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.452 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.5DG ]] 00:17:16.452 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.5DG 00:17:16.452 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.452 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.452 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.452 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:16.452 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.i9Z 00:17:16.452 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.452 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.452 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.452 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.IJN ]] 00:17:16.452 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.IJN 00:17:16.452 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.452 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.452 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.452 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:16.452 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.2fO 00:17:16.452 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.452 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.452 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.452 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:17:16.452 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # 
connect_authenticate sha512 ffdhe8192 3 00:17:16.452 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:16.452 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:16.452 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:16.452 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:16.452 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.452 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:17:16.452 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.452 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.452 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.452 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:16.452 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:16.452 20:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key3 00:17:18.361 nvme0n1 00:17:18.361 20:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:18.361 20:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:18.361 20:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.361 20:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.361 20:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.361 20:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.361 20:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.361 20:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.361 20:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:18.361 { 00:17:18.361 "cntlid": 1, 00:17:18.361 "qid": 0, 00:17:18.361 "state": "enabled", 00:17:18.361 "thread": "nvmf_tgt_poll_group_000", 00:17:18.361 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:18.361 "listen_address": { 00:17:18.361 "trtype": "TCP", 00:17:18.361 "adrfam": "IPv4", 00:17:18.361 "traddr": "10.0.0.2", 00:17:18.361 "trsvcid": "4420" 00:17:18.361 }, 00:17:18.361 "peer_address": { 00:17:18.361 "trtype": "TCP", 00:17:18.361 "adrfam": "IPv4", 00:17:18.361 "traddr": "10.0.0.1", 00:17:18.361 "trsvcid": "47966" 00:17:18.361 }, 00:17:18.361 "auth": { 00:17:18.361 "state": "completed", 00:17:18.361 "digest": "sha512", 00:17:18.361 "dhgroup": "ffdhe8192" 00:17:18.361 } 00:17:18.361 } 00:17:18.361 ]' 00:17:18.361 20:58:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:18.361 20:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:18.361 20:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:18.361 20:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:18.361 20:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:18.362 20:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.362 20:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.362 20:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.620 20:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDIyZjJlNzY4OWNhNjQwZjVmMzk2Y2FiZmIyMzI5ZTQwNjgxZTA4ZmE2OGQxN2ZiOTQ2OTQyMDNiYjlhZGE2MgIZE5s=: 00:17:18.620 20:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MDIyZjJlNzY4OWNhNjQwZjVmMzk2Y2FiZmIyMzI5ZTQwNjgxZTA4ZmE2OGQxN2ZiOTQ2OTQyMDNiYjlhZGE2MgIZE5s=: 00:17:19.556 20:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.814 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.814 20:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:19.814 20:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.814 20:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.814 20:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.814 20:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:17:19.814 20:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.814 20:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.814 20:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.814 20:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:17:19.814 20:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:17:20.072 20:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:20.072 20:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:20.072 20:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:20.072 20:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:20.072 20:58:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:20.072 20:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:20.072 20:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:20.072 20:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:20.072 20:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:20.072 20:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:20.339 request: 00:17:20.339 { 00:17:20.339 "name": "nvme0", 00:17:20.339 "trtype": "tcp", 00:17:20.339 "traddr": "10.0.0.2", 00:17:20.339 "adrfam": "ipv4", 00:17:20.339 "trsvcid": "4420", 00:17:20.339 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:20.339 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:20.339 "prchk_reftag": false, 00:17:20.339 "prchk_guard": false, 00:17:20.339 "hdgst": false, 00:17:20.339 "ddgst": false, 00:17:20.339 "dhchap_key": "key3", 00:17:20.339 "allow_unrecognized_csi": false, 00:17:20.339 "method": "bdev_nvme_attach_controller", 00:17:20.339 "req_id": 1 00:17:20.339 } 00:17:20.339 Got JSON-RPC error response 00:17:20.339 response: 00:17:20.339 { 00:17:20.339 "code": -5, 00:17:20.339 "message": "Input/output error" 00:17:20.339 } 00:17:20.339 
20:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:20.339 20:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:20.339 20:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:20.339 20:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:20.339 20:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:17:20.339 20:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:17:20.339 20:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:20.339 20:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:20.662 20:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:20.662 20:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:20.662 20:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:20.662 20:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:20.662 20:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:20.662 20:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:20.662 20:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:20.662 20:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:20.662 20:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:20.662 20:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:20.921 request: 00:17:20.921 { 00:17:20.921 "name": "nvme0", 00:17:20.921 "trtype": "tcp", 00:17:20.921 "traddr": "10.0.0.2", 00:17:20.921 "adrfam": "ipv4", 00:17:20.921 "trsvcid": "4420", 00:17:20.921 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:20.921 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:20.921 "prchk_reftag": false, 00:17:20.921 "prchk_guard": false, 00:17:20.921 "hdgst": false, 00:17:20.921 "ddgst": false, 00:17:20.921 "dhchap_key": "key3", 00:17:20.921 "allow_unrecognized_csi": false, 00:17:20.921 "method": "bdev_nvme_attach_controller", 00:17:20.921 "req_id": 1 00:17:20.921 } 00:17:20.921 Got JSON-RPC error response 00:17:20.921 response: 00:17:20.921 { 00:17:20.921 "code": -5, 00:17:20.921 "message": "Input/output error" 00:17:20.921 } 00:17:20.921 20:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:20.921 20:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:20.921 20:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:20.921 20:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:20.921 20:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:20.921 20:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:17:20.921 20:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:20.921 20:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:20.921 20:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:20.921 20:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:21.179 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:21.179 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.179 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.179 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.180 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:21.180 
20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.180 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.180 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.180 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:21.180 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:21.180 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:21.180 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:21.180 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:21.180 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:21.180 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:21.180 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:21.180 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:21.180 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:21.745 request: 00:17:21.745 { 00:17:21.745 "name": "nvme0", 00:17:21.745 "trtype": "tcp", 00:17:21.745 "traddr": "10.0.0.2", 00:17:21.745 "adrfam": "ipv4", 00:17:21.745 "trsvcid": "4420", 00:17:21.745 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:21.745 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:21.745 "prchk_reftag": false, 00:17:21.745 "prchk_guard": false, 00:17:21.745 "hdgst": false, 00:17:21.745 "ddgst": false, 00:17:21.745 "dhchap_key": "key0", 00:17:21.745 "dhchap_ctrlr_key": "key1", 00:17:21.745 "allow_unrecognized_csi": false, 00:17:21.745 "method": "bdev_nvme_attach_controller", 00:17:21.745 "req_id": 1 00:17:21.745 } 00:17:21.745 Got JSON-RPC error response 00:17:21.745 response: 00:17:21.745 { 00:17:21.745 "code": -5, 00:17:21.745 "message": "Input/output error" 00:17:21.745 } 00:17:22.003 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:22.003 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:22.004 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:22.004 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:22.004 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:17:22.004 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:22.004 20:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:22.262 nvme0n1 00:17:22.262 20:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:17:22.262 20:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:17:22.262 20:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.520 20:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.520 20:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.520 20:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.779 20:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:17:22.779 20:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.779 20:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.779 20:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.779 20:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:22.779 20:58:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:22.779 20:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:24.153 nvme0n1 00:17:24.153 20:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:17:24.153 20:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:17:24.153 20:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.412 20:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.412 20:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:24.412 20:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.412 20:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.670 20:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.670 20:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:17:24.670 20:58:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:17:24.670 20:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.928 20:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.928 20:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:YzIyYjE1YzkzMGNmNTlkNDZkMzdmOGRjNjA4YWM4NGI1ZTZiZjAxMDM1YjM0NmE1uNn0sA==: --dhchap-ctrl-secret DHHC-1:03:MDIyZjJlNzY4OWNhNjQwZjVmMzk2Y2FiZmIyMzI5ZTQwNjgxZTA4ZmE2OGQxN2ZiOTQ2OTQyMDNiYjlhZGE2MgIZE5s=: 00:17:24.928 20:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YzIyYjE1YzkzMGNmNTlkNDZkMzdmOGRjNjA4YWM4NGI1ZTZiZjAxMDM1YjM0NmE1uNn0sA==: --dhchap-ctrl-secret DHHC-1:03:MDIyZjJlNzY4OWNhNjQwZjVmMzk2Y2FiZmIyMzI5ZTQwNjgxZTA4ZmE2OGQxN2ZiOTQ2OTQyMDNiYjlhZGE2MgIZE5s=: 00:17:25.862 20:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:17:25.862 20:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:17:25.862 20:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:17:25.862 20:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:17:25.862 20:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:17:25.862 20:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 
00:17:25.862 20:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:17:25.862 20:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.862 20:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.120 20:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:17:26.120 20:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:26.120 20:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:17:26.120 20:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:26.120 20:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:26.120 20:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:26.120 20:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:26.120 20:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:26.120 20:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:26.120 20:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:27.053 request: 00:17:27.053 { 00:17:27.053 "name": "nvme0", 00:17:27.053 "trtype": "tcp", 00:17:27.053 "traddr": "10.0.0.2", 00:17:27.053 "adrfam": "ipv4", 00:17:27.053 "trsvcid": "4420", 00:17:27.053 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:27.053 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:27.053 "prchk_reftag": false, 00:17:27.053 "prchk_guard": false, 00:17:27.053 "hdgst": false, 00:17:27.053 "ddgst": false, 00:17:27.053 "dhchap_key": "key1", 00:17:27.053 "allow_unrecognized_csi": false, 00:17:27.053 "method": "bdev_nvme_attach_controller", 00:17:27.053 "req_id": 1 00:17:27.053 } 00:17:27.053 Got JSON-RPC error response 00:17:27.053 response: 00:17:27.053 { 00:17:27.053 "code": -5, 00:17:27.053 "message": "Input/output error" 00:17:27.053 } 00:17:27.053 20:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:27.053 20:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:27.053 20:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:27.053 20:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:27.053 20:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:27.053 20:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:27.054 20:58:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:28.442 nvme0n1 00:17:28.442 20:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:17:28.442 20:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:17:28.442 20:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.702 20:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.702 20:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.702 20:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.960 20:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:28.960 20:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.960 20:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.960 20:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.960 20:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:17:28.960 
20:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:28.960 20:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:29.218 nvme0n1 00:17:29.218 20:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:17:29.218 20:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:17:29.218 20:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.783 20:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.783 20:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.783 20:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.040 20:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:30.040 20:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.040 20:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:30.040 20:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.040 20:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:OWIxZDY5NzgyZDk4M2VlNzIzYzJiNmQzMjdjNDRkNWP7QUfO: '' 2s 00:17:30.040 20:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:30.040 20:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:30.040 20:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:OWIxZDY5NzgyZDk4M2VlNzIzYzJiNmQzMjdjNDRkNWP7QUfO: 00:17:30.040 20:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:17:30.040 20:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:30.040 20:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:30.040 20:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:OWIxZDY5NzgyZDk4M2VlNzIzYzJiNmQzMjdjNDRkNWP7QUfO: ]] 00:17:30.040 20:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:OWIxZDY5NzgyZDk4M2VlNzIzYzJiNmQzMjdjNDRkNWP7QUfO: 00:17:30.040 20:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:17:30.040 20:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:30.040 20:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:31.942 20:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:17:31.942 20:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:17:31.942 20:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:31.942 20:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:31.942 20:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:31.942 20:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:31.942 20:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:17:31.942 20:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key2 00:17:31.942 20:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.942 20:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.942 20:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.942 20:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:YzIyYjE1YzkzMGNmNTlkNDZkMzdmOGRjNjA4YWM4NGI1ZTZiZjAxMDM1YjM0NmE1uNn0sA==: 2s 00:17:31.942 20:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:31.942 20:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:31.942 20:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:17:31.942 20:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:YzIyYjE1YzkzMGNmNTlkNDZkMzdmOGRjNjA4YWM4NGI1ZTZiZjAxMDM1YjM0NmE1uNn0sA==: 00:17:31.942 20:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:31.942 20:58:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:31.942 20:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:17:31.942 20:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:YzIyYjE1YzkzMGNmNTlkNDZkMzdmOGRjNjA4YWM4NGI1ZTZiZjAxMDM1YjM0NmE1uNn0sA==: ]] 00:17:31.942 20:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:YzIyYjE1YzkzMGNmNTlkNDZkMzdmOGRjNjA4YWM4NGI1ZTZiZjAxMDM1YjM0NmE1uNn0sA==: 00:17:31.942 20:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:31.942 20:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:34.471 20:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:17:34.471 20:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:17:34.471 20:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:34.471 20:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:34.471 20:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:34.471 20:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:34.471 20:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:17:34.471 20:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.471 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.471 20:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:34.471 20:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.471 20:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.471 20:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.471 20:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:34.471 20:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:34.471 20:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:35.844 nvme0n1 00:17:35.844 20:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:35.844 20:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.844 20:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:17:35.844 20:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.844 20:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:35.844 20:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:36.409 20:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:17:36.410 20:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:17:36.410 20:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.667 20:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.667 20:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:36.667 20:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.667 20:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.667 20:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.667 20:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:17:36.667 20:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 
00:17:36.925 20:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:17:36.925 20:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:17:36.925 20:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.183 20:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.183 20:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:37.183 20:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.183 20:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.183 20:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.183 20:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:37.183 20:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:37.183 20:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:37.183 20:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:17:37.183 20:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:37.183 20:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@644 -- # type -t hostrpc 00:17:37.183 20:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:37.183 20:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:37.183 20:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:38.116 request: 00:17:38.116 { 00:17:38.116 "name": "nvme0", 00:17:38.116 "dhchap_key": "key1", 00:17:38.116 "dhchap_ctrlr_key": "key3", 00:17:38.116 "method": "bdev_nvme_set_keys", 00:17:38.116 "req_id": 1 00:17:38.116 } 00:17:38.116 Got JSON-RPC error response 00:17:38.116 response: 00:17:38.116 { 00:17:38.116 "code": -13, 00:17:38.116 "message": "Permission denied" 00:17:38.116 } 00:17:38.116 20:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:38.116 20:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:38.116 20:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:38.116 20:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:38.116 20:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:38.116 20:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:38.116 20:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.380 20:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 
0 )) 00:17:38.380 20:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:17:39.758 20:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:39.758 20:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:39.758 20:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.758 20:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:17:39.758 20:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:17:40.691 20:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:40.691 20:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:40.691 20:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.948 20:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:17:40.949 20:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:40.949 20:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.949 20:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.949 20:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.949 20:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # 
bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:40.949 20:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:40.949 20:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:42.848 nvme0n1 00:17:42.848 20:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:42.848 20:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.848 20:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.848 20:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.848 20:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:42.848 20:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:42.848 20:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 
--dhchap-key key2 --dhchap-ctrlr-key key0 00:17:42.848 20:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:17:42.848 20:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:42.848 20:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:17:42.848 20:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:42.848 20:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:42.848 20:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:43.414 request: 00:17:43.414 { 00:17:43.414 "name": "nvme0", 00:17:43.414 "dhchap_key": "key2", 00:17:43.414 "dhchap_ctrlr_key": "key0", 00:17:43.414 "method": "bdev_nvme_set_keys", 00:17:43.414 "req_id": 1 00:17:43.414 } 00:17:43.414 Got JSON-RPC error response 00:17:43.414 response: 00:17:43.414 { 00:17:43.414 "code": -13, 00:17:43.414 "message": "Permission denied" 00:17:43.414 } 00:17:43.414 20:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:43.414 20:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:43.414 20:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:43.414 20:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:43.414 20:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:43.414 20:58:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:43.414 20:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.978 20:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:17:43.978 20:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:17:44.911 20:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:44.911 20:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:44.911 20:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.168 20:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:17:45.168 20:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:17:46.100 20:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:46.100 20:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:46.100 20:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.396 20:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:17:46.396 20:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:17:46.396 20:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:17:46.396 20:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # 
killprocess 3957753 00:17:46.396 20:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3957753 ']' 00:17:46.396 20:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3957753 00:17:46.396 20:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:46.396 20:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:46.396 20:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3957753 00:17:46.396 20:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:46.396 20:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:46.396 20:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3957753' 00:17:46.396 killing process with pid 3957753 00:17:46.396 20:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3957753 00:17:46.396 20:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3957753 00:17:46.961 20:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:46.961 20:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:46.961 20:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:17:46.961 20:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:46.961 20:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:17:46.961 20:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:46.961 20:58:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:46.961 rmmod nvme_tcp 00:17:46.961 rmmod nvme_fabrics 00:17:46.961 rmmod nvme_keyring 00:17:46.961 20:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:46.961 20:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:17:46.961 20:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:17:46.961 20:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 3981297 ']' 00:17:46.961 20:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 3981297 00:17:46.961 20:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3981297 ']' 00:17:46.961 20:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3981297 00:17:46.961 20:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:46.961 20:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:46.961 20:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3981297 00:17:46.961 20:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:46.961 20:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:46.961 20:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3981297' 00:17:46.961 killing process with pid 3981297 00:17:46.961 20:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3981297 00:17:46.961 20:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@978 -- # wait 3981297 00:17:47.219 20:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:47.219 20:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:47.219 20:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:47.219 20:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:17:47.219 20:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:17:47.219 20:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:47.219 20:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:17:47.219 20:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:47.219 20:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:47.219 20:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:47.219 20:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:47.219 20:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:49.160 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:49.160 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Hpq /tmp/spdk.key-sha256.aEs /tmp/spdk.key-sha384.i9Z /tmp/spdk.key-sha512.2fO /tmp/spdk.key-sha512.kTT /tmp/spdk.key-sha384.5DG /tmp/spdk.key-sha256.IJN '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 
00:17:49.160 00:17:49.160 real 3m44.158s 00:17:49.160 user 8m44.564s 00:17:49.160 sys 0m27.785s 00:17:49.160 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:49.160 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.160 ************************************ 00:17:49.160 END TEST nvmf_auth_target 00:17:49.160 ************************************ 00:17:49.160 20:58:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:17:49.160 20:58:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:49.160 20:58:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:49.160 20:58:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:49.160 20:58:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:49.443 ************************************ 00:17:49.443 START TEST nvmf_bdevio_no_huge 00:17:49.443 ************************************ 00:17:49.443 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:49.443 * Looking for test storage... 
00:17:49.443 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:49.443 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:49.443 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:17:49.443 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:49.443 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:49.443 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:49.443 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:49.443 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:49.443 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:17:49.443 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:17:49.443 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:17:49.443 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:17:49.443 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:17:49.443 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:17:49.443 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:17:49.443 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:49.443 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:17:49.443 20:58:40 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:17:49.443 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:49.443 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:49.443 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:17:49.443 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:17:49.443 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:49.443 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:17:49.444 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:17:49.444 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:17:49.444 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:17:49.444 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:49.444 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:17:49.444 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:17:49.444 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:49.444 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:49.444 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:17:49.444 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:49.444 20:58:40 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:49.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.444 --rc genhtml_branch_coverage=1 00:17:49.444 --rc genhtml_function_coverage=1 00:17:49.444 --rc genhtml_legend=1 00:17:49.444 --rc geninfo_all_blocks=1 00:17:49.444 --rc geninfo_unexecuted_blocks=1 00:17:49.444 00:17:49.444 ' 00:17:49.444 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:49.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.444 --rc genhtml_branch_coverage=1 00:17:49.444 --rc genhtml_function_coverage=1 00:17:49.444 --rc genhtml_legend=1 00:17:49.444 --rc geninfo_all_blocks=1 00:17:49.444 --rc geninfo_unexecuted_blocks=1 00:17:49.444 00:17:49.444 ' 00:17:49.444 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:49.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.444 --rc genhtml_branch_coverage=1 00:17:49.444 --rc genhtml_function_coverage=1 00:17:49.444 --rc genhtml_legend=1 00:17:49.444 --rc geninfo_all_blocks=1 00:17:49.444 --rc geninfo_unexecuted_blocks=1 00:17:49.444 00:17:49.444 ' 00:17:49.444 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:49.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.444 --rc genhtml_branch_coverage=1 00:17:49.444 --rc genhtml_function_coverage=1 00:17:49.444 --rc genhtml_legend=1 00:17:49.444 --rc geninfo_all_blocks=1 00:17:49.444 --rc geninfo_unexecuted_blocks=1 00:17:49.444 00:17:49.444 ' 00:17:49.444 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:49.444 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:17:49.444 
20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:49.444 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:49.444 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:49.444 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:49.444 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:49.444 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:49.444 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:49.444 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:49.444 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:49.444 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:49.444 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:49.444 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:49.444 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:49.444 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:49.444 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:49.444 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:49.444 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:49.444 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:17:49.444 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:49.444 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:49.444 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:49.444 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.444 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.444 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.444 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:17:49.444 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.444 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:17:49.444 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:49.444 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:49.444 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:49.444 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:49.444 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:49.444 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:49.444 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:49.444 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:49.444 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:49.444 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:49.444 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:17:49.444 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:49.444 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:17:49.444 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:49.444 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:49.444 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:49.444 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:49.444 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:49.444 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:49.444 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:49.444 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:49.444 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:49.444 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:49.444 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:17:49.444 20:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:51.979 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:51.979 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:17:51.979 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:17:51.979 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:51.979 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:51.979 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:51.979 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:51.979 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:17:51.979 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:51.979 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:17:51.979 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:17:51.979 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:17:51.979 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:17:51.979 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:17:51.979 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:17:51.979 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:51.979 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 
0x159b)' 00:17:51.980 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:51.980 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:51.980 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:51.980 
20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:51.980 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:17:51.980 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:51.980 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:17:51.980 00:17:51.980 --- 10.0.0.2 ping statistics --- 00:17:51.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.980 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:51.980 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:51.980 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:17:51.980 00:17:51.980 --- 10.0.0.1 ping statistics --- 00:17:51.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.980 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:51.980 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=3986991 00:17:51.981 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:51.981 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 3986991 00:17:51.981 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 3986991 ']' 00:17:51.981 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:51.981 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:51.981 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:51.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:51.981 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:51.981 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:51.981 [2024-11-26 20:58:42.513392] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:17:51.981 [2024-11-26 20:58:42.513473] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:51.981 [2024-11-26 20:58:42.594513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:51.981 [2024-11-26 20:58:42.650021] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:51.981 [2024-11-26 20:58:42.650088] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:51.981 [2024-11-26 20:58:42.650117] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:51.981 [2024-11-26 20:58:42.650128] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:51.981 [2024-11-26 20:58:42.650137] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:51.981 [2024-11-26 20:58:42.651142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:17:51.981 [2024-11-26 20:58:42.651221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:17:51.981 [2024-11-26 20:58:42.651280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:17:51.981 [2024-11-26 20:58:42.651283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:51.981 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:51.981 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:17:51.981 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:51.981 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:51.981 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:51.981 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:51.981 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:51.981 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.981 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:51.981 [2024-11-26 20:58:42.811767] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:51.981 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.981 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:51.981 20:58:42 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.981 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:51.981 Malloc0 00:17:51.981 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.981 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:51.981 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.981 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:51.981 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.981 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:51.981 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.981 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:51.981 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.981 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:51.981 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.981 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:51.981 [2024-11-26 20:58:42.850268] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:51.981 20:58:42 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.981 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:51.981 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:51.981 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:17:51.981 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:17:51.981 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:51.981 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:51.981 { 00:17:51.981 "params": { 00:17:51.981 "name": "Nvme$subsystem", 00:17:51.981 "trtype": "$TEST_TRANSPORT", 00:17:51.981 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:51.981 "adrfam": "ipv4", 00:17:51.981 "trsvcid": "$NVMF_PORT", 00:17:51.981 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:51.981 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:51.981 "hdgst": ${hdgst:-false}, 00:17:51.981 "ddgst": ${ddgst:-false} 00:17:51.981 }, 00:17:51.981 "method": "bdev_nvme_attach_controller" 00:17:51.981 } 00:17:51.981 EOF 00:17:51.981 )") 00:17:51.981 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:17:51.981 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:17:51.981 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:17:51.981 20:58:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:17:51.981 "params": { 00:17:51.981 "name": "Nvme1", 00:17:51.981 "trtype": "tcp", 00:17:51.981 "traddr": "10.0.0.2", 00:17:51.981 "adrfam": "ipv4", 00:17:51.981 "trsvcid": "4420", 00:17:51.981 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:51.981 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:51.981 "hdgst": false, 00:17:51.981 "ddgst": false 00:17:51.981 }, 00:17:51.981 "method": "bdev_nvme_attach_controller" 00:17:51.981 }' 00:17:51.981 [2024-11-26 20:58:42.902270] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:17:51.981 [2024-11-26 20:58:42.902336] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3987017 ] 00:17:52.240 [2024-11-26 20:58:42.977108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:52.240 [2024-11-26 20:58:43.040711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:52.240 [2024-11-26 20:58:43.040738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:52.240 [2024-11-26 20:58:43.040742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:52.498 I/O targets: 00:17:52.498 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:52.498 00:17:52.498 00:17:52.498 CUnit - A unit testing framework for C - Version 2.1-3 00:17:52.498 http://cunit.sourceforge.net/ 00:17:52.498 00:17:52.498 00:17:52.498 Suite: bdevio tests on: Nvme1n1 00:17:52.498 Test: blockdev write read block ...passed 00:17:52.498 Test: blockdev write zeroes read block ...passed 00:17:52.498 Test: blockdev write zeroes read no split ...passed 00:17:52.498 Test: blockdev write zeroes 
read split ...passed 00:17:52.498 Test: blockdev write zeroes read split partial ...passed 00:17:52.498 Test: blockdev reset ...[2024-11-26 20:58:43.430395] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:17:52.498 [2024-11-26 20:58:43.430518] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153a6a0 (9): Bad file descriptor 00:17:52.756 [2024-11-26 20:58:43.484345] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:17:52.756 passed 00:17:52.756 Test: blockdev write read 8 blocks ...passed 00:17:52.756 Test: blockdev write read size > 128k ...passed 00:17:52.756 Test: blockdev write read invalid size ...passed 00:17:52.756 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:52.756 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:52.756 Test: blockdev write read max offset ...passed 00:17:52.756 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:52.756 Test: blockdev writev readv 8 blocks ...passed 00:17:52.756 Test: blockdev writev readv 30 x 1block ...passed 00:17:53.014 Test: blockdev writev readv block ...passed 00:17:53.014 Test: blockdev writev readv size > 128k ...passed 00:17:53.014 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:53.014 Test: blockdev comparev and writev ...[2024-11-26 20:58:43.703113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:53.014 [2024-11-26 20:58:43.703152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.014 [2024-11-26 20:58:43.703177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:53.014 [2024-11-26 
20:58:43.703195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:53.014 [2024-11-26 20:58:43.703594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:53.014 [2024-11-26 20:58:43.703619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:53.014 [2024-11-26 20:58:43.703641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:53.014 [2024-11-26 20:58:43.703657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:53.014 [2024-11-26 20:58:43.704057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:53.014 [2024-11-26 20:58:43.704081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:53.014 [2024-11-26 20:58:43.704103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:53.014 [2024-11-26 20:58:43.704120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:53.014 [2024-11-26 20:58:43.704512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:53.014 [2024-11-26 20:58:43.704536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:53.014 [2024-11-26 20:58:43.704557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:17:53.014 [2024-11-26 20:58:43.704573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:53.014 passed 00:17:53.014 Test: blockdev nvme passthru rw ...passed 00:17:53.014 Test: blockdev nvme passthru vendor specific ...[2024-11-26 20:58:43.789047] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:53.014 [2024-11-26 20:58:43.789075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:53.014 [2024-11-26 20:58:43.789232] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:53.014 [2024-11-26 20:58:43.789255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:53.014 [2024-11-26 20:58:43.789409] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:53.014 [2024-11-26 20:58:43.789432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:53.014 [2024-11-26 20:58:43.789588] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:53.014 [2024-11-26 20:58:43.789611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:53.014 passed 00:17:53.014 Test: blockdev nvme admin passthru ...passed 00:17:53.014 Test: blockdev copy ...passed 00:17:53.014 00:17:53.014 Run Summary: Type Total Ran Passed Failed Inactive 00:17:53.014 suites 1 1 n/a 0 0 00:17:53.014 tests 23 23 23 0 0 00:17:53.014 asserts 152 152 152 0 n/a 00:17:53.014 00:17:53.014 Elapsed time = 1.156 seconds 
00:17:53.273 20:58:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:53.273 20:58:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.273 20:58:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:53.273 20:58:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.273 20:58:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:53.273 20:58:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:17:53.273 20:58:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:53.273 20:58:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:17:53.273 20:58:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:53.273 20:58:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:17:53.273 20:58:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:53.273 20:58:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:53.273 rmmod nvme_tcp 00:17:53.531 rmmod nvme_fabrics 00:17:53.531 rmmod nvme_keyring 00:17:53.531 20:58:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:53.531 20:58:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:17:53.531 20:58:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:17:53.531 20:58:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 3986991 ']' 00:17:53.531 20:58:44 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 3986991 00:17:53.531 20:58:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 3986991 ']' 00:17:53.531 20:58:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 3986991 00:17:53.531 20:58:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:17:53.531 20:58:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:53.531 20:58:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3986991 00:17:53.531 20:58:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:17:53.531 20:58:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:17:53.531 20:58:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3986991' 00:17:53.531 killing process with pid 3986991 00:17:53.531 20:58:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 3986991 00:17:53.531 20:58:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 3986991 00:17:53.790 20:58:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:53.790 20:58:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:53.790 20:58:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:53.790 20:58:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:17:53.790 20:58:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:17:53.790 20:58:44 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:53.790 20:58:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:17:54.049 20:58:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:54.049 20:58:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:54.049 20:58:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:54.049 20:58:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:54.049 20:58:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:55.954 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:55.954 00:17:55.954 real 0m6.687s 00:17:55.954 user 0m10.910s 00:17:55.954 sys 0m2.616s 00:17:55.954 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:55.954 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:55.954 ************************************ 00:17:55.954 END TEST nvmf_bdevio_no_huge 00:17:55.954 ************************************ 00:17:55.955 20:58:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:55.955 20:58:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:55.955 20:58:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:55.955 20:58:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:55.955 
************************************ 00:17:55.955 START TEST nvmf_tls 00:17:55.955 ************************************ 00:17:55.955 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:55.955 * Looking for test storage... 00:17:55.955 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:55.955 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:55.955 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:17:55.955 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:56.214 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:56.214 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:56.214 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:56.214 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:56.214 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:17:56.214 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:17:56.214 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:17:56.214 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:17:56.214 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:17:56.214 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:17:56.214 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:17:56.214 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:17:56.214 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:17:56.214 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:17:56.214 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:56.214 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:56.214 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:17:56.214 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:17:56.214 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:56.214 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:17:56.214 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:17:56.214 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:17:56.214 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:17:56.214 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:56.214 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:17:56.214 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:17:56.214 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:56.214 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:56.214 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:17:56.214 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:56.214 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:56.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:56.214 --rc genhtml_branch_coverage=1 00:17:56.214 --rc genhtml_function_coverage=1 00:17:56.214 --rc genhtml_legend=1 00:17:56.214 --rc geninfo_all_blocks=1 00:17:56.214 --rc geninfo_unexecuted_blocks=1 00:17:56.214 00:17:56.214 ' 00:17:56.214 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:56.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:56.214 --rc genhtml_branch_coverage=1 00:17:56.214 --rc genhtml_function_coverage=1 00:17:56.214 --rc genhtml_legend=1 00:17:56.214 --rc geninfo_all_blocks=1 00:17:56.214 --rc geninfo_unexecuted_blocks=1 00:17:56.214 00:17:56.214 ' 00:17:56.214 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:56.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:56.214 --rc genhtml_branch_coverage=1 00:17:56.214 --rc genhtml_function_coverage=1 00:17:56.214 --rc genhtml_legend=1 00:17:56.214 --rc geninfo_all_blocks=1 00:17:56.214 --rc geninfo_unexecuted_blocks=1 00:17:56.214 00:17:56.214 ' 00:17:56.214 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:56.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:56.214 --rc genhtml_branch_coverage=1 00:17:56.214 --rc genhtml_function_coverage=1 00:17:56.214 --rc genhtml_legend=1 00:17:56.214 --rc geninfo_all_blocks=1 00:17:56.214 --rc geninfo_unexecuted_blocks=1 00:17:56.214 00:17:56.214 ' 00:17:56.215 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:56.215 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:17:56.215 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:56.215 
20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:56.215 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:56.215 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:56.215 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:56.215 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:56.215 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:56.215 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:56.215 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:56.215 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:56.215 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:56.215 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:56.215 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:56.215 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:56.215 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:56.215 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:56.215 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:56.215 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:17:56.215 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:56.215 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:56.215 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:56.215 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:56.215 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:56.215 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:56.215 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:17:56.215 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:56.215 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:17:56.215 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:56.215 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:56.215 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:56.215 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:56.215 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:56.215 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:56.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:56.215 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:56.215 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:56.215 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:56.215 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:56.215 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:17:56.215 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:56.215 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:56.215 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:56.215 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:56.215 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:56.215 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:56.215 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:56.215 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:56.215 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:56.215 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:56.215 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:17:56.215 20:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:58.117 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:58.117 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:17:58.117 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:58.117 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:58.117 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:58.117 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:58.117 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:58.117 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:17:58.117 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:58.117 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:17:58.117 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:17:58.117 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:17:58.117 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:17:58.117 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:17:58.117 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:17:58.117 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:58.117 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:58.117 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:58.117 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:58.117 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:58.117 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:58.117 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:58.117 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:58.117 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:58.117 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:58.117 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:58.117 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:58.117 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:58.117 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:58.117 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:58.117 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:58.117 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:58.117 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:58.117 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:58.117 20:58:48 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:58.117 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:58.117 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:58.117 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:58.117 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:58.117 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:58.117 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:58.117 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:58.117 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:58.117 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:58.117 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:58.117 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:58.117 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:58.117 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:58.117 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:58.117 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:58.117 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:58.117 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:58.117 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:58.117 20:58:48 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:58.117 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:58.117 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:58.117 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:58.117 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:58.117 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:58.117 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:58.117 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:58.117 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:58.117 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:58.117 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:58.117 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:58.117 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:58.117 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:58.117 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:58.117 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:58.117 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:58.117 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:58.117 20:58:48 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:58.117 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:58.117 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:17:58.117 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:58.118 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:58.118 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:58.118 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:58.118 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:58.118 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:58.118 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:58.118 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:58.118 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:58.118 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:58.118 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:58.118 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:58.118 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:58.118 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:58.118 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:58.118 
20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:58.118 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:58.118 20:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:58.118 20:58:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:58.118 20:58:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:58.118 20:58:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:58.118 20:58:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:58.387 20:58:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:58.387 20:58:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:58.387 20:58:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:58.387 20:58:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:58.387 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:58.387 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.241 ms 00:17:58.387 00:17:58.387 --- 10.0.0.2 ping statistics --- 00:17:58.387 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:58.387 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:17:58.387 20:58:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:58.387 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:58.387 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:17:58.387 00:17:58.387 --- 10.0.0.1 ping statistics --- 00:17:58.387 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:58.387 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:17:58.387 20:58:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:58.387 20:58:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:17:58.387 20:58:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:58.387 20:58:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:58.387 20:58:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:58.387 20:58:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:58.387 20:58:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:58.387 20:58:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:58.387 20:58:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:58.387 20:58:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:58.387 20:58:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:58.387 20:58:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:58.387 20:58:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:58.387 20:58:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3989133 00:17:58.387 20:58:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 --wait-for-rpc 00:17:58.387 20:58:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3989133 00:17:58.388 20:58:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3989133 ']' 00:17:58.388 20:58:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:58.388 20:58:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:58.388 20:58:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:58.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:58.388 20:58:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:58.388 20:58:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:58.388 [2024-11-26 20:58:49.175233] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:17:58.388 [2024-11-26 20:58:49.175308] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:58.388 [2024-11-26 20:58:49.261093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.649 [2024-11-26 20:58:49.323033] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:58.649 [2024-11-26 20:58:49.323085] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:58.649 [2024-11-26 20:58:49.323101] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:58.649 [2024-11-26 20:58:49.323114] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:58.649 [2024-11-26 20:58:49.323125] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:58.649 [2024-11-26 20:58:49.323770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:58.649 20:58:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:58.649 20:58:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:58.649 20:58:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:58.649 20:58:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:58.649 20:58:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:58.649 20:58:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:58.649 20:58:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:17:58.649 20:58:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:58.907 true 00:17:58.907 20:58:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:58.907 20:58:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:17:59.214 20:58:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:17:59.214 20:58:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:17:59.214 
20:58:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:59.472 20:58:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:59.473 20:58:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:17:59.731 20:58:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:17:59.731 20:58:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:17:59.731 20:58:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:17:59.989 20:58:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:59.989 20:58:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:18:00.247 20:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:18:00.247 20:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:18:00.247 20:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:00.248 20:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:18:00.506 20:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:18:00.506 20:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:18:00.506 20:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
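The literal-looking comparisons in the trace above, e.g. `[[ 7 != \7 ]]` and `[[ 13 != \1\3 ]]`, are bash xtrace renderings: when the right-hand side of `!=` inside `[[ ]]` is quoted in the source script, `set -x` prints it with each character backslash-escaped to show it is matched literally rather than as a glob. A minimal standalone illustration (variable names here are illustrative, not taken from tls.sh):

```shell
set -x                       # mirror the autotest scripts' xtrace mode
version=13
# A quoted right-hand side is compared literally; xtrace renders it as \1\3.
if [[ $version != "13" ]]; then
    echo "tls_version not updated"
else
    echo "tls_version ok"
fi
set +x
```

The xtrace line goes to stderr as `[[ 13 != \1\3 ]]`, while stdout carries `tls_version ok`, which is why the pass/fail checks in the log appear as escaped patterns rather than quoted strings.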
00:18:01.072 20:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:01.072 20:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:18:01.072 20:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:18:01.072 20:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:18:01.072 20:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:01.330 20:58:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:01.330 20:58:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:18:01.896 20:58:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:18:01.896 20:58:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:18:01.896 20:58:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:18:01.896 20:58:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:18:01.896 20:58:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:01.896 20:58:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:01.896 20:58:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:18:01.896 20:58:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:18:01.896 20:58:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:01.896 20:58:52 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:01.896 20:58:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:18:01.896 20:58:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:18:01.896 20:58:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:01.896 20:58:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:01.896 20:58:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:18:01.896 20:58:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:18:01.896 20:58:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:01.896 20:58:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:01.896 20:58:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:18:01.896 20:58:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.cGNyNYf3DZ 00:18:01.896 20:58:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:18:01.896 20:58:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.iQNONcyT4r 00:18:01.896 20:58:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:01.896 20:58:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:01.896 20:58:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.cGNyNYf3DZ 00:18:01.896 20:58:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.iQNONcyT4r 00:18:01.896 20:58:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:02.154 20:58:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:18:02.413 20:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.cGNyNYf3DZ 00:18:02.413 20:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.cGNyNYf3DZ 00:18:02.413 20:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:02.671 [2024-11-26 20:58:53.525851] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:02.671 20:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:02.929 20:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:03.187 [2024-11-26 20:58:54.071340] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:03.187 [2024-11-26 20:58:54.071655] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:03.187 20:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:03.446 malloc0 00:18:03.446 20:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:04.012 20:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.cGNyNYf3DZ 00:18:04.271 20:58:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:04.529 20:58:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.cGNyNYf3DZ 00:18:14.499 Initializing NVMe Controllers 00:18:14.499 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:14.499 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:14.499 Initialization complete. Launching workers. 
00:18:14.499 ======================================================== 00:18:14.499 Latency(us) 00:18:14.499 Device Information : IOPS MiB/s Average min max 00:18:14.499 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7736.29 30.22 8275.44 1369.53 12673.27 00:18:14.499 ======================================================== 00:18:14.499 Total : 7736.29 30.22 8275.44 1369.53 12673.27 00:18:14.499 00:18:14.499 20:59:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.cGNyNYf3DZ 00:18:14.499 20:59:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:14.499 20:59:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:14.499 20:59:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:14.499 20:59:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.cGNyNYf3DZ 00:18:14.499 20:59:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:14.499 20:59:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3991119 00:18:14.499 20:59:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:14.499 20:59:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:14.499 20:59:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3991119 /var/tmp/bdevperf.sock 00:18:14.499 20:59:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3991119 ']' 00:18:14.499 20:59:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
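The NVMeTLSkey-1 PSKs used by the perf and bdevperf runs were produced earlier in the log by `format_interchange_psk`. The sketch below shows one way to build a string of that shape. It is an assumption-laden illustration, not SPDK's `nvmf/common.sh` `format_key`: the helper name is invented, and the trailing four bytes are assumed to be a little-endian zlib-style CRC-32 of the key, which may differ from the checksum SPDK actually appends.

```shell
# Illustrative only: build an NVMeTLSkey-1-shaped PSK string.
# ASSUMPTIONS: zlib CRC-32, little-endian byte order, hash id fixed at "01".
format_interchange_psk_sketch() {
    local key=$1
    python3 - "$key" <<'PYEOF'
import base64, sys, zlib

key = sys.argv[1].encode()                   # configured key, as ASCII bytes
crc = zlib.crc32(key).to_bytes(4, "little")  # assumed checksum and byte order
print(f"NVMeTLSkey-1:01:{base64.b64encode(key + crc).decode()}:")
PYEOF
}

format_interchange_psk_sketch 00112233445566778899aabbccddeeff
```

For the 32-character key above this always yields a 65-character string (16-character prefix, 48 base64 characters covering 36 bytes, trailing colon), matching the length of the keys written to /tmp/tmp.cGNyNYf3DZ and /tmp/tmp.iQNONcyT4r in the log; whether the final CRC characters agree with SPDK's output depends on the checksum details flagged above.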
00:18:14.499 20:59:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:14.499 20:59:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:14.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:14.499 20:59:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:14.499 20:59:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:14.758 [2024-11-26 20:59:05.447922] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:18:14.758 [2024-11-26 20:59:05.447998] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3991119 ] 00:18:14.758 [2024-11-26 20:59:05.514953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:14.758 [2024-11-26 20:59:05.571728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:14.758 20:59:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:14.758 20:59:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:14.758 20:59:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.cGNyNYf3DZ 00:18:15.324 20:59:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:18:15.582 [2024-11-26 20:59:06.263106] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:15.582 TLSTESTn1 00:18:15.582 20:59:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:15.582 Running I/O for 10 seconds... 00:18:17.887 3306.00 IOPS, 12.91 MiB/s [2024-11-26T19:59:09.758Z] 3480.00 IOPS, 13.59 MiB/s [2024-11-26T19:59:10.692Z] 3516.33 IOPS, 13.74 MiB/s [2024-11-26T19:59:11.624Z] 3551.00 IOPS, 13.87 MiB/s [2024-11-26T19:59:12.557Z] 3560.40 IOPS, 13.91 MiB/s [2024-11-26T19:59:13.573Z] 3577.17 IOPS, 13.97 MiB/s [2024-11-26T19:59:14.506Z] 3567.14 IOPS, 13.93 MiB/s [2024-11-26T19:59:15.878Z] 3570.50 IOPS, 13.95 MiB/s [2024-11-26T19:59:16.810Z] 3577.33 IOPS, 13.97 MiB/s [2024-11-26T19:59:16.810Z] 3587.70 IOPS, 14.01 MiB/s 00:18:25.872 Latency(us) 00:18:25.872 [2024-11-26T19:59:16.810Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:25.872 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:25.872 Verification LBA range: start 0x0 length 0x2000 00:18:25.872 TLSTESTn1 : 10.03 3591.31 14.03 0.00 0.00 35576.09 8980.86 36700.16 00:18:25.872 [2024-11-26T19:59:16.810Z] =================================================================================================================== 00:18:25.872 [2024-11-26T19:59:16.810Z] Total : 3591.31 14.03 0.00 0.00 35576.09 8980.86 36700.16 00:18:25.872 { 00:18:25.872 "results": [ 00:18:25.872 { 00:18:25.872 "job": "TLSTESTn1", 00:18:25.872 "core_mask": "0x4", 00:18:25.872 "workload": "verify", 00:18:25.872 "status": "finished", 00:18:25.872 "verify_range": { 00:18:25.872 "start": 0, 00:18:25.872 "length": 8192 00:18:25.872 }, 00:18:25.872 "queue_depth": 128, 00:18:25.872 "io_size": 4096, 00:18:25.872 "runtime": 10.025316, 00:18:25.872 "iops": 
3591.3082440493645, 00:18:25.872 "mibps": 14.02854782831783, 00:18:25.872 "io_failed": 0, 00:18:25.872 "io_timeout": 0, 00:18:25.872 "avg_latency_us": 35576.08830887103, 00:18:25.872 "min_latency_us": 8980.85925925926, 00:18:25.872 "max_latency_us": 36700.16 00:18:25.872 } 00:18:25.872 ], 00:18:25.872 "core_count": 1 00:18:25.872 } 00:18:25.872 20:59:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:25.872 20:59:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3991119 00:18:25.872 20:59:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3991119 ']' 00:18:25.872 20:59:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3991119 00:18:25.872 20:59:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:25.872 20:59:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:25.872 20:59:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3991119 00:18:25.872 20:59:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:25.872 20:59:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:25.872 20:59:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3991119' 00:18:25.872 killing process with pid 3991119 00:18:25.872 20:59:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3991119 00:18:25.872 Received shutdown signal, test time was about 10.000000 seconds 00:18:25.872 00:18:25.872 Latency(us) 00:18:25.872 [2024-11-26T19:59:16.810Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:25.872 [2024-11-26T19:59:16.810Z] 
=================================================================================================================== 00:18:25.872 [2024-11-26T19:59:16.810Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:25.872 20:59:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3991119 00:18:25.872 20:59:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.iQNONcyT4r 00:18:25.872 20:59:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:25.872 20:59:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.iQNONcyT4r 00:18:25.872 20:59:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:25.872 20:59:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:25.872 20:59:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:25.872 20:59:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:25.872 20:59:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.iQNONcyT4r 00:18:25.872 20:59:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:25.872 20:59:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:25.872 20:59:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:25.872 20:59:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.iQNONcyT4r 00:18:25.872 20:59:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:25.872 20:59:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3992438 00:18:25.872 20:59:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:25.873 20:59:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:25.873 20:59:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3992438 /var/tmp/bdevperf.sock 00:18:25.873 20:59:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3992438 ']' 00:18:25.873 20:59:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:25.873 20:59:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:25.873 20:59:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:25.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:25.873 20:59:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:25.873 20:59:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:26.131 [2024-11-26 20:59:16.849647] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:18:26.131 [2024-11-26 20:59:16.849746] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3992438 ] 00:18:26.131 [2024-11-26 20:59:16.914164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.131 [2024-11-26 20:59:16.969414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:26.389 20:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:26.389 20:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:26.389 20:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.iQNONcyT4r 00:18:26.647 20:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:26.906 [2024-11-26 20:59:17.595282] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:26.906 [2024-11-26 20:59:17.601973] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:26.906 [2024-11-26 20:59:17.602567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dd72f0 (107): Transport endpoint is not connected 00:18:26.906 [2024-11-26 20:59:17.603556] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dd72f0 (9): Bad file descriptor 00:18:26.906 
[2024-11-26 20:59:17.604556] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:26.906 [2024-11-26 20:59:17.604575] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:26.906 [2024-11-26 20:59:17.604603] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:26.906 [2024-11-26 20:59:17.604617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:18:26.906 request: 00:18:26.906 { 00:18:26.906 "name": "TLSTEST", 00:18:26.906 "trtype": "tcp", 00:18:26.906 "traddr": "10.0.0.2", 00:18:26.906 "adrfam": "ipv4", 00:18:26.906 "trsvcid": "4420", 00:18:26.906 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:26.906 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:26.906 "prchk_reftag": false, 00:18:26.906 "prchk_guard": false, 00:18:26.906 "hdgst": false, 00:18:26.906 "ddgst": false, 00:18:26.906 "psk": "key0", 00:18:26.906 "allow_unrecognized_csi": false, 00:18:26.906 "method": "bdev_nvme_attach_controller", 00:18:26.906 "req_id": 1 00:18:26.906 } 00:18:26.906 Got JSON-RPC error response 00:18:26.906 response: 00:18:26.906 { 00:18:26.906 "code": -5, 00:18:26.906 "message": "Input/output error" 00:18:26.906 } 00:18:26.906 20:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3992438 00:18:26.906 20:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3992438 ']' 00:18:26.906 20:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3992438 00:18:26.906 20:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:26.906 20:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:26.906 20:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3992438 00:18:26.906 20:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:26.906 20:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:26.906 20:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3992438' 00:18:26.906 killing process with pid 3992438 00:18:26.906 20:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3992438 00:18:26.906 Received shutdown signal, test time was about 10.000000 seconds 00:18:26.906 00:18:26.906 Latency(us) 00:18:26.906 [2024-11-26T19:59:17.844Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:26.906 [2024-11-26T19:59:17.844Z] =================================================================================================================== 00:18:26.906 [2024-11-26T19:59:17.844Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:26.906 20:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3992438 00:18:27.164 20:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:27.164 20:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:27.164 20:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:27.164 20:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:27.164 20:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:27.164 20:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.cGNyNYf3DZ 00:18:27.164 20:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
00:18:27.164 20:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.cGNyNYf3DZ 00:18:27.164 20:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:27.164 20:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:27.164 20:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:27.164 20:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:27.164 20:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.cGNyNYf3DZ 00:18:27.164 20:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:27.164 20:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:27.164 20:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:18:27.164 20:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.cGNyNYf3DZ 00:18:27.165 20:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:27.165 20:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3992503 00:18:27.165 20:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:27.165 20:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:27.165 20:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3992503 
/var/tmp/bdevperf.sock 00:18:27.165 20:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3992503 ']' 00:18:27.165 20:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:27.165 20:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:27.165 20:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:27.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:27.165 20:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:27.165 20:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:27.165 [2024-11-26 20:59:17.899628] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:18:27.165 [2024-11-26 20:59:17.899739] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3992503 ] 00:18:27.165 [2024-11-26 20:59:17.970459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.165 [2024-11-26 20:59:18.026560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:27.423 20:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:27.423 20:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:27.423 20:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.cGNyNYf3DZ 00:18:27.681 20:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:18:27.939 [2024-11-26 20:59:18.667113] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:27.940 [2024-11-26 20:59:18.674568] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:27.940 [2024-11-26 20:59:18.674600] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:27.940 [2024-11-26 20:59:18.674653] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:18:27.940 [2024-11-26 20:59:18.675251] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x154f2f0 (107): Transport endpoint is not connected 00:18:27.940 [2024-11-26 20:59:18.676239] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x154f2f0 (9): Bad file descriptor 00:18:27.940 [2024-11-26 20:59:18.677239] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:27.940 [2024-11-26 20:59:18.677259] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:27.940 [2024-11-26 20:59:18.677287] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:27.940 [2024-11-26 20:59:18.677302] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:18:27.940 request: 00:18:27.940 { 00:18:27.940 "name": "TLSTEST", 00:18:27.940 "trtype": "tcp", 00:18:27.940 "traddr": "10.0.0.2", 00:18:27.940 "adrfam": "ipv4", 00:18:27.940 "trsvcid": "4420", 00:18:27.940 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:27.940 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:27.940 "prchk_reftag": false, 00:18:27.940 "prchk_guard": false, 00:18:27.940 "hdgst": false, 00:18:27.940 "ddgst": false, 00:18:27.940 "psk": "key0", 00:18:27.940 "allow_unrecognized_csi": false, 00:18:27.940 "method": "bdev_nvme_attach_controller", 00:18:27.940 "req_id": 1 00:18:27.940 } 00:18:27.940 Got JSON-RPC error response 00:18:27.940 response: 00:18:27.940 { 00:18:27.940 "code": -5, 00:18:27.940 "message": "Input/output error" 00:18:27.940 } 00:18:27.940 20:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3992503 00:18:27.940 20:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3992503 ']' 00:18:27.940 20:59:18 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3992503 00:18:27.940 20:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:27.940 20:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:27.940 20:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3992503 00:18:27.940 20:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:27.940 20:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:27.940 20:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3992503' 00:18:27.940 killing process with pid 3992503 00:18:27.940 20:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3992503 00:18:27.940 Received shutdown signal, test time was about 10.000000 seconds 00:18:27.940 00:18:27.940 Latency(us) 00:18:27.940 [2024-11-26T19:59:18.878Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:27.940 [2024-11-26T19:59:18.878Z] =================================================================================================================== 00:18:27.940 [2024-11-26T19:59:18.878Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:27.940 20:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3992503 00:18:28.198 20:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:28.198 20:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:28.198 20:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:28.198 20:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:28.198 20:59:18 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:28.198 20:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.cGNyNYf3DZ 00:18:28.198 20:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:28.198 20:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.cGNyNYf3DZ 00:18:28.198 20:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:28.198 20:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:28.198 20:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:28.198 20:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:28.198 20:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.cGNyNYf3DZ 00:18:28.198 20:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:28.198 20:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:28.198 20:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:28.198 20:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.cGNyNYf3DZ 00:18:28.198 20:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:28.198 20:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3992615 00:18:28.199 20:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 
'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:28.199 20:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:28.199 20:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3992615 /var/tmp/bdevperf.sock 00:18:28.199 20:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3992615 ']' 00:18:28.199 20:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:28.199 20:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:28.199 20:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:28.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:28.199 20:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:28.199 20:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:28.199 [2024-11-26 20:59:19.004541] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:18:28.199 [2024-11-26 20:59:19.004633] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3992615 ] 00:18:28.199 [2024-11-26 20:59:19.070517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.199 [2024-11-26 20:59:19.127124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:28.460 20:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:28.460 20:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:28.460 20:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.cGNyNYf3DZ 00:18:28.718 20:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:28.976 [2024-11-26 20:59:19.769655] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:28.976 [2024-11-26 20:59:19.777501] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:28.976 [2024-11-26 20:59:19.777534] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:28.976 [2024-11-26 20:59:19.777590] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:18:28.976 [2024-11-26 20:59:19.777868] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e42f0 (107): Transport endpoint is not connected 00:18:28.976 [2024-11-26 20:59:19.778858] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e42f0 (9): Bad file descriptor 00:18:28.976 [2024-11-26 20:59:19.779858] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:18:28.976 [2024-11-26 20:59:19.779879] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:28.976 [2024-11-26 20:59:19.779894] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:18:28.976 [2024-11-26 20:59:19.779910] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:18:28.976 request: 00:18:28.976 { 00:18:28.976 "name": "TLSTEST", 00:18:28.976 "trtype": "tcp", 00:18:28.976 "traddr": "10.0.0.2", 00:18:28.976 "adrfam": "ipv4", 00:18:28.976 "trsvcid": "4420", 00:18:28.976 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:28.976 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:28.976 "prchk_reftag": false, 00:18:28.976 "prchk_guard": false, 00:18:28.976 "hdgst": false, 00:18:28.976 "ddgst": false, 00:18:28.976 "psk": "key0", 00:18:28.976 "allow_unrecognized_csi": false, 00:18:28.976 "method": "bdev_nvme_attach_controller", 00:18:28.976 "req_id": 1 00:18:28.976 } 00:18:28.976 Got JSON-RPC error response 00:18:28.976 response: 00:18:28.976 { 00:18:28.976 "code": -5, 00:18:28.976 "message": "Input/output error" 00:18:28.976 } 00:18:28.976 20:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3992615 00:18:28.976 20:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3992615 ']' 00:18:28.976 20:59:19 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3992615 00:18:28.976 20:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:28.976 20:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:28.976 20:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3992615 00:18:28.976 20:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:28.976 20:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:28.976 20:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3992615' 00:18:28.976 killing process with pid 3992615 00:18:28.976 20:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3992615 00:18:28.976 Received shutdown signal, test time was about 10.000000 seconds 00:18:28.976 00:18:28.976 Latency(us) 00:18:28.976 [2024-11-26T19:59:19.914Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:28.976 [2024-11-26T19:59:19.915Z] =================================================================================================================== 00:18:28.977 [2024-11-26T19:59:19.915Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:28.977 20:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3992615 00:18:29.235 20:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:29.235 20:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:29.235 20:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:29.235 20:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:29.235 20:59:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:29.235 20:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:29.235 20:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:29.235 20:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:29.235 20:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:29.235 20:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:29.235 20:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:29.235 20:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:29.235 20:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:29.235 20:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:29.236 20:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:29.236 20:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:29.236 20:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:18:29.236 20:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:29.236 20:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3992750 00:18:29.236 20:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 
0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:29.236 20:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:29.236 20:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3992750 /var/tmp/bdevperf.sock 00:18:29.236 20:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3992750 ']' 00:18:29.236 20:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:29.236 20:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:29.236 20:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:29.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:29.236 20:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:29.236 20:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:29.236 [2024-11-26 20:59:20.120797] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:18:29.236 [2024-11-26 20:59:20.120897] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3992750 ] 00:18:29.494 [2024-11-26 20:59:20.193857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:29.494 [2024-11-26 20:59:20.251045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:29.494 20:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:29.494 20:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:29.494 20:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:18:29.752 [2024-11-26 20:59:20.633192] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:18:29.752 [2024-11-26 20:59:20.633231] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:29.752 request: 00:18:29.752 { 00:18:29.752 "name": "key0", 00:18:29.752 "path": "", 00:18:29.752 "method": "keyring_file_add_key", 00:18:29.752 "req_id": 1 00:18:29.752 } 00:18:29.752 Got JSON-RPC error response 00:18:29.752 response: 00:18:29.752 { 00:18:29.752 "code": -1, 00:18:29.752 "message": "Operation not permitted" 00:18:29.752 } 00:18:29.752 20:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:30.010 [2024-11-26 20:59:20.902043] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:18:30.010 [2024-11-26 20:59:20.902102] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:30.010 request: 00:18:30.010 { 00:18:30.010 "name": "TLSTEST", 00:18:30.010 "trtype": "tcp", 00:18:30.010 "traddr": "10.0.0.2", 00:18:30.010 "adrfam": "ipv4", 00:18:30.010 "trsvcid": "4420", 00:18:30.010 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:30.011 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:30.011 "prchk_reftag": false, 00:18:30.011 "prchk_guard": false, 00:18:30.011 "hdgst": false, 00:18:30.011 "ddgst": false, 00:18:30.011 "psk": "key0", 00:18:30.011 "allow_unrecognized_csi": false, 00:18:30.011 "method": "bdev_nvme_attach_controller", 00:18:30.011 "req_id": 1 00:18:30.011 } 00:18:30.011 Got JSON-RPC error response 00:18:30.011 response: 00:18:30.011 { 00:18:30.011 "code": -126, 00:18:30.011 "message": "Required key not available" 00:18:30.011 } 00:18:30.011 20:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3992750 00:18:30.011 20:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3992750 ']' 00:18:30.011 20:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3992750 00:18:30.011 20:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:30.011 20:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:30.011 20:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3992750 00:18:30.269 20:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:30.269 20:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:30.269 20:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3992750' 00:18:30.269 killing process with pid 3992750 
00:18:30.269 20:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3992750 00:18:30.269 Received shutdown signal, test time was about 10.000000 seconds 00:18:30.269 00:18:30.269 Latency(us) 00:18:30.269 [2024-11-26T19:59:21.207Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:30.269 [2024-11-26T19:59:21.207Z] =================================================================================================================== 00:18:30.269 [2024-11-26T19:59:21.207Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:30.269 20:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3992750 00:18:30.269 20:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:30.269 20:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:30.269 20:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:30.269 20:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:30.269 20:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:30.269 20:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 3989133 00:18:30.269 20:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3989133 ']' 00:18:30.269 20:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3989133 00:18:30.269 20:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:30.269 20:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:30.269 20:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3989133 00:18:30.269 20:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- 
# process_name=reactor_1 00:18:30.269 20:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:30.269 20:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3989133' 00:18:30.270 killing process with pid 3989133 00:18:30.270 20:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3989133 00:18:30.270 20:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3989133 00:18:30.528 20:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:30.528 20:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:18:30.528 20:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:30.528 20:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:30.528 20:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:30.528 20:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:18:30.528 20:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:30.528 20:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:30.528 20:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:18:30.528 20:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.2CObFda196 00:18:30.528 20:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:30.528 20:59:21 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.2CObFda196 00:18:30.786 20:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:18:30.786 20:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:30.786 20:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:30.786 20:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:30.786 20:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3993025 00:18:30.786 20:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3993025 00:18:30.786 20:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3993025 ']' 00:18:30.787 20:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:30.787 20:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:30.787 20:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:30.787 20:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:30.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:30.787 20:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:30.787 20:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:30.787 [2024-11-26 20:59:21.523797] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:18:30.787 [2024-11-26 20:59:21.523900] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:30.787 [2024-11-26 20:59:21.597575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.787 [2024-11-26 20:59:21.654776] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:30.787 [2024-11-26 20:59:21.654863] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:30.787 [2024-11-26 20:59:21.654891] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:30.787 [2024-11-26 20:59:21.654902] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:30.787 [2024-11-26 20:59:21.654911] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:30.787 [2024-11-26 20:59:21.655527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:31.046 20:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:31.046 20:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:31.046 20:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:31.046 20:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:31.046 20:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:31.046 20:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:31.046 20:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.2CObFda196 00:18:31.046 20:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.2CObFda196 00:18:31.046 20:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:31.304 [2024-11-26 20:59:22.107605] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:31.304 20:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:31.563 20:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:31.821 [2024-11-26 20:59:22.737297] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:31.821 [2024-11-26 20:59:22.737581] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:18:32.080 20:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:32.080 malloc0 00:18:32.338 20:59:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:32.596 20:59:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.2CObFda196 00:18:32.855 20:59:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:33.112 20:59:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2CObFda196 00:18:33.112 20:59:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:33.112 20:59:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:33.112 20:59:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:33.112 20:59:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.2CObFda196 00:18:33.113 20:59:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:33.113 20:59:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3993314 00:18:33.113 20:59:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:33.113 20:59:23 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:33.113 20:59:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3993314 /var/tmp/bdevperf.sock 00:18:33.113 20:59:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3993314 ']' 00:18:33.113 20:59:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:33.113 20:59:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:33.113 20:59:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:33.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:33.113 20:59:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:33.113 20:59:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:33.113 [2024-11-26 20:59:24.010521] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:18:33.113 [2024-11-26 20:59:24.010596] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3993314 ] 00:18:33.371 [2024-11-26 20:59:24.075891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.371 [2024-11-26 20:59:24.135063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:33.371 20:59:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:33.371 20:59:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:33.371 20:59:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.2CObFda196 00:18:33.629 20:59:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:33.886 [2024-11-26 20:59:24.776420] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:34.144 TLSTESTn1 00:18:34.144 20:59:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:34.144 Running I/O for 10 seconds... 
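In the bdevperf results that follow, the MiB/s column is derived directly from IOPS and the 4 KiB I/O size passed via `-o 4096`: MiB/s = IOPS × 4096 / 2^20. A quick check of that conversion against the run's final figure:

```shell
# Reproduce the IOPS -> MiB/s conversion bdevperf reports (io_size = 4096 bytes).
iops=3425.13   # final IOPS figure from this run's result summary
io_size=4096
mibps=$(awk -v i="$iops" -v s="$io_size" 'BEGIN { printf "%.2f", i * s / (1024 * 1024) }')
echo "$mibps MiB/s"
```

With a 4096-byte I/O this reduces to IOPS/256, matching the 13.38 MiB/s the summary prints for 3425.13 IOPS.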
00:18:36.451 3257.00 IOPS, 12.72 MiB/s [2024-11-26T19:59:28.322Z] 3351.50 IOPS, 13.09 MiB/s [2024-11-26T19:59:29.257Z] 3371.33 IOPS, 13.17 MiB/s [2024-11-26T19:59:30.189Z] 3403.00 IOPS, 13.29 MiB/s [2024-11-26T19:59:31.122Z] 3386.60 IOPS, 13.23 MiB/s [2024-11-26T19:59:32.054Z] 3403.00 IOPS, 13.29 MiB/s [2024-11-26T19:59:33.426Z] 3406.29 IOPS, 13.31 MiB/s [2024-11-26T19:59:34.359Z] 3408.00 IOPS, 13.31 MiB/s [2024-11-26T19:59:35.294Z] 3415.78 IOPS, 13.34 MiB/s [2024-11-26T19:59:35.294Z] 3419.50 IOPS, 13.36 MiB/s 00:18:44.356 Latency(us) 00:18:44.356 [2024-11-26T19:59:35.294Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:44.356 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:44.356 Verification LBA range: start 0x0 length 0x2000 00:18:44.356 TLSTESTn1 : 10.02 3425.13 13.38 0.00 0.00 37307.25 7524.50 34758.35 00:18:44.356 [2024-11-26T19:59:35.294Z] =================================================================================================================== 00:18:44.356 [2024-11-26T19:59:35.294Z] Total : 3425.13 13.38 0.00 0.00 37307.25 7524.50 34758.35 00:18:44.356 { 00:18:44.356 "results": [ 00:18:44.356 { 00:18:44.356 "job": "TLSTESTn1", 00:18:44.356 "core_mask": "0x4", 00:18:44.356 "workload": "verify", 00:18:44.356 "status": "finished", 00:18:44.356 "verify_range": { 00:18:44.356 "start": 0, 00:18:44.356 "length": 8192 00:18:44.356 }, 00:18:44.356 "queue_depth": 128, 00:18:44.356 "io_size": 4096, 00:18:44.356 "runtime": 10.020646, 00:18:44.356 "iops": 3425.1284797407275, 00:18:44.356 "mibps": 13.379408123987217, 00:18:44.356 "io_failed": 0, 00:18:44.356 "io_timeout": 0, 00:18:44.356 "avg_latency_us": 37307.247087647054, 00:18:44.356 "min_latency_us": 7524.503703703704, 00:18:44.356 "max_latency_us": 34758.35259259259 00:18:44.356 } 00:18:44.356 ], 00:18:44.356 "core_count": 1 00:18:44.356 } 00:18:44.356 20:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:18:44.356 20:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3993314 00:18:44.356 20:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3993314 ']' 00:18:44.356 20:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3993314 00:18:44.356 20:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:44.356 20:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:44.356 20:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3993314 00:18:44.356 20:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:44.356 20:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:44.356 20:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3993314' 00:18:44.356 killing process with pid 3993314 00:18:44.356 20:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3993314 00:18:44.356 Received shutdown signal, test time was about 10.000000 seconds 00:18:44.356 00:18:44.356 Latency(us) 00:18:44.356 [2024-11-26T19:59:35.294Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:44.356 [2024-11-26T19:59:35.294Z] =================================================================================================================== 00:18:44.356 [2024-11-26T19:59:35.294Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:44.356 20:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3993314 00:18:44.615 20:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.2CObFda196 00:18:44.615 20:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2CObFda196 00:18:44.615 20:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:44.615 20:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2CObFda196 00:18:44.615 20:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:44.615 20:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:44.615 20:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:44.615 20:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:44.615 20:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2CObFda196 00:18:44.615 20:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:44.615 20:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:44.615 20:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:44.615 20:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.2CObFda196 00:18:44.615 20:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:44.615 20:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3994630 00:18:44.615 20:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:44.615 
20:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:44.615 20:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3994630 /var/tmp/bdevperf.sock 00:18:44.615 20:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3994630 ']' 00:18:44.615 20:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:44.615 20:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:44.615 20:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:44.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:44.615 20:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:44.615 20:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:44.615 [2024-11-26 20:59:35.369578] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
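The `keyring_file_add_key` failure that follows is the point of this negative test: tls.sh@171 loosened the key file to 0666, and the file-based keyring rejects anything broader than owner-only access ("Invalid permissions for key file ... 0100666" — the logged value is the full `st_mode`, including file-type bits). A rough shell equivalent of that check (`stat -c %a` prints just the permission octal, without the type bits):

```shell
# Sketch of the owner-only permission check SPDK's file keyring applies.
keyfile=$(mktemp)
printf '%s' "dummy-key-material" > "$keyfile"   # placeholder contents, not a real PSK
chmod 0666 "$keyfile"
mode=$(stat -c %a "$keyfile")
if [ "$mode" != "600" ]; then
    echo "Invalid permissions for key file '$keyfile': $mode"
fi
chmod 0600 "$keyfile"   # what tls.sh@182 later does before reusing the key
mode_ok=$(stat -c %a "$keyfile")
```

This is why the second bdevperf attempt below fails with "Required key not available": the key never makes it into the keyring while the file is world-readable.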
00:18:44.615 [2024-11-26 20:59:35.369655] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3994630 ] 00:18:44.615 [2024-11-26 20:59:35.435982] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:44.615 [2024-11-26 20:59:35.491573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:44.873 20:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:44.873 20:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:44.873 20:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.2CObFda196 00:18:45.131 [2024-11-26 20:59:35.851012] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.2CObFda196': 0100666 00:18:45.131 [2024-11-26 20:59:35.851056] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:45.131 request: 00:18:45.131 { 00:18:45.131 "name": "key0", 00:18:45.131 "path": "/tmp/tmp.2CObFda196", 00:18:45.132 "method": "keyring_file_add_key", 00:18:45.132 "req_id": 1 00:18:45.132 } 00:18:45.132 Got JSON-RPC error response 00:18:45.132 response: 00:18:45.132 { 00:18:45.132 "code": -1, 00:18:45.132 "message": "Operation not permitted" 00:18:45.132 } 00:18:45.132 20:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:45.390 [2024-11-26 20:59:36.127877] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:45.390 [2024-11-26 20:59:36.127926] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:45.390 request: 00:18:45.390 { 00:18:45.390 "name": "TLSTEST", 00:18:45.390 "trtype": "tcp", 00:18:45.390 "traddr": "10.0.0.2", 00:18:45.390 "adrfam": "ipv4", 00:18:45.390 "trsvcid": "4420", 00:18:45.390 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:45.390 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:45.390 "prchk_reftag": false, 00:18:45.390 "prchk_guard": false, 00:18:45.390 "hdgst": false, 00:18:45.390 "ddgst": false, 00:18:45.390 "psk": "key0", 00:18:45.390 "allow_unrecognized_csi": false, 00:18:45.390 "method": "bdev_nvme_attach_controller", 00:18:45.390 "req_id": 1 00:18:45.390 } 00:18:45.390 Got JSON-RPC error response 00:18:45.390 response: 00:18:45.390 { 00:18:45.390 "code": -126, 00:18:45.390 "message": "Required key not available" 00:18:45.390 } 00:18:45.390 20:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3994630 00:18:45.390 20:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3994630 ']' 00:18:45.390 20:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3994630 00:18:45.390 20:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:45.390 20:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:45.390 20:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3994630 00:18:45.390 20:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:45.390 20:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:45.390 20:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 3994630' 00:18:45.390 killing process with pid 3994630 00:18:45.390 20:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3994630 00:18:45.390 Received shutdown signal, test time was about 10.000000 seconds 00:18:45.390 00:18:45.390 Latency(us) 00:18:45.390 [2024-11-26T19:59:36.328Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:45.390 [2024-11-26T19:59:36.328Z] =================================================================================================================== 00:18:45.390 [2024-11-26T19:59:36.328Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:45.390 20:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3994630 00:18:45.649 20:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:45.649 20:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:45.649 20:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:45.649 20:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:45.649 20:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:45.649 20:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 3993025 00:18:45.649 20:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3993025 ']' 00:18:45.649 20:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3993025 00:18:45.649 20:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:45.649 20:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:45.649 20:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3993025 00:18:45.649 
20:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:45.649 20:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:45.649 20:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3993025' 00:18:45.649 killing process with pid 3993025 00:18:45.649 20:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3993025 00:18:45.649 20:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3993025 00:18:45.909 20:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:18:45.909 20:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:45.909 20:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:45.909 20:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:45.909 20:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3994782 00:18:45.909 20:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:45.909 20:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3994782 00:18:45.909 20:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3994782 ']' 00:18:45.909 20:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:45.909 20:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:45.909 20:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:18:45.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:45.909 20:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:45.909 20:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:45.909 [2024-11-26 20:59:36.727416] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:18:45.909 [2024-11-26 20:59:36.727528] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:45.909 [2024-11-26 20:59:36.800940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.167 [2024-11-26 20:59:36.859100] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:46.167 [2024-11-26 20:59:36.859174] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:46.167 [2024-11-26 20:59:36.859202] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:46.167 [2024-11-26 20:59:36.859213] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:46.167 [2024-11-26 20:59:36.859222] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:46.167 [2024-11-26 20:59:36.859865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:46.167 20:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:46.167 20:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:46.167 20:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:46.167 20:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:46.167 20:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:46.167 20:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:46.167 20:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.2CObFda196 00:18:46.167 20:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:46.167 20:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.2CObFda196 00:18:46.167 20:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:18:46.167 20:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:46.167 20:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:18:46.167 20:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:46.167 20:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.2CObFda196 00:18:46.167 20:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.2CObFda196 00:18:46.167 20:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:46.425 [2024-11-26 20:59:37.310707] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:46.425 20:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:46.992 20:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:46.992 [2024-11-26 20:59:37.876233] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:46.992 [2024-11-26 20:59:37.876549] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:46.992 20:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:47.250 malloc0 00:18:47.250 20:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:47.861 20:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.2CObFda196 00:18:48.137 [2024-11-26 20:59:38.790452] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.2CObFda196': 0100666 00:18:48.137 [2024-11-26 20:59:38.790493] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:48.137 request: 00:18:48.137 { 00:18:48.137 "name": "key0", 00:18:48.137 "path": "/tmp/tmp.2CObFda196", 00:18:48.137 "method": "keyring_file_add_key", 00:18:48.137 "req_id": 1 
00:18:48.137 } 00:18:48.137 Got JSON-RPC error response 00:18:48.137 response: 00:18:48.137 { 00:18:48.137 "code": -1, 00:18:48.137 "message": "Operation not permitted" 00:18:48.137 } 00:18:48.137 20:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:48.137 [2024-11-26 20:59:39.067236] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:18:48.137 [2024-11-26 20:59:39.067320] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:18:48.137 request: 00:18:48.137 { 00:18:48.137 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:48.137 "host": "nqn.2016-06.io.spdk:host1", 00:18:48.137 "psk": "key0", 00:18:48.137 "method": "nvmf_subsystem_add_host", 00:18:48.137 "req_id": 1 00:18:48.137 } 00:18:48.137 Got JSON-RPC error response 00:18:48.137 response: 00:18:48.137 { 00:18:48.137 "code": -32603, 00:18:48.137 "message": "Internal error" 00:18:48.137 } 00:18:48.395 20:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:48.396 20:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:48.396 20:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:48.396 20:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:48.396 20:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 3994782 00:18:48.396 20:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3994782 ']' 00:18:48.396 20:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3994782 00:18:48.396 20:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:48.396 20:59:39 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:48.396 20:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3994782 00:18:48.396 20:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:48.396 20:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:48.396 20:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3994782' 00:18:48.396 killing process with pid 3994782 00:18:48.396 20:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3994782 00:18:48.396 20:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3994782 00:18:48.654 20:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.2CObFda196 00:18:48.654 20:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:18:48.654 20:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:48.654 20:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:48.654 20:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:48.654 20:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3995088 00:18:48.654 20:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:48.654 20:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3995088 00:18:48.654 20:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3995088 ']' 00:18:48.654 20:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:48.654 20:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:48.654 20:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:48.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:48.654 20:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:48.654 20:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:48.654 [2024-11-26 20:59:39.427300] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:18:48.654 [2024-11-26 20:59:39.427378] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:48.654 [2024-11-26 20:59:39.502965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:48.654 [2024-11-26 20:59:39.559552] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:48.654 [2024-11-26 20:59:39.559613] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:48.654 [2024-11-26 20:59:39.559641] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:48.654 [2024-11-26 20:59:39.559653] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:48.654 [2024-11-26 20:59:39.559662] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:48.654 [2024-11-26 20:59:39.560343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:48.912 20:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:48.912 20:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:48.912 20:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:48.912 20:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:48.912 20:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:48.912 20:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:48.912 20:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.2CObFda196 00:18:48.912 20:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.2CObFda196 00:18:48.912 20:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:49.171 [2024-11-26 20:59:39.965743] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:49.171 20:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:49.429 20:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:49.687 [2024-11-26 20:59:40.623531] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:49.688 [2024-11-26 20:59:40.623842] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:18:49.946 20:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:50.204 malloc0 00:18:50.204 20:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:50.461 20:59:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.2CObFda196 00:18:50.717 20:59:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:50.975 20:59:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=3995403 00:18:50.975 20:59:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:50.975 20:59:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:50.975 20:59:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 3995403 /var/tmp/bdevperf.sock 00:18:50.975 20:59:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3995403 ']' 00:18:50.975 20:59:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:50.975 20:59:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:50.975 20:59:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:18:50.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:50.975 20:59:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:50.975 20:59:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:50.975 [2024-11-26 20:59:41.795006] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:18:50.976 [2024-11-26 20:59:41.795096] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3995403 ] 00:18:50.976 [2024-11-26 20:59:41.861854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:51.234 [2024-11-26 20:59:41.919851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:51.234 20:59:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:51.234 20:59:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:51.234 20:59:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.2CObFda196 00:18:51.492 20:59:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:51.749 [2024-11-26 20:59:42.555297] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:51.750 TLSTESTn1 00:18:51.750 20:59:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:18:52.315 20:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:18:52.315 "subsystems": [ 00:18:52.315 { 00:18:52.315 "subsystem": "keyring", 00:18:52.315 "config": [ 00:18:52.315 { 00:18:52.315 "method": "keyring_file_add_key", 00:18:52.315 "params": { 00:18:52.315 "name": "key0", 00:18:52.315 "path": "/tmp/tmp.2CObFda196" 00:18:52.315 } 00:18:52.315 } 00:18:52.315 ] 00:18:52.315 }, 00:18:52.315 { 00:18:52.315 "subsystem": "iobuf", 00:18:52.315 "config": [ 00:18:52.315 { 00:18:52.315 "method": "iobuf_set_options", 00:18:52.315 "params": { 00:18:52.315 "small_pool_count": 8192, 00:18:52.315 "large_pool_count": 1024, 00:18:52.315 "small_bufsize": 8192, 00:18:52.315 "large_bufsize": 135168, 00:18:52.315 "enable_numa": false 00:18:52.315 } 00:18:52.315 } 00:18:52.315 ] 00:18:52.315 }, 00:18:52.315 { 00:18:52.315 "subsystem": "sock", 00:18:52.315 "config": [ 00:18:52.315 { 00:18:52.315 "method": "sock_set_default_impl", 00:18:52.315 "params": { 00:18:52.315 "impl_name": "posix" 00:18:52.315 } 00:18:52.315 }, 00:18:52.315 { 00:18:52.315 "method": "sock_impl_set_options", 00:18:52.315 "params": { 00:18:52.315 "impl_name": "ssl", 00:18:52.315 "recv_buf_size": 4096, 00:18:52.315 "send_buf_size": 4096, 00:18:52.315 "enable_recv_pipe": true, 00:18:52.315 "enable_quickack": false, 00:18:52.315 "enable_placement_id": 0, 00:18:52.315 "enable_zerocopy_send_server": true, 00:18:52.315 "enable_zerocopy_send_client": false, 00:18:52.315 "zerocopy_threshold": 0, 00:18:52.315 "tls_version": 0, 00:18:52.315 "enable_ktls": false 00:18:52.315 } 00:18:52.315 }, 00:18:52.315 { 00:18:52.315 "method": "sock_impl_set_options", 00:18:52.315 "params": { 00:18:52.315 "impl_name": "posix", 00:18:52.315 "recv_buf_size": 2097152, 00:18:52.315 "send_buf_size": 2097152, 00:18:52.315 "enable_recv_pipe": true, 00:18:52.315 "enable_quickack": false, 00:18:52.315 "enable_placement_id": 0, 
00:18:52.315 "enable_zerocopy_send_server": true, 00:18:52.315 "enable_zerocopy_send_client": false, 00:18:52.315 "zerocopy_threshold": 0, 00:18:52.315 "tls_version": 0, 00:18:52.315 "enable_ktls": false 00:18:52.315 } 00:18:52.315 } 00:18:52.315 ] 00:18:52.315 }, 00:18:52.315 { 00:18:52.315 "subsystem": "vmd", 00:18:52.315 "config": [] 00:18:52.315 }, 00:18:52.315 { 00:18:52.315 "subsystem": "accel", 00:18:52.315 "config": [ 00:18:52.315 { 00:18:52.315 "method": "accel_set_options", 00:18:52.315 "params": { 00:18:52.315 "small_cache_size": 128, 00:18:52.315 "large_cache_size": 16, 00:18:52.315 "task_count": 2048, 00:18:52.315 "sequence_count": 2048, 00:18:52.315 "buf_count": 2048 00:18:52.315 } 00:18:52.315 } 00:18:52.315 ] 00:18:52.315 }, 00:18:52.315 { 00:18:52.315 "subsystem": "bdev", 00:18:52.315 "config": [ 00:18:52.315 { 00:18:52.315 "method": "bdev_set_options", 00:18:52.315 "params": { 00:18:52.315 "bdev_io_pool_size": 65535, 00:18:52.315 "bdev_io_cache_size": 256, 00:18:52.315 "bdev_auto_examine": true, 00:18:52.315 "iobuf_small_cache_size": 128, 00:18:52.315 "iobuf_large_cache_size": 16 00:18:52.315 } 00:18:52.315 }, 00:18:52.315 { 00:18:52.315 "method": "bdev_raid_set_options", 00:18:52.315 "params": { 00:18:52.315 "process_window_size_kb": 1024, 00:18:52.316 "process_max_bandwidth_mb_sec": 0 00:18:52.316 } 00:18:52.316 }, 00:18:52.316 { 00:18:52.316 "method": "bdev_iscsi_set_options", 00:18:52.316 "params": { 00:18:52.316 "timeout_sec": 30 00:18:52.316 } 00:18:52.316 }, 00:18:52.316 { 00:18:52.316 "method": "bdev_nvme_set_options", 00:18:52.316 "params": { 00:18:52.316 "action_on_timeout": "none", 00:18:52.316 "timeout_us": 0, 00:18:52.316 "timeout_admin_us": 0, 00:18:52.316 "keep_alive_timeout_ms": 10000, 00:18:52.316 "arbitration_burst": 0, 00:18:52.316 "low_priority_weight": 0, 00:18:52.316 "medium_priority_weight": 0, 00:18:52.316 "high_priority_weight": 0, 00:18:52.316 "nvme_adminq_poll_period_us": 10000, 00:18:52.316 "nvme_ioq_poll_period_us": 0, 
00:18:52.316 "io_queue_requests": 0, 00:18:52.316 "delay_cmd_submit": true, 00:18:52.316 "transport_retry_count": 4, 00:18:52.316 "bdev_retry_count": 3, 00:18:52.316 "transport_ack_timeout": 0, 00:18:52.316 "ctrlr_loss_timeout_sec": 0, 00:18:52.316 "reconnect_delay_sec": 0, 00:18:52.316 "fast_io_fail_timeout_sec": 0, 00:18:52.316 "disable_auto_failback": false, 00:18:52.316 "generate_uuids": false, 00:18:52.316 "transport_tos": 0, 00:18:52.316 "nvme_error_stat": false, 00:18:52.316 "rdma_srq_size": 0, 00:18:52.316 "io_path_stat": false, 00:18:52.316 "allow_accel_sequence": false, 00:18:52.316 "rdma_max_cq_size": 0, 00:18:52.316 "rdma_cm_event_timeout_ms": 0, 00:18:52.316 "dhchap_digests": [ 00:18:52.316 "sha256", 00:18:52.316 "sha384", 00:18:52.316 "sha512" 00:18:52.316 ], 00:18:52.316 "dhchap_dhgroups": [ 00:18:52.316 "null", 00:18:52.316 "ffdhe2048", 00:18:52.316 "ffdhe3072", 00:18:52.316 "ffdhe4096", 00:18:52.316 "ffdhe6144", 00:18:52.316 "ffdhe8192" 00:18:52.316 ] 00:18:52.316 } 00:18:52.316 }, 00:18:52.316 { 00:18:52.316 "method": "bdev_nvme_set_hotplug", 00:18:52.316 "params": { 00:18:52.316 "period_us": 100000, 00:18:52.316 "enable": false 00:18:52.316 } 00:18:52.316 }, 00:18:52.316 { 00:18:52.316 "method": "bdev_malloc_create", 00:18:52.316 "params": { 00:18:52.316 "name": "malloc0", 00:18:52.316 "num_blocks": 8192, 00:18:52.316 "block_size": 4096, 00:18:52.316 "physical_block_size": 4096, 00:18:52.316 "uuid": "938c2a62-8cea-4adb-a259-81513b810e25", 00:18:52.316 "optimal_io_boundary": 0, 00:18:52.316 "md_size": 0, 00:18:52.316 "dif_type": 0, 00:18:52.316 "dif_is_head_of_md": false, 00:18:52.316 "dif_pi_format": 0 00:18:52.316 } 00:18:52.316 }, 00:18:52.316 { 00:18:52.316 "method": "bdev_wait_for_examine" 00:18:52.316 } 00:18:52.316 ] 00:18:52.316 }, 00:18:52.316 { 00:18:52.316 "subsystem": "nbd", 00:18:52.316 "config": [] 00:18:52.316 }, 00:18:52.316 { 00:18:52.316 "subsystem": "scheduler", 00:18:52.316 "config": [ 00:18:52.316 { 00:18:52.316 "method": 
"framework_set_scheduler", 00:18:52.316 "params": { 00:18:52.316 "name": "static" 00:18:52.316 } 00:18:52.316 } 00:18:52.316 ] 00:18:52.316 }, 00:18:52.316 { 00:18:52.316 "subsystem": "nvmf", 00:18:52.316 "config": [ 00:18:52.316 { 00:18:52.316 "method": "nvmf_set_config", 00:18:52.316 "params": { 00:18:52.316 "discovery_filter": "match_any", 00:18:52.316 "admin_cmd_passthru": { 00:18:52.316 "identify_ctrlr": false 00:18:52.316 }, 00:18:52.316 "dhchap_digests": [ 00:18:52.316 "sha256", 00:18:52.316 "sha384", 00:18:52.316 "sha512" 00:18:52.316 ], 00:18:52.316 "dhchap_dhgroups": [ 00:18:52.316 "null", 00:18:52.316 "ffdhe2048", 00:18:52.316 "ffdhe3072", 00:18:52.316 "ffdhe4096", 00:18:52.316 "ffdhe6144", 00:18:52.316 "ffdhe8192" 00:18:52.316 ] 00:18:52.316 } 00:18:52.316 }, 00:18:52.316 { 00:18:52.316 "method": "nvmf_set_max_subsystems", 00:18:52.316 "params": { 00:18:52.316 "max_subsystems": 1024 00:18:52.316 } 00:18:52.316 }, 00:18:52.316 { 00:18:52.316 "method": "nvmf_set_crdt", 00:18:52.316 "params": { 00:18:52.316 "crdt1": 0, 00:18:52.316 "crdt2": 0, 00:18:52.316 "crdt3": 0 00:18:52.316 } 00:18:52.316 }, 00:18:52.316 { 00:18:52.316 "method": "nvmf_create_transport", 00:18:52.316 "params": { 00:18:52.316 "trtype": "TCP", 00:18:52.316 "max_queue_depth": 128, 00:18:52.316 "max_io_qpairs_per_ctrlr": 127, 00:18:52.316 "in_capsule_data_size": 4096, 00:18:52.316 "max_io_size": 131072, 00:18:52.316 "io_unit_size": 131072, 00:18:52.316 "max_aq_depth": 128, 00:18:52.316 "num_shared_buffers": 511, 00:18:52.316 "buf_cache_size": 4294967295, 00:18:52.316 "dif_insert_or_strip": false, 00:18:52.316 "zcopy": false, 00:18:52.316 "c2h_success": false, 00:18:52.316 "sock_priority": 0, 00:18:52.316 "abort_timeout_sec": 1, 00:18:52.316 "ack_timeout": 0, 00:18:52.316 "data_wr_pool_size": 0 00:18:52.316 } 00:18:52.316 }, 00:18:52.316 { 00:18:52.316 "method": "nvmf_create_subsystem", 00:18:52.316 "params": { 00:18:52.316 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:52.316 
"allow_any_host": false, 00:18:52.316 "serial_number": "SPDK00000000000001", 00:18:52.316 "model_number": "SPDK bdev Controller", 00:18:52.316 "max_namespaces": 10, 00:18:52.316 "min_cntlid": 1, 00:18:52.316 "max_cntlid": 65519, 00:18:52.316 "ana_reporting": false 00:18:52.316 } 00:18:52.316 }, 00:18:52.316 { 00:18:52.316 "method": "nvmf_subsystem_add_host", 00:18:52.316 "params": { 00:18:52.316 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:52.316 "host": "nqn.2016-06.io.spdk:host1", 00:18:52.316 "psk": "key0" 00:18:52.316 } 00:18:52.316 }, 00:18:52.316 { 00:18:52.316 "method": "nvmf_subsystem_add_ns", 00:18:52.316 "params": { 00:18:52.316 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:52.316 "namespace": { 00:18:52.316 "nsid": 1, 00:18:52.316 "bdev_name": "malloc0", 00:18:52.316 "nguid": "938C2A628CEA4ADBA25981513B810E25", 00:18:52.316 "uuid": "938c2a62-8cea-4adb-a259-81513b810e25", 00:18:52.316 "no_auto_visible": false 00:18:52.316 } 00:18:52.316 } 00:18:52.316 }, 00:18:52.316 { 00:18:52.316 "method": "nvmf_subsystem_add_listener", 00:18:52.316 "params": { 00:18:52.316 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:52.316 "listen_address": { 00:18:52.316 "trtype": "TCP", 00:18:52.316 "adrfam": "IPv4", 00:18:52.316 "traddr": "10.0.0.2", 00:18:52.316 "trsvcid": "4420" 00:18:52.316 }, 00:18:52.316 "secure_channel": true 00:18:52.316 } 00:18:52.316 } 00:18:52.316 ] 00:18:52.316 } 00:18:52.316 ] 00:18:52.316 }' 00:18:52.316 20:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:52.575 20:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:18:52.575 "subsystems": [ 00:18:52.575 { 00:18:52.575 "subsystem": "keyring", 00:18:52.575 "config": [ 00:18:52.575 { 00:18:52.575 "method": "keyring_file_add_key", 00:18:52.575 "params": { 00:18:52.575 "name": "key0", 00:18:52.575 "path": "/tmp/tmp.2CObFda196" 00:18:52.575 } 
00:18:52.575 } 00:18:52.575 ] 00:18:52.575 }, 00:18:52.575 { 00:18:52.575 "subsystem": "iobuf", 00:18:52.575 "config": [ 00:18:52.575 { 00:18:52.575 "method": "iobuf_set_options", 00:18:52.575 "params": { 00:18:52.575 "small_pool_count": 8192, 00:18:52.575 "large_pool_count": 1024, 00:18:52.575 "small_bufsize": 8192, 00:18:52.575 "large_bufsize": 135168, 00:18:52.575 "enable_numa": false 00:18:52.575 } 00:18:52.575 } 00:18:52.575 ] 00:18:52.575 }, 00:18:52.575 { 00:18:52.575 "subsystem": "sock", 00:18:52.575 "config": [ 00:18:52.575 { 00:18:52.575 "method": "sock_set_default_impl", 00:18:52.575 "params": { 00:18:52.575 "impl_name": "posix" 00:18:52.575 } 00:18:52.575 }, 00:18:52.575 { 00:18:52.575 "method": "sock_impl_set_options", 00:18:52.575 "params": { 00:18:52.575 "impl_name": "ssl", 00:18:52.575 "recv_buf_size": 4096, 00:18:52.575 "send_buf_size": 4096, 00:18:52.575 "enable_recv_pipe": true, 00:18:52.575 "enable_quickack": false, 00:18:52.575 "enable_placement_id": 0, 00:18:52.575 "enable_zerocopy_send_server": true, 00:18:52.575 "enable_zerocopy_send_client": false, 00:18:52.575 "zerocopy_threshold": 0, 00:18:52.575 "tls_version": 0, 00:18:52.575 "enable_ktls": false 00:18:52.575 } 00:18:52.575 }, 00:18:52.575 { 00:18:52.575 "method": "sock_impl_set_options", 00:18:52.575 "params": { 00:18:52.575 "impl_name": "posix", 00:18:52.575 "recv_buf_size": 2097152, 00:18:52.575 "send_buf_size": 2097152, 00:18:52.575 "enable_recv_pipe": true, 00:18:52.575 "enable_quickack": false, 00:18:52.575 "enable_placement_id": 0, 00:18:52.575 "enable_zerocopy_send_server": true, 00:18:52.575 "enable_zerocopy_send_client": false, 00:18:52.575 "zerocopy_threshold": 0, 00:18:52.575 "tls_version": 0, 00:18:52.575 "enable_ktls": false 00:18:52.575 } 00:18:52.575 } 00:18:52.575 ] 00:18:52.575 }, 00:18:52.575 { 00:18:52.575 "subsystem": "vmd", 00:18:52.575 "config": [] 00:18:52.575 }, 00:18:52.575 { 00:18:52.575 "subsystem": "accel", 00:18:52.575 "config": [ 00:18:52.575 { 00:18:52.575 
"method": "accel_set_options", 00:18:52.575 "params": { 00:18:52.575 "small_cache_size": 128, 00:18:52.575 "large_cache_size": 16, 00:18:52.575 "task_count": 2048, 00:18:52.575 "sequence_count": 2048, 00:18:52.575 "buf_count": 2048 00:18:52.575 } 00:18:52.575 } 00:18:52.575 ] 00:18:52.575 }, 00:18:52.575 { 00:18:52.575 "subsystem": "bdev", 00:18:52.575 "config": [ 00:18:52.575 { 00:18:52.575 "method": "bdev_set_options", 00:18:52.575 "params": { 00:18:52.575 "bdev_io_pool_size": 65535, 00:18:52.575 "bdev_io_cache_size": 256, 00:18:52.575 "bdev_auto_examine": true, 00:18:52.575 "iobuf_small_cache_size": 128, 00:18:52.575 "iobuf_large_cache_size": 16 00:18:52.575 } 00:18:52.575 }, 00:18:52.575 { 00:18:52.575 "method": "bdev_raid_set_options", 00:18:52.575 "params": { 00:18:52.575 "process_window_size_kb": 1024, 00:18:52.575 "process_max_bandwidth_mb_sec": 0 00:18:52.575 } 00:18:52.575 }, 00:18:52.575 { 00:18:52.575 "method": "bdev_iscsi_set_options", 00:18:52.575 "params": { 00:18:52.575 "timeout_sec": 30 00:18:52.575 } 00:18:52.575 }, 00:18:52.575 { 00:18:52.575 "method": "bdev_nvme_set_options", 00:18:52.575 "params": { 00:18:52.575 "action_on_timeout": "none", 00:18:52.575 "timeout_us": 0, 00:18:52.576 "timeout_admin_us": 0, 00:18:52.576 "keep_alive_timeout_ms": 10000, 00:18:52.576 "arbitration_burst": 0, 00:18:52.576 "low_priority_weight": 0, 00:18:52.576 "medium_priority_weight": 0, 00:18:52.576 "high_priority_weight": 0, 00:18:52.576 "nvme_adminq_poll_period_us": 10000, 00:18:52.576 "nvme_ioq_poll_period_us": 0, 00:18:52.576 "io_queue_requests": 512, 00:18:52.576 "delay_cmd_submit": true, 00:18:52.576 "transport_retry_count": 4, 00:18:52.576 "bdev_retry_count": 3, 00:18:52.576 "transport_ack_timeout": 0, 00:18:52.576 "ctrlr_loss_timeout_sec": 0, 00:18:52.576 "reconnect_delay_sec": 0, 00:18:52.576 "fast_io_fail_timeout_sec": 0, 00:18:52.576 "disable_auto_failback": false, 00:18:52.576 "generate_uuids": false, 00:18:52.576 "transport_tos": 0, 00:18:52.576 
"nvme_error_stat": false, 00:18:52.576 "rdma_srq_size": 0, 00:18:52.576 "io_path_stat": false, 00:18:52.576 "allow_accel_sequence": false, 00:18:52.576 "rdma_max_cq_size": 0, 00:18:52.576 "rdma_cm_event_timeout_ms": 0, 00:18:52.576 "dhchap_digests": [ 00:18:52.576 "sha256", 00:18:52.576 "sha384", 00:18:52.576 "sha512" 00:18:52.576 ], 00:18:52.576 "dhchap_dhgroups": [ 00:18:52.576 "null", 00:18:52.576 "ffdhe2048", 00:18:52.576 "ffdhe3072", 00:18:52.576 "ffdhe4096", 00:18:52.576 "ffdhe6144", 00:18:52.576 "ffdhe8192" 00:18:52.576 ] 00:18:52.576 } 00:18:52.576 }, 00:18:52.576 { 00:18:52.576 "method": "bdev_nvme_attach_controller", 00:18:52.576 "params": { 00:18:52.576 "name": "TLSTEST", 00:18:52.576 "trtype": "TCP", 00:18:52.576 "adrfam": "IPv4", 00:18:52.576 "traddr": "10.0.0.2", 00:18:52.576 "trsvcid": "4420", 00:18:52.576 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:52.576 "prchk_reftag": false, 00:18:52.576 "prchk_guard": false, 00:18:52.576 "ctrlr_loss_timeout_sec": 0, 00:18:52.576 "reconnect_delay_sec": 0, 00:18:52.576 "fast_io_fail_timeout_sec": 0, 00:18:52.576 "psk": "key0", 00:18:52.576 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:52.576 "hdgst": false, 00:18:52.576 "ddgst": false, 00:18:52.576 "multipath": "multipath" 00:18:52.576 } 00:18:52.576 }, 00:18:52.576 { 00:18:52.576 "method": "bdev_nvme_set_hotplug", 00:18:52.576 "params": { 00:18:52.576 "period_us": 100000, 00:18:52.576 "enable": false 00:18:52.576 } 00:18:52.576 }, 00:18:52.576 { 00:18:52.576 "method": "bdev_wait_for_examine" 00:18:52.576 } 00:18:52.576 ] 00:18:52.576 }, 00:18:52.576 { 00:18:52.576 "subsystem": "nbd", 00:18:52.576 "config": [] 00:18:52.576 } 00:18:52.576 ] 00:18:52.576 }' 00:18:52.576 20:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 3995403 00:18:52.576 20:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3995403 ']' 00:18:52.576 20:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- 
# kill -0 3995403 00:18:52.576 20:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:52.576 20:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:52.576 20:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3995403 00:18:52.576 20:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:52.576 20:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:52.576 20:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3995403' 00:18:52.576 killing process with pid 3995403 00:18:52.576 20:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3995403 00:18:52.576 Received shutdown signal, test time was about 10.000000 seconds 00:18:52.576 00:18:52.576 Latency(us) 00:18:52.576 [2024-11-26T19:59:43.514Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:52.576 [2024-11-26T19:59:43.514Z] =================================================================================================================== 00:18:52.576 [2024-11-26T19:59:43.514Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:52.576 20:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3995403 00:18:52.834 20:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 3995088 00:18:52.834 20:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3995088 ']' 00:18:52.834 20:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3995088 00:18:52.834 20:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:52.834 20:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:52.834 20:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3995088 00:18:52.834 20:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:52.834 20:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:52.834 20:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3995088' 00:18:52.834 killing process with pid 3995088 00:18:52.834 20:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3995088 00:18:52.834 20:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3995088 00:18:53.093 20:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:53.093 20:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:53.093 20:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:53.093 20:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:18:53.093 "subsystems": [ 00:18:53.093 { 00:18:53.093 "subsystem": "keyring", 00:18:53.093 "config": [ 00:18:53.093 { 00:18:53.093 "method": "keyring_file_add_key", 00:18:53.093 "params": { 00:18:53.093 "name": "key0", 00:18:53.093 "path": "/tmp/tmp.2CObFda196" 00:18:53.093 } 00:18:53.093 } 00:18:53.093 ] 00:18:53.093 }, 00:18:53.093 { 00:18:53.093 "subsystem": "iobuf", 00:18:53.093 "config": [ 00:18:53.093 { 00:18:53.093 "method": "iobuf_set_options", 00:18:53.093 "params": { 00:18:53.093 "small_pool_count": 8192, 00:18:53.093 "large_pool_count": 1024, 00:18:53.093 "small_bufsize": 8192, 00:18:53.093 "large_bufsize": 135168, 00:18:53.093 "enable_numa": false 00:18:53.093 } 00:18:53.093 } 00:18:53.093 ] 00:18:53.093 }, 
00:18:53.093 { 00:18:53.093 "subsystem": "sock", 00:18:53.093 "config": [ 00:18:53.093 { 00:18:53.093 "method": "sock_set_default_impl", 00:18:53.093 "params": { 00:18:53.093 "impl_name": "posix" 00:18:53.093 } 00:18:53.093 }, 00:18:53.093 { 00:18:53.093 "method": "sock_impl_set_options", 00:18:53.093 "params": { 00:18:53.093 "impl_name": "ssl", 00:18:53.093 "recv_buf_size": 4096, 00:18:53.093 "send_buf_size": 4096, 00:18:53.093 "enable_recv_pipe": true, 00:18:53.093 "enable_quickack": false, 00:18:53.093 "enable_placement_id": 0, 00:18:53.093 "enable_zerocopy_send_server": true, 00:18:53.093 "enable_zerocopy_send_client": false, 00:18:53.093 "zerocopy_threshold": 0, 00:18:53.093 "tls_version": 0, 00:18:53.093 "enable_ktls": false 00:18:53.093 } 00:18:53.093 }, 00:18:53.093 { 00:18:53.093 "method": "sock_impl_set_options", 00:18:53.093 "params": { 00:18:53.093 "impl_name": "posix", 00:18:53.093 "recv_buf_size": 2097152, 00:18:53.093 "send_buf_size": 2097152, 00:18:53.093 "enable_recv_pipe": true, 00:18:53.093 "enable_quickack": false, 00:18:53.093 "enable_placement_id": 0, 00:18:53.093 "enable_zerocopy_send_server": true, 00:18:53.093 "enable_zerocopy_send_client": false, 00:18:53.093 "zerocopy_threshold": 0, 00:18:53.093 "tls_version": 0, 00:18:53.093 "enable_ktls": false 00:18:53.093 } 00:18:53.093 } 00:18:53.093 ] 00:18:53.093 }, 00:18:53.093 { 00:18:53.093 "subsystem": "vmd", 00:18:53.093 "config": [] 00:18:53.093 }, 00:18:53.093 { 00:18:53.093 "subsystem": "accel", 00:18:53.093 "config": [ 00:18:53.093 { 00:18:53.093 "method": "accel_set_options", 00:18:53.093 "params": { 00:18:53.093 "small_cache_size": 128, 00:18:53.093 "large_cache_size": 16, 00:18:53.093 "task_count": 2048, 00:18:53.093 "sequence_count": 2048, 00:18:53.093 "buf_count": 2048 00:18:53.093 } 00:18:53.093 } 00:18:53.093 ] 00:18:53.093 }, 00:18:53.093 { 00:18:53.093 "subsystem": "bdev", 00:18:53.093 "config": [ 00:18:53.093 { 00:18:53.093 "method": "bdev_set_options", 00:18:53.093 "params": { 
00:18:53.093 "bdev_io_pool_size": 65535, 00:18:53.093 "bdev_io_cache_size": 256, 00:18:53.093 "bdev_auto_examine": true, 00:18:53.093 "iobuf_small_cache_size": 128, 00:18:53.093 "iobuf_large_cache_size": 16 00:18:53.093 } 00:18:53.093 }, 00:18:53.093 { 00:18:53.093 "method": "bdev_raid_set_options", 00:18:53.093 "params": { 00:18:53.093 "process_window_size_kb": 1024, 00:18:53.093 "process_max_bandwidth_mb_sec": 0 00:18:53.093 } 00:18:53.093 }, 00:18:53.093 { 00:18:53.093 "method": "bdev_iscsi_set_options", 00:18:53.093 "params": { 00:18:53.093 "timeout_sec": 30 00:18:53.094 } 00:18:53.094 }, 00:18:53.094 { 00:18:53.094 "method": "bdev_nvme_set_options", 00:18:53.094 "params": { 00:18:53.094 "action_on_timeout": "none", 00:18:53.094 "timeout_us": 0, 00:18:53.094 "timeout_admin_us": 0, 00:18:53.094 "keep_alive_timeout_ms": 10000, 00:18:53.094 "arbitration_burst": 0, 00:18:53.094 "low_priority_weight": 0, 00:18:53.094 "medium_priority_weight": 0, 00:18:53.094 "high_priority_weight": 0, 00:18:53.094 "nvme_adminq_poll_period_us": 10000, 00:18:53.094 "nvme_ioq_poll_period_us": 0, 00:18:53.094 "io_queue_requests": 0, 00:18:53.094 "delay_cmd_submit": true, 00:18:53.094 "transport_retry_count": 4, 00:18:53.094 "bdev_retry_count": 3, 00:18:53.094 "transport_ack_timeout": 0, 00:18:53.094 "ctrlr_loss_timeout_sec": 0, 00:18:53.094 "reconnect_delay_sec": 0, 00:18:53.094 "fast_io_fail_timeout_sec": 0, 00:18:53.094 "disable_auto_failback": false, 00:18:53.094 "generate_uuids": false, 00:18:53.094 "transport_tos": 0, 00:18:53.094 "nvme_error_stat": false, 00:18:53.094 "rdma_srq_size": 0, 00:18:53.094 "io_path_stat": false, 00:18:53.094 "allow_accel_sequence": false, 00:18:53.094 "rdma_max_cq_size": 0, 00:18:53.094 "rdma_cm_event_timeout_ms": 0, 00:18:53.094 "dhchap_digests": [ 00:18:53.094 "sha256", 00:18:53.094 "sha384", 00:18:53.094 "sha512" 00:18:53.094 ], 00:18:53.094 "dhchap_dhgroups": [ 00:18:53.094 "null", 00:18:53.094 "ffdhe2048", 00:18:53.094 "ffdhe3072", 00:18:53.094 
"ffdhe4096", 00:18:53.094 "ffdhe6144", 00:18:53.094 "ffdhe8192" 00:18:53.094 ] 00:18:53.094 } 00:18:53.094 }, 00:18:53.094 { 00:18:53.094 "method": "bdev_nvme_set_hotplug", 00:18:53.094 "params": { 00:18:53.094 "period_us": 100000, 00:18:53.094 "enable": false 00:18:53.094 } 00:18:53.094 }, 00:18:53.094 { 00:18:53.094 "method": "bdev_malloc_create", 00:18:53.094 "params": { 00:18:53.094 "name": "malloc0", 00:18:53.094 "num_blocks": 8192, 00:18:53.094 "block_size": 4096, 00:18:53.094 "physical_block_size": 4096, 00:18:53.094 "uuid": "938c2a62-8cea-4adb-a259-81513b810e25", 00:18:53.094 "optimal_io_boundary": 0, 00:18:53.094 "md_size": 0, 00:18:53.094 "dif_type": 0, 00:18:53.094 "dif_is_head_of_md": false, 00:18:53.094 "dif_pi_format": 0 00:18:53.094 } 00:18:53.094 }, 00:18:53.094 { 00:18:53.094 "method": "bdev_wait_for_examine" 00:18:53.094 } 00:18:53.094 ] 00:18:53.094 }, 00:18:53.094 { 00:18:53.094 "subsystem": "nbd", 00:18:53.094 "config": [] 00:18:53.094 }, 00:18:53.094 { 00:18:53.094 "subsystem": "scheduler", 00:18:53.094 "config": [ 00:18:53.094 { 00:18:53.094 "method": "framework_set_scheduler", 00:18:53.094 "params": { 00:18:53.094 "name": "static" 00:18:53.094 } 00:18:53.094 } 00:18:53.094 ] 00:18:53.094 }, 00:18:53.094 { 00:18:53.094 "subsystem": "nvmf", 00:18:53.094 "config": [ 00:18:53.094 { 00:18:53.094 "method": "nvmf_set_config", 00:18:53.094 "params": { 00:18:53.094 "discovery_filter": "match_any", 00:18:53.094 "admin_cmd_passthru": { 00:18:53.094 "identify_ctrlr": false 00:18:53.094 }, 00:18:53.094 "dhchap_digests": [ 00:18:53.094 "sha256", 00:18:53.094 "sha384", 00:18:53.094 "sha512" 00:18:53.094 ], 00:18:53.094 "dhchap_dhgroups": [ 00:18:53.094 "null", 00:18:53.094 "ffdhe2048", 00:18:53.094 "ffdhe3072", 00:18:53.094 "ffdhe4096", 00:18:53.094 "ffdhe6144", 00:18:53.094 "ffdhe8192" 00:18:53.094 ] 00:18:53.094 } 00:18:53.094 }, 00:18:53.094 { 00:18:53.094 "method": "nvmf_set_max_subsystems", 00:18:53.094 "params": { 00:18:53.094 "max_subsystems": 1024 
00:18:53.094 } 00:18:53.094 }, 00:18:53.094 { 00:18:53.094 "method": "nvmf_set_crdt", 00:18:53.094 "params": { 00:18:53.094 "crdt1": 0, 00:18:53.094 "crdt2": 0, 00:18:53.094 "crdt3": 0 00:18:53.094 } 00:18:53.094 }, 00:18:53.094 { 00:18:53.094 "method": "nvmf_create_transport", 00:18:53.094 "params": { 00:18:53.094 "trtype": "TCP", 00:18:53.094 "max_queue_depth": 128, 00:18:53.094 "max_io_qpairs_per_ctrlr": 127, 00:18:53.094 "in_capsule_data_size": 4096, 00:18:53.094 "max_io_size": 131072, 00:18:53.094 "io_unit_size": 131072, 00:18:53.094 "max_aq_depth": 128, 00:18:53.094 "num_shared_buffers": 511, 00:18:53.094 "buf_cache_size": 4294967295, 00:18:53.094 "dif_insert_or_strip": false, 00:18:53.094 "zcopy": false, 00:18:53.094 "c2h_success": false, 00:18:53.094 "sock_priority": 0, 00:18:53.094 "abort_timeout_sec": 1, 00:18:53.094 "ack_timeout": 0, 00:18:53.094 "data_wr_pool_size": 0 00:18:53.094 } 00:18:53.094 }, 00:18:53.094 { 00:18:53.094 "method": "nvmf_create_subsystem", 00:18:53.094 "params": { 00:18:53.094 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:53.094 "allow_any_host": false, 00:18:53.094 "serial_number": "SPDK00000000000001", 00:18:53.094 "model_number": "SPDK bdev Controller", 00:18:53.094 "max_namespaces": 10, 00:18:53.094 "min_cntlid": 1, 00:18:53.094 "max_cntlid": 65519, 00:18:53.094 "ana_reporting": false 00:18:53.094 } 00:18:53.094 }, 00:18:53.094 { 00:18:53.094 "method": "nvmf_subsystem_add_host", 00:18:53.094 "params": { 00:18:53.094 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:53.094 "host": "nqn.2016-06.io.spdk:host1", 00:18:53.094 "psk": "key0" 00:18:53.094 } 00:18:53.094 }, 00:18:53.094 { 00:18:53.094 "method": "nvmf_subsystem_add_ns", 00:18:53.094 "params": { 00:18:53.094 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:53.094 "namespace": { 00:18:53.094 "nsid": 1, 00:18:53.094 "bdev_name": "malloc0", 00:18:53.094 "nguid": "938C2A628CEA4ADBA25981513B810E25", 00:18:53.094 "uuid": "938c2a62-8cea-4adb-a259-81513b810e25", 00:18:53.094 "no_auto_visible": 
false 00:18:53.094 } 00:18:53.094 } 00:18:53.094 }, 00:18:53.094 { 00:18:53.094 "method": "nvmf_subsystem_add_listener", 00:18:53.094 "params": { 00:18:53.094 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:53.094 "listen_address": { 00:18:53.094 "trtype": "TCP", 00:18:53.094 "adrfam": "IPv4", 00:18:53.094 "traddr": "10.0.0.2", 00:18:53.094 "trsvcid": "4420" 00:18:53.094 }, 00:18:53.094 "secure_channel": true 00:18:53.094 } 00:18:53.094 } 00:18:53.094 ] 00:18:53.094 } 00:18:53.094 ] 00:18:53.094 }' 00:18:53.094 20:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:53.094 20:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3995657 00:18:53.094 20:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:53.094 20:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3995657 00:18:53.094 20:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3995657 ']' 00:18:53.094 20:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:53.094 20:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:53.094 20:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:53.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:53.094 20:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:53.094 20:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:53.094 [2024-11-26 20:59:43.954415] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:18:53.094 [2024-11-26 20:59:43.954500] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:53.353 [2024-11-26 20:59:44.039158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.353 [2024-11-26 20:59:44.102949] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:53.353 [2024-11-26 20:59:44.103029] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:53.353 [2024-11-26 20:59:44.103046] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:53.353 [2024-11-26 20:59:44.103059] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:53.353 [2024-11-26 20:59:44.103070] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:53.353 [2024-11-26 20:59:44.103783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:53.611 [2024-11-26 20:59:44.348250] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:53.611 [2024-11-26 20:59:44.380263] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:53.611 [2024-11-26 20:59:44.380535] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:54.177 20:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:54.177 20:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:54.177 20:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:54.177 20:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:54.177 20:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:54.177 20:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:54.177 20:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=3995812 00:18:54.177 20:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 3995812 /var/tmp/bdevperf.sock 00:18:54.177 20:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3995812 ']' 00:18:54.177 20:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:54.177 20:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:54.177 20:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:18:54.177 20:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:54.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:54.177 20:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:18:54.177 "subsystems": [ 00:18:54.177 { 00:18:54.177 "subsystem": "keyring", 00:18:54.177 "config": [ 00:18:54.177 { 00:18:54.177 "method": "keyring_file_add_key", 00:18:54.177 "params": { 00:18:54.178 "name": "key0", 00:18:54.178 "path": "/tmp/tmp.2CObFda196" 00:18:54.178 } 00:18:54.178 } 00:18:54.178 ] 00:18:54.178 }, 00:18:54.178 { 00:18:54.178 "subsystem": "iobuf", 00:18:54.178 "config": [ 00:18:54.178 { 00:18:54.178 "method": "iobuf_set_options", 00:18:54.178 "params": { 00:18:54.178 "small_pool_count": 8192, 00:18:54.178 "large_pool_count": 1024, 00:18:54.178 "small_bufsize": 8192, 00:18:54.178 "large_bufsize": 135168, 00:18:54.178 "enable_numa": false 00:18:54.178 } 00:18:54.178 } 00:18:54.178 ] 00:18:54.178 }, 00:18:54.178 { 00:18:54.178 "subsystem": "sock", 00:18:54.178 "config": [ 00:18:54.178 { 00:18:54.178 "method": "sock_set_default_impl", 00:18:54.178 "params": { 00:18:54.178 "impl_name": "posix" 00:18:54.178 } 00:18:54.178 }, 00:18:54.178 { 00:18:54.178 "method": "sock_impl_set_options", 00:18:54.178 "params": { 00:18:54.178 "impl_name": "ssl", 00:18:54.178 "recv_buf_size": 4096, 00:18:54.178 "send_buf_size": 4096, 00:18:54.178 "enable_recv_pipe": true, 00:18:54.178 "enable_quickack": false, 00:18:54.178 "enable_placement_id": 0, 00:18:54.178 "enable_zerocopy_send_server": true, 00:18:54.178 "enable_zerocopy_send_client": false, 00:18:54.178 "zerocopy_threshold": 0, 00:18:54.178 "tls_version": 0, 00:18:54.178 "enable_ktls": false 00:18:54.178 } 00:18:54.178 }, 00:18:54.178 { 00:18:54.178 "method": "sock_impl_set_options", 00:18:54.178 "params": { 
00:18:54.178 "impl_name": "posix", 00:18:54.178 "recv_buf_size": 2097152, 00:18:54.178 "send_buf_size": 2097152, 00:18:54.178 "enable_recv_pipe": true, 00:18:54.178 "enable_quickack": false, 00:18:54.178 "enable_placement_id": 0, 00:18:54.178 "enable_zerocopy_send_server": true, 00:18:54.178 "enable_zerocopy_send_client": false, 00:18:54.178 "zerocopy_threshold": 0, 00:18:54.178 "tls_version": 0, 00:18:54.178 "enable_ktls": false 00:18:54.178 } 00:18:54.178 } 00:18:54.178 ] 00:18:54.178 }, 00:18:54.178 { 00:18:54.178 "subsystem": "vmd", 00:18:54.178 "config": [] 00:18:54.178 }, 00:18:54.178 { 00:18:54.178 "subsystem": "accel", 00:18:54.178 "config": [ 00:18:54.178 { 00:18:54.178 "method": "accel_set_options", 00:18:54.178 "params": { 00:18:54.178 "small_cache_size": 128, 00:18:54.178 "large_cache_size": 16, 00:18:54.178 "task_count": 2048, 00:18:54.178 "sequence_count": 2048, 00:18:54.178 "buf_count": 2048 00:18:54.178 } 00:18:54.178 } 00:18:54.178 ] 00:18:54.178 }, 00:18:54.178 { 00:18:54.178 "subsystem": "bdev", 00:18:54.178 "config": [ 00:18:54.178 { 00:18:54.178 "method": "bdev_set_options", 00:18:54.178 "params": { 00:18:54.178 "bdev_io_pool_size": 65535, 00:18:54.178 "bdev_io_cache_size": 256, 00:18:54.178 "bdev_auto_examine": true, 00:18:54.178 "iobuf_small_cache_size": 128, 00:18:54.178 "iobuf_large_cache_size": 16 00:18:54.178 } 00:18:54.178 }, 00:18:54.178 { 00:18:54.178 "method": "bdev_raid_set_options", 00:18:54.178 "params": { 00:18:54.178 "process_window_size_kb": 1024, 00:18:54.178 "process_max_bandwidth_mb_sec": 0 00:18:54.178 } 00:18:54.178 }, 00:18:54.178 { 00:18:54.178 "method": "bdev_iscsi_set_options", 00:18:54.178 "params": { 00:18:54.178 "timeout_sec": 30 00:18:54.178 } 00:18:54.178 }, 00:18:54.178 { 00:18:54.178 "method": "bdev_nvme_set_options", 00:18:54.178 "params": { 00:18:54.178 "action_on_timeout": "none", 00:18:54.178 "timeout_us": 0, 00:18:54.178 "timeout_admin_us": 0, 00:18:54.178 "keep_alive_timeout_ms": 10000, 00:18:54.178 
"arbitration_burst": 0, 00:18:54.178 "low_priority_weight": 0, 00:18:54.178 "medium_priority_weight": 0, 00:18:54.178 "high_priority_weight": 0, 00:18:54.178 "nvme_adminq_poll_period_us": 10000, 00:18:54.178 "nvme_ioq_poll_period_us": 0, 00:18:54.178 "io_queue_requests": 512, 00:18:54.178 "delay_cmd_submit": true, 00:18:54.178 "transport_retry_count": 4, 00:18:54.178 "bdev_retry_count": 3, 00:18:54.178 "transport_ack_timeout": 0, 00:18:54.178 "ctrlr_loss_timeout_sec": 0, 00:18:54.178 "reconnect_delay_sec": 0, 00:18:54.178 "fast_io_fail_timeout_sec": 0, 00:18:54.178 "disable_auto_failback": false, 00:18:54.178 "generate_uuids": false, 00:18:54.178 "transport_tos": 0, 00:18:54.178 "nvme_error_stat": false, 00:18:54.178 "rdma_srq_size": 0, 00:18:54.178 "io_path_stat": false, 00:18:54.178 "allow_accel_sequence": false, 00:18:54.178 "rdma_max_cq_size": 0, 00:18:54.178 "rdma_cm_event_timeout_ms": 0, 00:18:54.178 "dhchap_digests": [ 00:18:54.178 "sha256", 00:18:54.178 "sha384", 00:18:54.178 "sha512" 00:18:54.178 ], 00:18:54.178 "dhchap_dhgroups": [ 00:18:54.178 "null", 00:18:54.178 "ffdhe2048", 00:18:54.178 "ffdhe3072", 00:18:54.178 "ffdhe4096", 00:18:54.178 "ffdhe6144", 00:18:54.178 "ffdhe8192" 00:18:54.178 ] 00:18:54.178 } 00:18:54.178 }, 00:18:54.178 { 00:18:54.178 "method": "bdev_nvme_attach_controller", 00:18:54.178 "params": { 00:18:54.178 "name": "TLSTEST", 00:18:54.178 "trtype": "TCP", 00:18:54.178 "adrfam": "IPv4", 00:18:54.178 "traddr": "10.0.0.2", 00:18:54.178 "trsvcid": "4420", 00:18:54.178 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:54.178 "prchk_reftag": false, 00:18:54.178 "prchk_guard": false, 00:18:54.178 "ctrlr_loss_timeout_sec": 0, 00:18:54.178 "reconnect_delay_sec": 0, 00:18:54.178 "fast_io_fail_timeout_sec": 0, 00:18:54.178 "psk": "key0", 00:18:54.178 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:54.178 "hdgst": false, 00:18:54.178 "ddgst": false, 00:18:54.178 "multipath": "multipath" 00:18:54.178 } 00:18:54.178 }, 00:18:54.178 { 00:18:54.178 
"method": "bdev_nvme_set_hotplug", 00:18:54.178 "params": { 00:18:54.178 "period_us": 100000, 00:18:54.178 "enable": false 00:18:54.178 } 00:18:54.178 }, 00:18:54.178 { 00:18:54.178 "method": "bdev_wait_for_examine" 00:18:54.178 } 00:18:54.178 ] 00:18:54.178 }, 00:18:54.178 { 00:18:54.178 "subsystem": "nbd", 00:18:54.178 "config": [] 00:18:54.178 } 00:18:54.178 ] 00:18:54.178 }' 00:18:54.178 20:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:54.178 20:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:54.437 [2024-11-26 20:59:45.128154] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:18:54.437 [2024-11-26 20:59:45.128225] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3995812 ] 00:18:54.437 [2024-11-26 20:59:45.195472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:54.437 [2024-11-26 20:59:45.252492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:54.695 [2024-11-26 20:59:45.430855] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:54.695 20:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:54.695 20:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:54.695 20:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:54.954 Running I/O for 10 seconds... 
00:18:56.821 3068.00 IOPS, 11.98 MiB/s [2024-11-26T19:59:48.693Z] 3170.00 IOPS, 12.38 MiB/s [2024-11-26T19:59:50.066Z] 3077.33 IOPS, 12.02 MiB/s [2024-11-26T19:59:50.999Z] 3098.25 IOPS, 12.10 MiB/s [2024-11-26T19:59:51.934Z] 3108.40 IOPS, 12.14 MiB/s [2024-11-26T19:59:52.868Z] 3082.17 IOPS, 12.04 MiB/s [2024-11-26T19:59:53.810Z] 3008.86 IOPS, 11.75 MiB/s [2024-11-26T19:59:54.745Z] 2935.38 IOPS, 11.47 MiB/s [2024-11-26T19:59:56.119Z] 2901.67 IOPS, 11.33 MiB/s [2024-11-26T19:59:56.119Z] 2868.00 IOPS, 11.20 MiB/s 00:19:05.181 Latency(us) 00:19:05.181 [2024-11-26T19:59:56.119Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:05.181 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:05.181 Verification LBA range: start 0x0 length 0x2000 00:19:05.181 TLSTESTn1 : 10.03 2871.68 11.22 0.00 0.00 44490.22 6262.33 71846.87 00:19:05.181 [2024-11-26T19:59:56.119Z] =================================================================================================================== 00:19:05.181 [2024-11-26T19:59:56.119Z] Total : 2871.68 11.22 0.00 0.00 44490.22 6262.33 71846.87 00:19:05.181 { 00:19:05.181 "results": [ 00:19:05.181 { 00:19:05.181 "job": "TLSTESTn1", 00:19:05.181 "core_mask": "0x4", 00:19:05.181 "workload": "verify", 00:19:05.181 "status": "finished", 00:19:05.181 "verify_range": { 00:19:05.181 "start": 0, 00:19:05.181 "length": 8192 00:19:05.181 }, 00:19:05.181 "queue_depth": 128, 00:19:05.181 "io_size": 4096, 00:19:05.181 "runtime": 10.03141, 00:19:05.181 "iops": 2871.6800529536727, 00:19:05.181 "mibps": 11.217500206850284, 00:19:05.181 "io_failed": 0, 00:19:05.181 "io_timeout": 0, 00:19:05.181 "avg_latency_us": 44490.22391502065, 00:19:05.181 "min_latency_us": 6262.328888888889, 00:19:05.181 "max_latency_us": 71846.87407407408 00:19:05.181 } 00:19:05.181 ], 00:19:05.181 "core_count": 1 00:19:05.181 } 00:19:05.181 20:59:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:19:05.181 20:59:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 3995812 00:19:05.181 20:59:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3995812 ']' 00:19:05.181 20:59:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3995812 00:19:05.181 20:59:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:05.181 20:59:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:05.181 20:59:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3995812 00:19:05.181 20:59:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:05.181 20:59:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:05.181 20:59:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3995812' 00:19:05.181 killing process with pid 3995812 00:19:05.181 20:59:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3995812 00:19:05.181 Received shutdown signal, test time was about 10.000000 seconds 00:19:05.181 00:19:05.181 Latency(us) 00:19:05.181 [2024-11-26T19:59:56.119Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:05.181 [2024-11-26T19:59:56.119Z] =================================================================================================================== 00:19:05.181 [2024-11-26T19:59:56.119Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:05.181 20:59:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3995812 00:19:05.181 20:59:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 3995657 00:19:05.181 20:59:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # '[' -z 3995657 ']' 00:19:05.181 20:59:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3995657 00:19:05.181 20:59:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:05.181 20:59:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:05.181 20:59:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3995657 00:19:05.181 20:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:05.181 20:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:05.181 20:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3995657' 00:19:05.181 killing process with pid 3995657 00:19:05.181 20:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3995657 00:19:05.181 20:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3995657 00:19:05.439 20:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:19:05.439 20:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:05.439 20:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:05.439 20:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:05.439 20:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3997133 00:19:05.439 20:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:05.440 20:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3997133 00:19:05.440 
20:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3997133 ']' 00:19:05.440 20:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:05.440 20:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:05.440 20:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:05.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:05.440 20:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:05.440 20:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:05.440 [2024-11-26 20:59:56.291158] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:19:05.440 [2024-11-26 20:59:56.291232] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:05.440 [2024-11-26 20:59:56.363722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:05.698 [2024-11-26 20:59:56.422644] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:05.698 [2024-11-26 20:59:56.422731] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:05.698 [2024-11-26 20:59:56.422746] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:05.698 [2024-11-26 20:59:56.422757] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:19:05.698 [2024-11-26 20:59:56.422780] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:05.698 [2024-11-26 20:59:56.423415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:05.698 20:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:05.698 20:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:05.698 20:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:05.698 20:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:05.698 20:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:05.698 20:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:05.698 20:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.2CObFda196 00:19:05.698 20:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.2CObFda196 00:19:05.698 20:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:05.956 [2024-11-26 20:59:56.877549] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:06.214 20:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:06.472 20:59:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:06.730 [2024-11-26 20:59:57.459118] tcp.c:1031:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:19:06.730 [2024-11-26 20:59:57.459402] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:06.730 20:59:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:06.988 malloc0 00:19:06.988 20:59:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:07.247 20:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.2CObFda196 00:19:07.812 20:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:08.071 20:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=3997420 00:19:08.071 20:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:08.071 20:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:08.071 20:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 3997420 /var/tmp/bdevperf.sock 00:19:08.071 20:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3997420 ']' 00:19:08.071 20:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:08.071 20:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:08.071 
20:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:08.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:08.071 20:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:08.071 20:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:08.071 [2024-11-26 20:59:58.855941] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:19:08.071 [2024-11-26 20:59:58.856030] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3997420 ] 00:19:08.071 [2024-11-26 20:59:58.930241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.071 [2024-11-26 20:59:58.993705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:08.330 20:59:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:08.330 20:59:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:08.330 20:59:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.2CObFda196 00:19:08.587 20:59:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:08.845 [2024-11-26 20:59:59.624697] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is 
considered experimental 00:19:08.845 nvme0n1 00:19:08.845 20:59:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:09.103 Running I/O for 1 seconds... 00:19:10.037 3118.00 IOPS, 12.18 MiB/s 00:19:10.037 Latency(us) 00:19:10.037 [2024-11-26T20:00:00.975Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:10.037 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:10.037 Verification LBA range: start 0x0 length 0x2000 00:19:10.037 nvme0n1 : 1.03 3155.83 12.33 0.00 0.00 40041.77 6747.78 45632.47 00:19:10.037 [2024-11-26T20:00:00.975Z] =================================================================================================================== 00:19:10.037 [2024-11-26T20:00:00.975Z] Total : 3155.83 12.33 0.00 0.00 40041.77 6747.78 45632.47 00:19:10.037 { 00:19:10.037 "results": [ 00:19:10.037 { 00:19:10.037 "job": "nvme0n1", 00:19:10.037 "core_mask": "0x2", 00:19:10.037 "workload": "verify", 00:19:10.037 "status": "finished", 00:19:10.037 "verify_range": { 00:19:10.037 "start": 0, 00:19:10.037 "length": 8192 00:19:10.037 }, 00:19:10.037 "queue_depth": 128, 00:19:10.037 "io_size": 4096, 00:19:10.037 "runtime": 1.028891, 00:19:10.037 "iops": 3155.825058242321, 00:19:10.037 "mibps": 12.327441633759067, 00:19:10.037 "io_failed": 0, 00:19:10.037 "io_timeout": 0, 00:19:10.037 "avg_latency_us": 40041.77163467132, 00:19:10.037 "min_latency_us": 6747.780740740741, 00:19:10.037 "max_latency_us": 45632.474074074074 00:19:10.037 } 00:19:10.037 ], 00:19:10.037 "core_count": 1 00:19:10.037 } 00:19:10.037 21:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 3997420 00:19:10.037 21:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3997420 ']' 00:19:10.037 21:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 3997420 00:19:10.037 21:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:10.037 21:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:10.037 21:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3997420 00:19:10.037 21:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:10.037 21:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:10.037 21:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3997420' 00:19:10.037 killing process with pid 3997420 00:19:10.037 21:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3997420 00:19:10.037 Received shutdown signal, test time was about 1.000000 seconds 00:19:10.037 00:19:10.037 Latency(us) 00:19:10.037 [2024-11-26T20:00:00.975Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:10.037 [2024-11-26T20:00:00.975Z] =================================================================================================================== 00:19:10.037 [2024-11-26T20:00:00.975Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:10.037 21:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3997420 00:19:10.295 21:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 3997133 00:19:10.295 21:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3997133 ']' 00:19:10.295 21:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3997133 00:19:10.295 21:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:10.295 21:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:10.295 21:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3997133 00:19:10.295 21:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:10.295 21:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:10.295 21:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3997133' 00:19:10.295 killing process with pid 3997133 00:19:10.295 21:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3997133 00:19:10.295 21:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3997133 00:19:10.553 21:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:19:10.553 21:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:10.553 21:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:10.553 21:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:10.553 21:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3997906 00:19:10.553 21:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3997906 00:19:10.553 21:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:10.553 21:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3997906 ']' 00:19:10.553 21:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:10.553 21:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:19:10.553 21:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:10.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:10.553 21:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:10.553 21:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:10.553 [2024-11-26 21:00:01.466878] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:19:10.553 [2024-11-26 21:00:01.466976] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:10.811 [2024-11-26 21:00:01.552883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:10.811 [2024-11-26 21:00:01.613426] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:10.811 [2024-11-26 21:00:01.613501] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:10.811 [2024-11-26 21:00:01.613526] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:10.811 [2024-11-26 21:00:01.613540] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:10.811 [2024-11-26 21:00:01.613552] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:10.811 [2024-11-26 21:00:01.614243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:10.811 21:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:10.811 21:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:10.811 21:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:10.811 21:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:10.811 21:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:11.069 21:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:11.069 21:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:19:11.069 21:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.069 21:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:11.069 [2024-11-26 21:00:01.769903] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:11.069 malloc0 00:19:11.069 [2024-11-26 21:00:01.802562] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:11.069 [2024-11-26 21:00:01.802903] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:11.069 21:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.070 21:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=3997950 00:19:11.070 21:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:11.070 21:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@258 -- # waitforlisten 3997950 /var/tmp/bdevperf.sock 00:19:11.070 21:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3997950 ']' 00:19:11.070 21:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:11.070 21:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:11.070 21:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:11.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:11.070 21:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:11.070 21:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:11.070 [2024-11-26 21:00:01.882751] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:19:11.070 [2024-11-26 21:00:01.882830] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3997950 ] 00:19:11.070 [2024-11-26 21:00:01.959905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:11.328 [2024-11-26 21:00:02.023085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:11.328 21:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:11.328 21:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:11.328 21:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.2CObFda196 00:19:11.586 21:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:11.845 [2024-11-26 21:00:02.676594] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:11.845 nvme0n1 00:19:11.845 21:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:12.103 Running I/O for 1 seconds... 
00:19:13.078 3154.00 IOPS, 12.32 MiB/s 00:19:13.078 Latency(us) 00:19:13.078 [2024-11-26T20:00:04.016Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:13.078 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:13.078 Verification LBA range: start 0x0 length 0x2000 00:19:13.078 nvme0n1 : 1.04 3161.39 12.35 0.00 0.00 39822.71 6650.69 51263.72 00:19:13.078 [2024-11-26T20:00:04.016Z] =================================================================================================================== 00:19:13.078 [2024-11-26T20:00:04.016Z] Total : 3161.39 12.35 0.00 0.00 39822.71 6650.69 51263.72 00:19:13.078 { 00:19:13.078 "results": [ 00:19:13.078 { 00:19:13.078 "job": "nvme0n1", 00:19:13.078 "core_mask": "0x2", 00:19:13.078 "workload": "verify", 00:19:13.078 "status": "finished", 00:19:13.078 "verify_range": { 00:19:13.078 "start": 0, 00:19:13.078 "length": 8192 00:19:13.078 }, 00:19:13.078 "queue_depth": 128, 00:19:13.078 "io_size": 4096, 00:19:13.078 "runtime": 1.038468, 00:19:13.078 "iops": 3161.38773655038, 00:19:13.078 "mibps": 12.349170845899922, 00:19:13.078 "io_failed": 0, 00:19:13.078 "io_timeout": 0, 00:19:13.078 "avg_latency_us": 39822.711112013625, 00:19:13.078 "min_latency_us": 6650.69037037037, 00:19:13.078 "max_latency_us": 51263.71555555556 00:19:13.078 } 00:19:13.078 ], 00:19:13.078 "core_count": 1 00:19:13.078 } 00:19:13.078 21:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:19:13.078 21:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.078 21:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:13.364 21:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.364 21:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:19:13.364 "subsystems": [ 00:19:13.364 { 00:19:13.364 "subsystem": 
"keyring", 00:19:13.364 "config": [ 00:19:13.364 { 00:19:13.364 "method": "keyring_file_add_key", 00:19:13.364 "params": { 00:19:13.364 "name": "key0", 00:19:13.364 "path": "/tmp/tmp.2CObFda196" 00:19:13.364 } 00:19:13.364 } 00:19:13.364 ] 00:19:13.364 }, 00:19:13.364 { 00:19:13.364 "subsystem": "iobuf", 00:19:13.364 "config": [ 00:19:13.364 { 00:19:13.364 "method": "iobuf_set_options", 00:19:13.364 "params": { 00:19:13.364 "small_pool_count": 8192, 00:19:13.364 "large_pool_count": 1024, 00:19:13.364 "small_bufsize": 8192, 00:19:13.364 "large_bufsize": 135168, 00:19:13.364 "enable_numa": false 00:19:13.364 } 00:19:13.364 } 00:19:13.364 ] 00:19:13.364 }, 00:19:13.364 { 00:19:13.364 "subsystem": "sock", 00:19:13.364 "config": [ 00:19:13.364 { 00:19:13.364 "method": "sock_set_default_impl", 00:19:13.364 "params": { 00:19:13.364 "impl_name": "posix" 00:19:13.364 } 00:19:13.365 }, 00:19:13.365 { 00:19:13.365 "method": "sock_impl_set_options", 00:19:13.365 "params": { 00:19:13.365 "impl_name": "ssl", 00:19:13.365 "recv_buf_size": 4096, 00:19:13.365 "send_buf_size": 4096, 00:19:13.365 "enable_recv_pipe": true, 00:19:13.365 "enable_quickack": false, 00:19:13.365 "enable_placement_id": 0, 00:19:13.365 "enable_zerocopy_send_server": true, 00:19:13.365 "enable_zerocopy_send_client": false, 00:19:13.365 "zerocopy_threshold": 0, 00:19:13.365 "tls_version": 0, 00:19:13.365 "enable_ktls": false 00:19:13.365 } 00:19:13.365 }, 00:19:13.365 { 00:19:13.365 "method": "sock_impl_set_options", 00:19:13.365 "params": { 00:19:13.365 "impl_name": "posix", 00:19:13.365 "recv_buf_size": 2097152, 00:19:13.365 "send_buf_size": 2097152, 00:19:13.365 "enable_recv_pipe": true, 00:19:13.365 "enable_quickack": false, 00:19:13.365 "enable_placement_id": 0, 00:19:13.365 "enable_zerocopy_send_server": true, 00:19:13.365 "enable_zerocopy_send_client": false, 00:19:13.365 "zerocopy_threshold": 0, 00:19:13.365 "tls_version": 0, 00:19:13.365 "enable_ktls": false 00:19:13.365 } 00:19:13.365 } 00:19:13.365 
] 00:19:13.365 }, 00:19:13.365 { 00:19:13.365 "subsystem": "vmd", 00:19:13.365 "config": [] 00:19:13.365 }, 00:19:13.365 { 00:19:13.365 "subsystem": "accel", 00:19:13.365 "config": [ 00:19:13.365 { 00:19:13.365 "method": "accel_set_options", 00:19:13.365 "params": { 00:19:13.365 "small_cache_size": 128, 00:19:13.365 "large_cache_size": 16, 00:19:13.365 "task_count": 2048, 00:19:13.365 "sequence_count": 2048, 00:19:13.365 "buf_count": 2048 00:19:13.365 } 00:19:13.365 } 00:19:13.365 ] 00:19:13.365 }, 00:19:13.365 { 00:19:13.365 "subsystem": "bdev", 00:19:13.365 "config": [ 00:19:13.365 { 00:19:13.365 "method": "bdev_set_options", 00:19:13.365 "params": { 00:19:13.365 "bdev_io_pool_size": 65535, 00:19:13.365 "bdev_io_cache_size": 256, 00:19:13.365 "bdev_auto_examine": true, 00:19:13.365 "iobuf_small_cache_size": 128, 00:19:13.365 "iobuf_large_cache_size": 16 00:19:13.365 } 00:19:13.365 }, 00:19:13.365 { 00:19:13.365 "method": "bdev_raid_set_options", 00:19:13.365 "params": { 00:19:13.365 "process_window_size_kb": 1024, 00:19:13.365 "process_max_bandwidth_mb_sec": 0 00:19:13.365 } 00:19:13.365 }, 00:19:13.365 { 00:19:13.365 "method": "bdev_iscsi_set_options", 00:19:13.365 "params": { 00:19:13.365 "timeout_sec": 30 00:19:13.365 } 00:19:13.365 }, 00:19:13.365 { 00:19:13.365 "method": "bdev_nvme_set_options", 00:19:13.365 "params": { 00:19:13.365 "action_on_timeout": "none", 00:19:13.365 "timeout_us": 0, 00:19:13.365 "timeout_admin_us": 0, 00:19:13.365 "keep_alive_timeout_ms": 10000, 00:19:13.365 "arbitration_burst": 0, 00:19:13.365 "low_priority_weight": 0, 00:19:13.365 "medium_priority_weight": 0, 00:19:13.365 "high_priority_weight": 0, 00:19:13.365 "nvme_adminq_poll_period_us": 10000, 00:19:13.365 "nvme_ioq_poll_period_us": 0, 00:19:13.365 "io_queue_requests": 0, 00:19:13.365 "delay_cmd_submit": true, 00:19:13.365 "transport_retry_count": 4, 00:19:13.365 "bdev_retry_count": 3, 00:19:13.365 "transport_ack_timeout": 0, 00:19:13.365 "ctrlr_loss_timeout_sec": 0, 
00:19:13.365 "reconnect_delay_sec": 0, 00:19:13.365 "fast_io_fail_timeout_sec": 0, 00:19:13.365 "disable_auto_failback": false, 00:19:13.365 "generate_uuids": false, 00:19:13.365 "transport_tos": 0, 00:19:13.365 "nvme_error_stat": false, 00:19:13.365 "rdma_srq_size": 0, 00:19:13.365 "io_path_stat": false, 00:19:13.365 "allow_accel_sequence": false, 00:19:13.365 "rdma_max_cq_size": 0, 00:19:13.365 "rdma_cm_event_timeout_ms": 0, 00:19:13.365 "dhchap_digests": [ 00:19:13.365 "sha256", 00:19:13.365 "sha384", 00:19:13.365 "sha512" 00:19:13.365 ], 00:19:13.365 "dhchap_dhgroups": [ 00:19:13.365 "null", 00:19:13.365 "ffdhe2048", 00:19:13.365 "ffdhe3072", 00:19:13.365 "ffdhe4096", 00:19:13.365 "ffdhe6144", 00:19:13.365 "ffdhe8192" 00:19:13.365 ] 00:19:13.365 } 00:19:13.365 }, 00:19:13.365 { 00:19:13.365 "method": "bdev_nvme_set_hotplug", 00:19:13.365 "params": { 00:19:13.365 "period_us": 100000, 00:19:13.365 "enable": false 00:19:13.365 } 00:19:13.365 }, 00:19:13.365 { 00:19:13.365 "method": "bdev_malloc_create", 00:19:13.365 "params": { 00:19:13.365 "name": "malloc0", 00:19:13.365 "num_blocks": 8192, 00:19:13.365 "block_size": 4096, 00:19:13.365 "physical_block_size": 4096, 00:19:13.365 "uuid": "914b0224-defd-418b-88d9-957d7aa0de85", 00:19:13.365 "optimal_io_boundary": 0, 00:19:13.365 "md_size": 0, 00:19:13.365 "dif_type": 0, 00:19:13.365 "dif_is_head_of_md": false, 00:19:13.365 "dif_pi_format": 0 00:19:13.365 } 00:19:13.365 }, 00:19:13.365 { 00:19:13.365 "method": "bdev_wait_for_examine" 00:19:13.365 } 00:19:13.365 ] 00:19:13.365 }, 00:19:13.365 { 00:19:13.365 "subsystem": "nbd", 00:19:13.365 "config": [] 00:19:13.365 }, 00:19:13.365 { 00:19:13.365 "subsystem": "scheduler", 00:19:13.365 "config": [ 00:19:13.365 { 00:19:13.365 "method": "framework_set_scheduler", 00:19:13.365 "params": { 00:19:13.365 "name": "static" 00:19:13.365 } 00:19:13.365 } 00:19:13.365 ] 00:19:13.365 }, 00:19:13.365 { 00:19:13.365 "subsystem": "nvmf", 00:19:13.365 "config": [ 00:19:13.365 { 
00:19:13.365 "method": "nvmf_set_config", 00:19:13.365 "params": { 00:19:13.365 "discovery_filter": "match_any", 00:19:13.365 "admin_cmd_passthru": { 00:19:13.365 "identify_ctrlr": false 00:19:13.365 }, 00:19:13.365 "dhchap_digests": [ 00:19:13.365 "sha256", 00:19:13.365 "sha384", 00:19:13.365 "sha512" 00:19:13.365 ], 00:19:13.365 "dhchap_dhgroups": [ 00:19:13.365 "null", 00:19:13.365 "ffdhe2048", 00:19:13.365 "ffdhe3072", 00:19:13.365 "ffdhe4096", 00:19:13.365 "ffdhe6144", 00:19:13.365 "ffdhe8192" 00:19:13.365 ] 00:19:13.365 } 00:19:13.365 }, 00:19:13.365 { 00:19:13.365 "method": "nvmf_set_max_subsystems", 00:19:13.365 "params": { 00:19:13.365 "max_subsystems": 1024 00:19:13.365 } 00:19:13.365 }, 00:19:13.365 { 00:19:13.365 "method": "nvmf_set_crdt", 00:19:13.365 "params": { 00:19:13.365 "crdt1": 0, 00:19:13.365 "crdt2": 0, 00:19:13.365 "crdt3": 0 00:19:13.365 } 00:19:13.365 }, 00:19:13.365 { 00:19:13.365 "method": "nvmf_create_transport", 00:19:13.365 "params": { 00:19:13.365 "trtype": "TCP", 00:19:13.365 "max_queue_depth": 128, 00:19:13.365 "max_io_qpairs_per_ctrlr": 127, 00:19:13.365 "in_capsule_data_size": 4096, 00:19:13.365 "max_io_size": 131072, 00:19:13.365 "io_unit_size": 131072, 00:19:13.365 "max_aq_depth": 128, 00:19:13.365 "num_shared_buffers": 511, 00:19:13.365 "buf_cache_size": 4294967295, 00:19:13.365 "dif_insert_or_strip": false, 00:19:13.365 "zcopy": false, 00:19:13.365 "c2h_success": false, 00:19:13.365 "sock_priority": 0, 00:19:13.365 "abort_timeout_sec": 1, 00:19:13.365 "ack_timeout": 0, 00:19:13.365 "data_wr_pool_size": 0 00:19:13.365 } 00:19:13.365 }, 00:19:13.365 { 00:19:13.365 "method": "nvmf_create_subsystem", 00:19:13.365 "params": { 00:19:13.365 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:13.365 "allow_any_host": false, 00:19:13.365 "serial_number": "00000000000000000000", 00:19:13.365 "model_number": "SPDK bdev Controller", 00:19:13.365 "max_namespaces": 32, 00:19:13.365 "min_cntlid": 1, 00:19:13.365 "max_cntlid": 65519, 00:19:13.365 
"ana_reporting": false 00:19:13.365 } 00:19:13.365 }, 00:19:13.365 { 00:19:13.365 "method": "nvmf_subsystem_add_host", 00:19:13.365 "params": { 00:19:13.365 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:13.365 "host": "nqn.2016-06.io.spdk:host1", 00:19:13.365 "psk": "key0" 00:19:13.365 } 00:19:13.365 }, 00:19:13.365 { 00:19:13.365 "method": "nvmf_subsystem_add_ns", 00:19:13.365 "params": { 00:19:13.365 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:13.365 "namespace": { 00:19:13.365 "nsid": 1, 00:19:13.365 "bdev_name": "malloc0", 00:19:13.365 "nguid": "914B0224DEFD418B88D9957D7AA0DE85", 00:19:13.365 "uuid": "914b0224-defd-418b-88d9-957d7aa0de85", 00:19:13.365 "no_auto_visible": false 00:19:13.365 } 00:19:13.365 } 00:19:13.365 }, 00:19:13.365 { 00:19:13.365 "method": "nvmf_subsystem_add_listener", 00:19:13.365 "params": { 00:19:13.365 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:13.365 "listen_address": { 00:19:13.365 "trtype": "TCP", 00:19:13.365 "adrfam": "IPv4", 00:19:13.365 "traddr": "10.0.0.2", 00:19:13.366 "trsvcid": "4420" 00:19:13.366 }, 00:19:13.366 "secure_channel": false, 00:19:13.366 "sock_impl": "ssl" 00:19:13.366 } 00:19:13.366 } 00:19:13.366 ] 00:19:13.366 } 00:19:13.366 ] 00:19:13.366 }' 00:19:13.366 21:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:13.625 21:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:19:13.625 "subsystems": [ 00:19:13.625 { 00:19:13.625 "subsystem": "keyring", 00:19:13.625 "config": [ 00:19:13.625 { 00:19:13.625 "method": "keyring_file_add_key", 00:19:13.625 "params": { 00:19:13.625 "name": "key0", 00:19:13.625 "path": "/tmp/tmp.2CObFda196" 00:19:13.625 } 00:19:13.625 } 00:19:13.625 ] 00:19:13.625 }, 00:19:13.625 { 00:19:13.625 "subsystem": "iobuf", 00:19:13.625 "config": [ 00:19:13.625 { 00:19:13.625 "method": "iobuf_set_options", 00:19:13.625 "params": { 00:19:13.625 
"small_pool_count": 8192, 00:19:13.625 "large_pool_count": 1024, 00:19:13.625 "small_bufsize": 8192, 00:19:13.625 "large_bufsize": 135168, 00:19:13.625 "enable_numa": false 00:19:13.625 } 00:19:13.625 } 00:19:13.625 ] 00:19:13.625 }, 00:19:13.625 { 00:19:13.625 "subsystem": "sock", 00:19:13.625 "config": [ 00:19:13.625 { 00:19:13.625 "method": "sock_set_default_impl", 00:19:13.625 "params": { 00:19:13.625 "impl_name": "posix" 00:19:13.625 } 00:19:13.625 }, 00:19:13.625 { 00:19:13.625 "method": "sock_impl_set_options", 00:19:13.625 "params": { 00:19:13.625 "impl_name": "ssl", 00:19:13.625 "recv_buf_size": 4096, 00:19:13.625 "send_buf_size": 4096, 00:19:13.625 "enable_recv_pipe": true, 00:19:13.625 "enable_quickack": false, 00:19:13.625 "enable_placement_id": 0, 00:19:13.625 "enable_zerocopy_send_server": true, 00:19:13.625 "enable_zerocopy_send_client": false, 00:19:13.625 "zerocopy_threshold": 0, 00:19:13.625 "tls_version": 0, 00:19:13.625 "enable_ktls": false 00:19:13.625 } 00:19:13.625 }, 00:19:13.625 { 00:19:13.625 "method": "sock_impl_set_options", 00:19:13.625 "params": { 00:19:13.625 "impl_name": "posix", 00:19:13.625 "recv_buf_size": 2097152, 00:19:13.625 "send_buf_size": 2097152, 00:19:13.625 "enable_recv_pipe": true, 00:19:13.625 "enable_quickack": false, 00:19:13.625 "enable_placement_id": 0, 00:19:13.625 "enable_zerocopy_send_server": true, 00:19:13.625 "enable_zerocopy_send_client": false, 00:19:13.625 "zerocopy_threshold": 0, 00:19:13.625 "tls_version": 0, 00:19:13.625 "enable_ktls": false 00:19:13.625 } 00:19:13.625 } 00:19:13.625 ] 00:19:13.625 }, 00:19:13.625 { 00:19:13.625 "subsystem": "vmd", 00:19:13.625 "config": [] 00:19:13.625 }, 00:19:13.625 { 00:19:13.625 "subsystem": "accel", 00:19:13.625 "config": [ 00:19:13.625 { 00:19:13.625 "method": "accel_set_options", 00:19:13.625 "params": { 00:19:13.625 "small_cache_size": 128, 00:19:13.625 "large_cache_size": 16, 00:19:13.625 "task_count": 2048, 00:19:13.625 "sequence_count": 2048, 00:19:13.625 
"buf_count": 2048 00:19:13.625 } 00:19:13.625 } 00:19:13.625 ] 00:19:13.625 }, 00:19:13.625 { 00:19:13.625 "subsystem": "bdev", 00:19:13.625 "config": [ 00:19:13.625 { 00:19:13.625 "method": "bdev_set_options", 00:19:13.625 "params": { 00:19:13.625 "bdev_io_pool_size": 65535, 00:19:13.625 "bdev_io_cache_size": 256, 00:19:13.625 "bdev_auto_examine": true, 00:19:13.625 "iobuf_small_cache_size": 128, 00:19:13.625 "iobuf_large_cache_size": 16 00:19:13.625 } 00:19:13.625 }, 00:19:13.625 { 00:19:13.625 "method": "bdev_raid_set_options", 00:19:13.625 "params": { 00:19:13.625 "process_window_size_kb": 1024, 00:19:13.625 "process_max_bandwidth_mb_sec": 0 00:19:13.625 } 00:19:13.625 }, 00:19:13.625 { 00:19:13.625 "method": "bdev_iscsi_set_options", 00:19:13.625 "params": { 00:19:13.625 "timeout_sec": 30 00:19:13.625 } 00:19:13.625 }, 00:19:13.625 { 00:19:13.625 "method": "bdev_nvme_set_options", 00:19:13.626 "params": { 00:19:13.626 "action_on_timeout": "none", 00:19:13.626 "timeout_us": 0, 00:19:13.626 "timeout_admin_us": 0, 00:19:13.626 "keep_alive_timeout_ms": 10000, 00:19:13.626 "arbitration_burst": 0, 00:19:13.626 "low_priority_weight": 0, 00:19:13.626 "medium_priority_weight": 0, 00:19:13.626 "high_priority_weight": 0, 00:19:13.626 "nvme_adminq_poll_period_us": 10000, 00:19:13.626 "nvme_ioq_poll_period_us": 0, 00:19:13.626 "io_queue_requests": 512, 00:19:13.626 "delay_cmd_submit": true, 00:19:13.626 "transport_retry_count": 4, 00:19:13.626 "bdev_retry_count": 3, 00:19:13.626 "transport_ack_timeout": 0, 00:19:13.626 "ctrlr_loss_timeout_sec": 0, 00:19:13.626 "reconnect_delay_sec": 0, 00:19:13.626 "fast_io_fail_timeout_sec": 0, 00:19:13.626 "disable_auto_failback": false, 00:19:13.626 "generate_uuids": false, 00:19:13.626 "transport_tos": 0, 00:19:13.626 "nvme_error_stat": false, 00:19:13.626 "rdma_srq_size": 0, 00:19:13.626 "io_path_stat": false, 00:19:13.626 "allow_accel_sequence": false, 00:19:13.626 "rdma_max_cq_size": 0, 00:19:13.626 "rdma_cm_event_timeout_ms": 0, 
00:19:13.626 "dhchap_digests": [ 00:19:13.626 "sha256", 00:19:13.626 "sha384", 00:19:13.626 "sha512" 00:19:13.626 ], 00:19:13.626 "dhchap_dhgroups": [ 00:19:13.626 "null", 00:19:13.626 "ffdhe2048", 00:19:13.626 "ffdhe3072", 00:19:13.626 "ffdhe4096", 00:19:13.626 "ffdhe6144", 00:19:13.626 "ffdhe8192" 00:19:13.626 ] 00:19:13.626 } 00:19:13.626 }, 00:19:13.626 { 00:19:13.626 "method": "bdev_nvme_attach_controller", 00:19:13.626 "params": { 00:19:13.626 "name": "nvme0", 00:19:13.626 "trtype": "TCP", 00:19:13.626 "adrfam": "IPv4", 00:19:13.626 "traddr": "10.0.0.2", 00:19:13.626 "trsvcid": "4420", 00:19:13.626 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:13.626 "prchk_reftag": false, 00:19:13.626 "prchk_guard": false, 00:19:13.626 "ctrlr_loss_timeout_sec": 0, 00:19:13.626 "reconnect_delay_sec": 0, 00:19:13.626 "fast_io_fail_timeout_sec": 0, 00:19:13.626 "psk": "key0", 00:19:13.626 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:13.626 "hdgst": false, 00:19:13.626 "ddgst": false, 00:19:13.626 "multipath": "multipath" 00:19:13.626 } 00:19:13.626 }, 00:19:13.626 { 00:19:13.626 "method": "bdev_nvme_set_hotplug", 00:19:13.626 "params": { 00:19:13.626 "period_us": 100000, 00:19:13.626 "enable": false 00:19:13.626 } 00:19:13.626 }, 00:19:13.626 { 00:19:13.626 "method": "bdev_enable_histogram", 00:19:13.626 "params": { 00:19:13.626 "name": "nvme0n1", 00:19:13.626 "enable": true 00:19:13.626 } 00:19:13.626 }, 00:19:13.626 { 00:19:13.626 "method": "bdev_wait_for_examine" 00:19:13.626 } 00:19:13.626 ] 00:19:13.626 }, 00:19:13.626 { 00:19:13.626 "subsystem": "nbd", 00:19:13.626 "config": [] 00:19:13.626 } 00:19:13.626 ] 00:19:13.626 }' 00:19:13.626 21:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 3997950 00:19:13.626 21:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3997950 ']' 00:19:13.626 21:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3997950 00:19:13.626 21:00:04 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:13.626 21:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:13.626 21:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3997950 00:19:13.626 21:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:13.626 21:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:13.626 21:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3997950' 00:19:13.626 killing process with pid 3997950 00:19:13.626 21:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3997950 00:19:13.626 Received shutdown signal, test time was about 1.000000 seconds 00:19:13.626 00:19:13.626 Latency(us) 00:19:13.626 [2024-11-26T20:00:04.564Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:13.626 [2024-11-26T20:00:04.564Z] =================================================================================================================== 00:19:13.626 [2024-11-26T20:00:04.564Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:13.626 21:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3997950 00:19:13.885 21:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 3997906 00:19:13.885 21:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3997906 ']' 00:19:13.885 21:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3997906 00:19:13.885 21:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:13.885 21:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:13.885 
21:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3997906 00:19:13.885 21:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:13.885 21:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:13.885 21:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3997906' 00:19:13.885 killing process with pid 3997906 00:19:13.885 21:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3997906 00:19:13.885 21:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3997906 00:19:14.142 21:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:19:14.142 21:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:14.142 21:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:19:14.142 "subsystems": [ 00:19:14.142 { 00:19:14.142 "subsystem": "keyring", 00:19:14.142 "config": [ 00:19:14.142 { 00:19:14.142 "method": "keyring_file_add_key", 00:19:14.142 "params": { 00:19:14.142 "name": "key0", 00:19:14.142 "path": "/tmp/tmp.2CObFda196" 00:19:14.142 } 00:19:14.142 } 00:19:14.142 ] 00:19:14.142 }, 00:19:14.142 { 00:19:14.142 "subsystem": "iobuf", 00:19:14.142 "config": [ 00:19:14.142 { 00:19:14.142 "method": "iobuf_set_options", 00:19:14.142 "params": { 00:19:14.142 "small_pool_count": 8192, 00:19:14.142 "large_pool_count": 1024, 00:19:14.142 "small_bufsize": 8192, 00:19:14.142 "large_bufsize": 135168, 00:19:14.142 "enable_numa": false 00:19:14.142 } 00:19:14.142 } 00:19:14.142 ] 00:19:14.142 }, 00:19:14.142 { 00:19:14.142 "subsystem": "sock", 00:19:14.142 "config": [ 00:19:14.142 { 00:19:14.142 "method": "sock_set_default_impl", 00:19:14.142 "params": { 00:19:14.142 "impl_name": "posix" 
00:19:14.142 } 00:19:14.142 }, 00:19:14.142 { 00:19:14.142 "method": "sock_impl_set_options", 00:19:14.142 "params": { 00:19:14.142 "impl_name": "ssl", 00:19:14.142 "recv_buf_size": 4096, 00:19:14.142 "send_buf_size": 4096, 00:19:14.142 "enable_recv_pipe": true, 00:19:14.142 "enable_quickack": false, 00:19:14.142 "enable_placement_id": 0, 00:19:14.142 "enable_zerocopy_send_server": true, 00:19:14.142 "enable_zerocopy_send_client": false, 00:19:14.142 "zerocopy_threshold": 0, 00:19:14.142 "tls_version": 0, 00:19:14.142 "enable_ktls": false 00:19:14.142 } 00:19:14.142 }, 00:19:14.142 { 00:19:14.142 "method": "sock_impl_set_options", 00:19:14.142 "params": { 00:19:14.142 "impl_name": "posix", 00:19:14.142 "recv_buf_size": 2097152, 00:19:14.142 "send_buf_size": 2097152, 00:19:14.142 "enable_recv_pipe": true, 00:19:14.142 "enable_quickack": false, 00:19:14.142 "enable_placement_id": 0, 00:19:14.142 "enable_zerocopy_send_server": true, 00:19:14.142 "enable_zerocopy_send_client": false, 00:19:14.142 "zerocopy_threshold": 0, 00:19:14.142 "tls_version": 0, 00:19:14.142 "enable_ktls": false 00:19:14.142 } 00:19:14.142 } 00:19:14.142 ] 00:19:14.142 }, 00:19:14.142 { 00:19:14.142 "subsystem": "vmd", 00:19:14.142 "config": [] 00:19:14.142 }, 00:19:14.142 { 00:19:14.142 "subsystem": "accel", 00:19:14.142 "config": [ 00:19:14.142 { 00:19:14.142 "method": "accel_set_options", 00:19:14.142 "params": { 00:19:14.142 "small_cache_size": 128, 00:19:14.142 "large_cache_size": 16, 00:19:14.142 "task_count": 2048, 00:19:14.142 "sequence_count": 2048, 00:19:14.142 "buf_count": 2048 00:19:14.142 } 00:19:14.142 } 00:19:14.142 ] 00:19:14.142 }, 00:19:14.142 { 00:19:14.142 "subsystem": "bdev", 00:19:14.142 "config": [ 00:19:14.142 { 00:19:14.142 "method": "bdev_set_options", 00:19:14.142 "params": { 00:19:14.142 "bdev_io_pool_size": 65535, 00:19:14.142 "bdev_io_cache_size": 256, 00:19:14.142 "bdev_auto_examine": true, 00:19:14.142 "iobuf_small_cache_size": 128, 00:19:14.143 
"iobuf_large_cache_size": 16 00:19:14.143 } 00:19:14.143 }, 00:19:14.143 { 00:19:14.143 "method": "bdev_raid_set_options", 00:19:14.143 "params": { 00:19:14.143 "process_window_size_kb": 1024, 00:19:14.143 "process_max_bandwidth_mb_sec": 0 00:19:14.143 } 00:19:14.143 }, 00:19:14.143 { 00:19:14.143 "method": "bdev_iscsi_set_options", 00:19:14.143 "params": { 00:19:14.143 "timeout_sec": 30 00:19:14.143 } 00:19:14.143 }, 00:19:14.143 { 00:19:14.143 "method": "bdev_nvme_set_options", 00:19:14.143 "params": { 00:19:14.143 "action_on_timeout": "none", 00:19:14.143 "timeout_us": 0, 00:19:14.143 "timeout_admin_us": 0, 00:19:14.143 "keep_alive_timeout_ms": 10000, 00:19:14.143 "arbitration_burst": 0, 00:19:14.143 "low_priority_weight": 0, 00:19:14.143 "medium_priority_weight": 0, 00:19:14.143 "high_priority_weight": 0, 00:19:14.143 "nvme_adminq_poll_period_us": 10000, 00:19:14.143 "nvme_ioq_poll_period_us": 0, 00:19:14.143 "io_queue_requests": 0, 00:19:14.143 "delay_cmd_submit": true, 00:19:14.143 "transport_retry_count": 4, 00:19:14.143 "bdev_retry_count": 3, 00:19:14.143 "transport_ack_timeout": 0, 00:19:14.143 "ctrlr_loss_timeout_sec": 0, 00:19:14.143 "reconnect_delay_sec": 0, 00:19:14.143 "fast_io_fail_timeout_sec": 0, 00:19:14.143 "disable_auto_failback": false, 00:19:14.143 "generate_uuids": false, 00:19:14.143 "transport_tos": 0, 00:19:14.143 "nvme_error_stat": false, 00:19:14.143 "rdma_srq_size": 0, 00:19:14.143 "io_path_stat": false, 00:19:14.143 "allow_accel_sequence": false, 00:19:14.143 "rdma_max_cq_size": 0, 00:19:14.143 "rdma_cm_event_timeout_ms": 0, 00:19:14.143 "dhchap_digests": [ 00:19:14.143 "sha256", 00:19:14.143 "sha384", 00:19:14.143 "sha512" 00:19:14.143 ], 00:19:14.143 "dhchap_dhgroups": [ 00:19:14.143 "null", 00:19:14.143 "ffdhe2048", 00:19:14.143 "ffdhe3072", 00:19:14.143 "ffdhe4096", 00:19:14.143 "ffdhe6144", 00:19:14.143 "ffdhe8192" 00:19:14.143 ] 00:19:14.143 } 00:19:14.143 }, 00:19:14.143 { 00:19:14.143 "method": "bdev_nvme_set_hotplug", 
00:19:14.143 "params": { 00:19:14.143 "period_us": 100000, 00:19:14.143 "enable": false 00:19:14.143 } 00:19:14.143 }, 00:19:14.143 { 00:19:14.143 "method": "bdev_malloc_create", 00:19:14.143 "params": { 00:19:14.143 "name": "malloc0", 00:19:14.143 "num_blocks": 8192, 00:19:14.143 "block_size": 4096, 00:19:14.143 "physical_block_size": 4096, 00:19:14.143 "uuid": "914b0224-defd-418b-88d9-957d7aa0de85", 00:19:14.143 "optimal_io_boundary": 0, 00:19:14.143 "md_size": 0, 00:19:14.143 "dif_type": 0, 00:19:14.143 "dif_is_head_of_md": false, 00:19:14.143 "dif_pi_format": 0 00:19:14.143 } 00:19:14.143 }, 00:19:14.143 { 00:19:14.143 "method": "bdev_wait_for_examine" 00:19:14.143 } 00:19:14.143 ] 00:19:14.143 }, 00:19:14.143 { 00:19:14.143 "subsystem": "nbd", 00:19:14.143 "config": [] 00:19:14.143 }, 00:19:14.143 { 00:19:14.143 "subsystem": "scheduler", 00:19:14.143 "config": [ 00:19:14.143 { 00:19:14.143 "method": "framework_set_scheduler", 00:19:14.143 "params": { 00:19:14.143 "name": "static" 00:19:14.143 } 00:19:14.143 } 00:19:14.143 ] 00:19:14.143 }, 00:19:14.143 { 00:19:14.143 "subsystem": "nvmf", 00:19:14.143 "config": [ 00:19:14.143 { 00:19:14.143 "method": "nvmf_set_config", 00:19:14.143 "params": { 00:19:14.143 "discovery_filter": "match_any", 00:19:14.143 "admin_cmd_passthru": { 00:19:14.143 "identify_ctrlr": false 00:19:14.143 }, 00:19:14.143 "dhchap_digests": [ 00:19:14.143 "sha256", 00:19:14.143 "sha384", 00:19:14.143 "sha512" 00:19:14.143 ], 00:19:14.143 "dhchap_dhgroups": [ 00:19:14.143 "null", 00:19:14.143 "ffdhe2048", 00:19:14.143 "ffdhe3072", 00:19:14.143 "ffdhe4096", 00:19:14.143 "ffdhe6144", 00:19:14.143 "ffdhe8192" 00:19:14.143 ] 00:19:14.143 } 00:19:14.143 }, 00:19:14.143 { 00:19:14.143 "method": "nvmf_set_max_subsystems", 00:19:14.143 "params": { 00:19:14.143 "max_subsystems": 1024 00:19:14.143 } 00:19:14.143 }, 00:19:14.143 { 00:19:14.143 "method": "nvmf_set_crdt", 00:19:14.143 "params": { 00:19:14.143 "crdt1": 0, 00:19:14.143 "crdt2": 0, 00:19:14.143 
"crdt3": 0 00:19:14.143 } 00:19:14.143 }, 00:19:14.143 { 00:19:14.143 "method": "nvmf_create_transport", 00:19:14.143 "params": { 00:19:14.143 "trtype": "TCP", 00:19:14.143 "max_queue_depth": 128, 00:19:14.143 "max_io_qpairs_per_ctrlr": 127, 00:19:14.143 "in_capsule_data_size": 4096, 00:19:14.143 "max_io_size": 131072, 00:19:14.143 "io_unit_size": 131072, 00:19:14.143 "max_aq_depth": 128, 00:19:14.143 "num_shared_buffers": 511, 00:19:14.143 "buf_cache_size": 4294967295, 00:19:14.143 "dif_insert_or_strip": false, 00:19:14.143 "zcopy": false, 00:19:14.143 "c2h_success": false, 00:19:14.143 "sock_priority": 0, 00:19:14.143 "abort_timeout_sec": 1, 00:19:14.143 "ack_timeout": 0, 00:19:14.143 "data_wr_pool_size": 0 00:19:14.143 } 00:19:14.143 }, 00:19:14.143 { 00:19:14.143 "method": "nvmf_create_subsystem", 00:19:14.143 "params": { 00:19:14.143 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:14.143 "allow_any_host": false, 00:19:14.143 "serial_number": "00000000000000000000", 00:19:14.143 "model_number": "SPDK bdev Controller", 00:19:14.143 "max_namespaces": 32, 00:19:14.143 "min_cntlid": 1, 00:19:14.143 "max_cntlid": 65519, 00:19:14.143 "ana_reporting": false 00:19:14.143 } 00:19:14.143 }, 00:19:14.143 { 00:19:14.143 "method": "nvmf_subsystem_add_host", 00:19:14.143 "params": { 00:19:14.143 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:14.143 "host": "nqn.2016-06.io.spdk:host1", 00:19:14.143 "psk": "key0" 00:19:14.143 } 00:19:14.143 }, 00:19:14.143 { 00:19:14.143 "method": "nvmf_subsystem_add_ns", 00:19:14.143 "params": { 00:19:14.143 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:14.143 "namespace": { 00:19:14.143 "nsid": 1, 00:19:14.143 "bdev_name": "malloc0", 00:19:14.143 "nguid": "914B0224DEFD418B88D9957D7AA0DE85", 00:19:14.143 "uuid": "914b0224-defd-418b-88d9-957d7aa0de85", 00:19:14.143 "no_auto_visible": false 00:19:14.143 } 00:19:14.143 } 00:19:14.143 }, 00:19:14.143 { 00:19:14.143 "method": "nvmf_subsystem_add_listener", 00:19:14.143 "params": { 00:19:14.143 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:19:14.143 "listen_address": { 00:19:14.143 "trtype": "TCP", 00:19:14.143 "adrfam": "IPv4", 00:19:14.143 "traddr": "10.0.0.2", 00:19:14.143 "trsvcid": "4420" 00:19:14.143 }, 00:19:14.143 "secure_channel": false, 00:19:14.143 "sock_impl": "ssl" 00:19:14.143 } 00:19:14.143 } 00:19:14.143 ] 00:19:14.143 } 00:19:14.143 ] 00:19:14.143 }' 00:19:14.143 21:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:14.143 21:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:14.143 21:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3998366 00:19:14.143 21:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:19:14.143 21:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3998366 00:19:14.143 21:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3998366 ']' 00:19:14.143 21:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:14.143 21:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:14.143 21:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:14.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:14.143 21:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:14.143 21:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:14.143 [2024-11-26 21:00:04.982470] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:19:14.143 [2024-11-26 21:00:04.982553] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:14.143 [2024-11-26 21:00:05.059111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.401 [2024-11-26 21:00:05.119866] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:14.401 [2024-11-26 21:00:05.119913] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:14.401 [2024-11-26 21:00:05.119928] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:14.401 [2024-11-26 21:00:05.119940] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:14.401 [2024-11-26 21:00:05.119950] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:14.401 [2024-11-26 21:00:05.120574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:14.659 [2024-11-26 21:00:05.374580] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:14.659 [2024-11-26 21:00:05.406606] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:14.659 [2024-11-26 21:00:05.406921] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:15.226 21:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:15.226 21:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:15.226 21:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:15.226 21:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:15.226 21:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:15.226 21:00:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:15.226 21:00:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=3998519 00:19:15.226 21:00:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 3998519 /var/tmp/bdevperf.sock 00:19:15.226 21:00:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3998519 ']' 00:19:15.226 21:00:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:15.226 21:00:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:19:15.226 21:00:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:19:15.226 21:00:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:19:15.226 "subsystems": [ 00:19:15.226 { 00:19:15.226 "subsystem": "keyring", 00:19:15.226 "config": [ 00:19:15.226 { 00:19:15.226 "method": "keyring_file_add_key", 00:19:15.226 "params": { 00:19:15.226 "name": "key0", 00:19:15.226 "path": "/tmp/tmp.2CObFda196" 00:19:15.226 } 00:19:15.226 } 00:19:15.226 ] 00:19:15.226 }, 00:19:15.226 { 00:19:15.226 "subsystem": "iobuf", 00:19:15.226 "config": [ 00:19:15.226 { 00:19:15.226 "method": "iobuf_set_options", 00:19:15.226 "params": { 00:19:15.226 "small_pool_count": 8192, 00:19:15.226 "large_pool_count": 1024, 00:19:15.226 "small_bufsize": 8192, 00:19:15.226 "large_bufsize": 135168, 00:19:15.226 "enable_numa": false 00:19:15.226 } 00:19:15.226 } 00:19:15.226 ] 00:19:15.226 }, 00:19:15.226 { 00:19:15.226 "subsystem": "sock", 00:19:15.226 "config": [ 00:19:15.226 { 00:19:15.226 "method": "sock_set_default_impl", 00:19:15.226 "params": { 00:19:15.226 "impl_name": "posix" 00:19:15.226 } 00:19:15.226 }, 00:19:15.226 { 00:19:15.226 "method": "sock_impl_set_options", 00:19:15.226 "params": { 00:19:15.226 "impl_name": "ssl", 00:19:15.226 "recv_buf_size": 4096, 00:19:15.226 "send_buf_size": 4096, 00:19:15.226 "enable_recv_pipe": true, 00:19:15.226 "enable_quickack": false, 00:19:15.226 "enable_placement_id": 0, 00:19:15.226 "enable_zerocopy_send_server": true, 00:19:15.226 "enable_zerocopy_send_client": false, 00:19:15.226 "zerocopy_threshold": 0, 00:19:15.226 "tls_version": 0, 00:19:15.226 "enable_ktls": false 00:19:15.226 } 00:19:15.226 }, 00:19:15.226 { 00:19:15.226 "method": "sock_impl_set_options", 00:19:15.226 "params": { 00:19:15.226 "impl_name": "posix", 00:19:15.226 "recv_buf_size": 2097152, 00:19:15.226 "send_buf_size": 2097152, 00:19:15.226 "enable_recv_pipe": true, 00:19:15.226 "enable_quickack": false, 00:19:15.226 "enable_placement_id": 0, 00:19:15.226 "enable_zerocopy_send_server": true, 00:19:15.226 
"enable_zerocopy_send_client": false, 00:19:15.226 "zerocopy_threshold": 0, 00:19:15.226 "tls_version": 0, 00:19:15.226 "enable_ktls": false 00:19:15.226 } 00:19:15.226 } 00:19:15.226 ] 00:19:15.226 }, 00:19:15.226 { 00:19:15.226 "subsystem": "vmd", 00:19:15.226 "config": [] 00:19:15.226 }, 00:19:15.226 { 00:19:15.226 "subsystem": "accel", 00:19:15.226 "config": [ 00:19:15.226 { 00:19:15.226 "method": "accel_set_options", 00:19:15.226 "params": { 00:19:15.226 "small_cache_size": 128, 00:19:15.226 "large_cache_size": 16, 00:19:15.226 "task_count": 2048, 00:19:15.226 "sequence_count": 2048, 00:19:15.226 "buf_count": 2048 00:19:15.226 } 00:19:15.226 } 00:19:15.226 ] 00:19:15.226 }, 00:19:15.226 { 00:19:15.226 "subsystem": "bdev", 00:19:15.226 "config": [ 00:19:15.226 { 00:19:15.226 "method": "bdev_set_options", 00:19:15.226 "params": { 00:19:15.226 "bdev_io_pool_size": 65535, 00:19:15.226 "bdev_io_cache_size": 256, 00:19:15.226 "bdev_auto_examine": true, 00:19:15.226 "iobuf_small_cache_size": 128, 00:19:15.226 "iobuf_large_cache_size": 16 00:19:15.226 } 00:19:15.226 }, 00:19:15.226 { 00:19:15.226 "method": "bdev_raid_set_options", 00:19:15.226 "params": { 00:19:15.226 "process_window_size_kb": 1024, 00:19:15.226 "process_max_bandwidth_mb_sec": 0 00:19:15.226 } 00:19:15.226 }, 00:19:15.226 { 00:19:15.226 "method": "bdev_iscsi_set_options", 00:19:15.226 "params": { 00:19:15.226 "timeout_sec": 30 00:19:15.226 } 00:19:15.226 }, 00:19:15.226 { 00:19:15.226 "method": "bdev_nvme_set_options", 00:19:15.226 "params": { 00:19:15.226 "action_on_timeout": "none", 00:19:15.226 "timeout_us": 0, 00:19:15.226 "timeout_admin_us": 0, 00:19:15.226 "keep_alive_timeout_ms": 10000, 00:19:15.226 "arbitration_burst": 0, 00:19:15.226 "low_priority_weight": 0, 00:19:15.226 "medium_priority_weight": 0, 00:19:15.226 "high_priority_weight": 0, 00:19:15.227 "nvme_adminq_poll_period_us": 10000, 00:19:15.227 "nvme_ioq_poll_period_us": 0, 00:19:15.227 "io_queue_requests": 512, 00:19:15.227 
"delay_cmd_submit": true, 00:19:15.227 "transport_retry_count": 4, 00:19:15.227 "bdev_retry_count": 3, 00:19:15.227 "transport_ack_timeout": 0, 00:19:15.227 "ctrlr_loss_timeout_sec": 0, 00:19:15.227 "reconnect_delay_sec": 0, 00:19:15.227 "fast_io_fail_timeout_sec": 0, 00:19:15.227 "disable_auto_failback": false, 00:19:15.227 "generate_uuids": false, 00:19:15.227 "transport_tos": 0, 00:19:15.227 "nvme_error_stat": false, 00:19:15.227 "rdma_srq_size": 0, 00:19:15.227 "io_path_stat": false, 00:19:15.227 "allow_accel_sequence": false, 00:19:15.227 "rdma_max_cq_size": 0, 00:19:15.227 "rdma_cm_event_timeout_ms": 0, 00:19:15.227 "dhchap_digests": [ 00:19:15.227 "sha256", 00:19:15.227 "sha384", 00:19:15.227 "sha512" 00:19:15.227 ], 00:19:15.227 "dhchap_dhgroups": [ 00:19:15.227 "null", 00:19:15.227 "ffdhe2048", 00:19:15.227 "ffdhe3072", 00:19:15.227 "ffdhe4096", 00:19:15.227 "ffdhe6144", 00:19:15.227 "ffdhe8192" 00:19:15.227 ] 00:19:15.227 } 00:19:15.227 }, 00:19:15.227 { 00:19:15.227 "method": "bdev_nvme_attach_controller", 00:19:15.227 "params": { 00:19:15.227 "name": "nvme0", 00:19:15.227 "trtype": "TCP", 00:19:15.227 "adrfam": "IPv4", 00:19:15.227 "traddr": "10.0.0.2", 00:19:15.227 "trsvcid": "4420", 00:19:15.227 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:15.227 "prchk_reftag": false, 00:19:15.227 "prchk_guard": false, 00:19:15.227 "ctrlr_loss_timeout_sec": 0, 00:19:15.227 "reconnect_delay_sec": 0, 00:19:15.227 "fast_io_fail_timeout_sec": 0, 00:19:15.227 "psk": "key0", 00:19:15.227 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:15.227 "hdgst": false, 00:19:15.227 "ddgst": false, 00:19:15.227 "multipath": "multipath" 00:19:15.227 } 00:19:15.227 }, 00:19:15.227 { 00:19:15.227 "method": "bdev_nvme_set_hotplug", 00:19:15.227 "params": { 00:19:15.227 "period_us": 100000, 00:19:15.227 "enable": false 00:19:15.227 } 00:19:15.227 }, 00:19:15.227 { 00:19:15.227 "method": "bdev_enable_histogram", 00:19:15.227 "params": { 00:19:15.227 "name": "nvme0n1", 00:19:15.227 "enable": 
true 00:19:15.227 } 00:19:15.227 }, 00:19:15.227 { 00:19:15.227 "method": "bdev_wait_for_examine" 00:19:15.227 } 00:19:15.227 ] 00:19:15.227 }, 00:19:15.227 { 00:19:15.227 "subsystem": "nbd", 00:19:15.227 "config": [] 00:19:15.227 } 00:19:15.227 ] 00:19:15.227 }' 00:19:15.227 21:00:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:15.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:15.227 21:00:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:15.227 21:00:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:15.227 [2024-11-26 21:00:06.064334] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:19:15.227 [2024-11-26 21:00:06.064410] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3998519 ] 00:19:15.227 [2024-11-26 21:00:06.137936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.486 [2024-11-26 21:00:06.202653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:15.486 [2024-11-26 21:00:06.393233] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:16.420 21:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:16.420 21:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:16.420 21:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:16.420 21:00:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:19:16.676 21:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.676 21:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:16.676 Running I/O for 1 seconds... 00:19:17.607 1748.00 IOPS, 6.83 MiB/s 00:19:17.607 Latency(us) 00:19:17.607 [2024-11-26T20:00:08.545Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:17.607 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:17.607 Verification LBA range: start 0x0 length 0x2000 00:19:17.607 nvme0n1 : 1.05 1789.34 6.99 0.00 0.00 70055.27 6747.78 59030.95 00:19:17.607 [2024-11-26T20:00:08.545Z] =================================================================================================================== 00:19:17.607 [2024-11-26T20:00:08.545Z] Total : 1789.34 6.99 0.00 0.00 70055.27 6747.78 59030.95 00:19:17.607 { 00:19:17.607 "results": [ 00:19:17.607 { 00:19:17.607 "job": "nvme0n1", 00:19:17.607 "core_mask": "0x2", 00:19:17.607 "workload": "verify", 00:19:17.607 "status": "finished", 00:19:17.607 "verify_range": { 00:19:17.607 "start": 0, 00:19:17.607 "length": 8192 00:19:17.607 }, 00:19:17.607 "queue_depth": 128, 00:19:17.607 "io_size": 4096, 00:19:17.607 "runtime": 1.048429, 00:19:17.607 "iops": 1789.3438659174822, 00:19:17.607 "mibps": 6.989624476240165, 00:19:17.607 "io_failed": 0, 00:19:17.607 "io_timeout": 0, 00:19:17.607 "avg_latency_us": 70055.26950643607, 00:19:17.607 "min_latency_us": 6747.780740740741, 00:19:17.608 "max_latency_us": 59030.945185185185 00:19:17.608 } 00:19:17.608 ], 00:19:17.608 "core_count": 1 00:19:17.608 } 00:19:17.608 21:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:19:17.608 21:00:08 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:19:17.608 21:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:19:17.608 21:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:19:17.608 21:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:19:17.608 21:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:19:17.608 21:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:17.608 21:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:19:17.608 21:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:19:17.608 21:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:19:17.608 21:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:17.867 nvmf_trace.0 00:19:17.867 21:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:19:17.867 21:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 3998519 00:19:17.867 21:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3998519 ']' 00:19:17.867 21:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3998519 00:19:17.867 21:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:17.867 21:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:17.867 21:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 3998519 00:19:17.867 21:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:17.867 21:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:17.867 21:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3998519' 00:19:17.867 killing process with pid 3998519 00:19:17.867 21:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3998519 00:19:17.867 Received shutdown signal, test time was about 1.000000 seconds 00:19:17.867 00:19:17.867 Latency(us) 00:19:17.867 [2024-11-26T20:00:08.805Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:17.867 [2024-11-26T20:00:08.805Z] =================================================================================================================== 00:19:17.867 [2024-11-26T20:00:08.805Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:17.867 21:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3998519 00:19:18.126 21:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:19:18.126 21:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:18.126 21:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:19:18.126 21:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:18.126 21:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:19:18.126 21:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:18.126 21:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:18.126 rmmod nvme_tcp 00:19:18.126 rmmod nvme_fabrics 00:19:18.126 rmmod nvme_keyring 00:19:18.126 21:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:19:18.126 21:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:19:18.126 21:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:19:18.126 21:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 3998366 ']' 00:19:18.126 21:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 3998366 00:19:18.126 21:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3998366 ']' 00:19:18.126 21:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3998366 00:19:18.126 21:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:18.126 21:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:18.126 21:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3998366 00:19:18.126 21:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:18.126 21:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:18.126 21:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3998366' 00:19:18.126 killing process with pid 3998366 00:19:18.126 21:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3998366 00:19:18.126 21:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3998366 00:19:18.386 21:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:18.386 21:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:18.386 21:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:18.386 21:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@297 -- # iptr 00:19:18.386 21:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:19:18.386 21:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:18.386 21:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:19:18.386 21:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:18.386 21:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:18.386 21:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:18.386 21:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:18.386 21:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:20.922 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:20.923 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.cGNyNYf3DZ /tmp/tmp.iQNONcyT4r /tmp/tmp.2CObFda196 00:19:20.923 00:19:20.923 real 1m24.431s 00:19:20.923 user 2m18.537s 00:19:20.923 sys 0m26.610s 00:19:20.923 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:20.923 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:20.923 ************************************ 00:19:20.923 END TEST nvmf_tls 00:19:20.923 ************************************ 00:19:20.923 21:00:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:20.923 21:00:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:20.923 21:00:11 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:19:20.923 21:00:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:20.923 ************************************ 00:19:20.923 START TEST nvmf_fips 00:19:20.923 ************************************ 00:19:20.923 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:20.923 * Looking for test storage... 00:19:20.923 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:19:20.923 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:20.923 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:19:20.923 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:20.923 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:20.923 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:20.923 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:20.923 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:20.923 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:20.923 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:20.923 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:20.923 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:19:20.923 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:19:20.923 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:19:20.923 
21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:19:20.923 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:20.923 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:20.923 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:19:20.923 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:20.923 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:20.923 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:20.923 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:20.923 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:20.923 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:20.923 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:20.923 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:19:20.923 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:19:20.923 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:20.923 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:19:20.923 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:19:20.923 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:20.923 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:20.923 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:19:20.923 21:00:11 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:20.923 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:20.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:20.923 --rc genhtml_branch_coverage=1 00:19:20.923 --rc genhtml_function_coverage=1 00:19:20.923 --rc genhtml_legend=1 00:19:20.923 --rc geninfo_all_blocks=1 00:19:20.923 --rc geninfo_unexecuted_blocks=1 00:19:20.923 00:19:20.923 ' 00:19:20.923 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:20.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:20.923 --rc genhtml_branch_coverage=1 00:19:20.923 --rc genhtml_function_coverage=1 00:19:20.923 --rc genhtml_legend=1 00:19:20.923 --rc geninfo_all_blocks=1 00:19:20.923 --rc geninfo_unexecuted_blocks=1 00:19:20.923 00:19:20.923 ' 00:19:20.923 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:20.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:20.923 --rc genhtml_branch_coverage=1 00:19:20.923 --rc genhtml_function_coverage=1 00:19:20.923 --rc genhtml_legend=1 00:19:20.923 --rc geninfo_all_blocks=1 00:19:20.923 --rc geninfo_unexecuted_blocks=1 00:19:20.923 00:19:20.923 ' 00:19:20.923 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:20.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:20.923 --rc genhtml_branch_coverage=1 00:19:20.923 --rc genhtml_function_coverage=1 00:19:20.923 --rc genhtml_legend=1 00:19:20.923 --rc geninfo_all_blocks=1 00:19:20.923 --rc geninfo_unexecuted_blocks=1 00:19:20.923 00:19:20.923 ' 00:19:20.923 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:19:20.923 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:19:20.923 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:20.923 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:20.923 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:20.923 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:20.923 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:20.923 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:20.923 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:20.923 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:20.923 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:20.923 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:20.923 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:20.923 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:20.923 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:20.923 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:20.923 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:20.923 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:20.923 21:00:11 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:20.923 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:19:20.923 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:20.923 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:20.923 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:20.924 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.924 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.924 21:00:11 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.924 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:19:20.924 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.924 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:19:20.924 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:20.924 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:20.924 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:20.924 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:19:20.924 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:20.924 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:20.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:20.924 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:20.924 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:20.924 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:20.924 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:20.924 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:19:20.924 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:19:20.924 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:19:20.924 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:19:20.924 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:19:20.924 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:19:20.924 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:20.924 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:20.924 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:20.924 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:20.924 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:20.924 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@337 -- # read -ra ver2 00:19:20.924 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:19:20.924 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:19:20.924 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:19:20.924 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:20.924 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:20.924 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:19:20.924 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:20.924 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:20.924 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:19:20.924 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:20.924 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:20.924 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:20.924 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:19:20.924 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:19:20.924 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:20.924 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:20.924 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:20.924 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:19:20.924 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] 
)) 00:19:20.924 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:20.924 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:19:20.924 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:20.924 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:20.924 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:20.924 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:20.924 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:20.924 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:20.924 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:19:20.924 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:19:20.924 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:20.924 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:19:20.924 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:19:20.924 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:20.924 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:19:20.924 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:19:20.924 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:19:20.924 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:19:20.924 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:19:20.924 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:19:20.924 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:19:20.924 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:19:20.924 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:19:20.924 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:19:20.924 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:19:20.925 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:19:20.925 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:19:20.925 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:19:20.925 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:19:20.925 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:19:20.925 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:19:20.925 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:19:20.925 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:19:20.925 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:19:20.925 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:19:20.925 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:19:20.925 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:19:20.925 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:19:20.925 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:19:20.925 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:20.925 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:19:20.925 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:20.925 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:19:20.925 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:20.925 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:19:20.925 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:19:20.925 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:19:20.925 Error setting digest 00:19:20.925 4002F912F77F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:19:20.925 4002F912F77F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:19:20.925 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:19:20.925 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:20.925 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:20.925 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:20.925 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:19:20.925 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:20.925 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:20.925 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:20.925 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:20.925 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:20.925 21:00:11 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:20.925 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:20.925 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:20.925 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:20.925 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:20.925 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:19:20.925 21:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:22.830 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:22.830 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:19:22.830 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:22.830 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:22.830 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:22.830 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:22.830 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:22.830 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:19:22.830 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:22.830 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:19:22.830 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:19:22.830 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:19:22.830 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:19:22.830 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:19:22.830 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:19:22.830 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:22.830 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:22.830 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:22.830 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:22.830 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:22.830 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:22.830 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:22.830 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:22.830 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:22.830 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:22.830 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:22.830 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:22.830 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
00:19:22.830 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:22.830 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:22.830 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:22.830 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:22.830 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:22.830 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:22.830 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:22.830 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:22.830 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:22.830 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:22.830 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:22.830 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:22.830 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:22.830 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:22.830 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:22.830 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:22.830 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:22.830 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:22.830 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:19:22.830 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:22.830 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:22.830 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:22.830 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:22.830 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:22.830 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:22.830 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:22.830 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:22.830 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:22.830 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:22.830 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:22.830 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:22.830 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:22.830 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:22.830 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:22.830 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:22.830 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:22.830 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:19:22.830 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:22.830 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:22.830 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:22.830 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:22.830 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:22.830 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:22.830 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:22.831 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:22.831 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:19:22.831 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:22.831 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:22.831 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:22.831 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:22.831 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:22.831 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:22.831 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:22.831 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:22.831 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:22.831 21:00:13 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:22.831 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:22.831 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:22.831 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:22.831 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:22.831 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:22.831 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:22.831 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:22.831 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:22.831 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:22.831 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:22.831 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:22.831 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:23.089 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:23.089 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:23.089 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:23.089 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:23.089 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:23.089 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:19:23.089 00:19:23.089 --- 10.0.0.2 ping statistics --- 00:19:23.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:23.089 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:19:23.089 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:23.089 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:23.089 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:19:23.089 00:19:23.090 --- 10.0.0.1 ping statistics --- 00:19:23.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:23.090 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:19:23.090 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:23.090 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:19:23.090 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:23.090 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:23.090 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:23.090 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:23.090 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:23.090 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:23.090 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:23.090 21:00:13 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:19:23.090 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:23.090 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:23.090 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:23.090 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=4001391 00:19:23.090 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:23.090 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 4001391 00:19:23.090 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 4001391 ']' 00:19:23.090 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:23.090 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:23.090 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:23.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:23.090 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:23.090 21:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:23.090 [2024-11-26 21:00:13.893958] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:19:23.090 [2024-11-26 21:00:13.894051] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:23.090 [2024-11-26 21:00:13.961562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:23.090 [2024-11-26 21:00:14.013884] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:23.090 [2024-11-26 21:00:14.013953] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:23.090 [2024-11-26 21:00:14.013981] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:23.090 [2024-11-26 21:00:14.013992] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:23.090 [2024-11-26 21:00:14.014001] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:23.090 [2024-11-26 21:00:14.014616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:23.348 21:00:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:23.348 21:00:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:19:23.348 21:00:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:23.348 21:00:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:23.348 21:00:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:23.348 21:00:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:23.348 21:00:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:19:23.348 21:00:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:23.348 21:00:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:19:23.348 21:00:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.D4Q 00:19:23.348 21:00:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:23.348 21:00:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.D4Q 00:19:23.348 21:00:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.D4Q 00:19:23.348 21:00:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.D4Q 00:19:23.348 21:00:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:23.607 [2024-11-26 21:00:14.469191] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:23.607 [2024-11-26 21:00:14.485188] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:23.607 [2024-11-26 21:00:14.485469] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:23.607 malloc0 00:19:23.865 21:00:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:23.865 21:00:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=4001429 00:19:23.865 21:00:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:23.865 21:00:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 4001429 /var/tmp/bdevperf.sock 00:19:23.865 21:00:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 4001429 ']' 00:19:23.865 21:00:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:23.865 21:00:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:23.865 21:00:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:23.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:23.865 21:00:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:23.865 21:00:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:23.865 [2024-11-26 21:00:14.623096] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:19:23.865 [2024-11-26 21:00:14.623177] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4001429 ] 00:19:23.865 [2024-11-26 21:00:14.689088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:23.865 [2024-11-26 21:00:14.747284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:24.123 21:00:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:24.123 21:00:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:19:24.123 21:00:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.D4Q 00:19:24.381 21:00:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:24.639 [2024-11-26 21:00:15.369067] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:24.639 TLSTESTn1 00:19:24.639 21:00:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:24.899 Running I/O for 10 seconds... 
00:19:26.765 3423.00 IOPS, 13.37 MiB/s [2024-11-26T20:00:18.638Z] 3519.00 IOPS, 13.75 MiB/s [2024-11-26T20:00:20.010Z] 3532.33 IOPS, 13.80 MiB/s [2024-11-26T20:00:20.945Z] 3539.75 IOPS, 13.83 MiB/s [2024-11-26T20:00:21.879Z] 3555.60 IOPS, 13.89 MiB/s [2024-11-26T20:00:22.812Z] 3495.83 IOPS, 13.66 MiB/s [2024-11-26T20:00:23.743Z] 3434.29 IOPS, 13.42 MiB/s [2024-11-26T20:00:24.677Z] 3393.75 IOPS, 13.26 MiB/s [2024-11-26T20:00:25.614Z] 3361.89 IOPS, 13.13 MiB/s [2024-11-26T20:00:25.873Z] 3338.80 IOPS, 13.04 MiB/s 00:19:34.935 Latency(us) 00:19:34.935 [2024-11-26T20:00:25.873Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:34.935 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:34.935 Verification LBA range: start 0x0 length 0x2000 00:19:34.935 TLSTESTn1 : 10.03 3342.33 13.06 0.00 0.00 38226.39 6213.78 46797.56 00:19:34.935 [2024-11-26T20:00:25.873Z] =================================================================================================================== 00:19:34.935 [2024-11-26T20:00:25.873Z] Total : 3342.33 13.06 0.00 0.00 38226.39 6213.78 46797.56 00:19:34.935 { 00:19:34.935 "results": [ 00:19:34.935 { 00:19:34.935 "job": "TLSTESTn1", 00:19:34.935 "core_mask": "0x4", 00:19:34.935 "workload": "verify", 00:19:34.935 "status": "finished", 00:19:34.935 "verify_range": { 00:19:34.935 "start": 0, 00:19:34.935 "length": 8192 00:19:34.935 }, 00:19:34.935 "queue_depth": 128, 00:19:34.935 "io_size": 4096, 00:19:34.935 "runtime": 10.027742, 00:19:34.935 "iops": 3342.32771445456, 00:19:34.935 "mibps": 13.055967634588125, 00:19:34.935 "io_failed": 0, 00:19:34.935 "io_timeout": 0, 00:19:34.935 "avg_latency_us": 38226.39232013013, 00:19:34.935 "min_latency_us": 6213.783703703703, 00:19:34.935 "max_latency_us": 46797.55851851852 00:19:34.935 } 00:19:34.935 ], 00:19:34.935 "core_count": 1 00:19:34.935 } 00:19:34.935 21:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:19:34.935 
21:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:19:34.935 21:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:19:34.935 21:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:19:34.935 21:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:19:34.935 21:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:34.935 21:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:19:34.935 21:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:19:34.935 21:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:19:34.935 21:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:34.935 nvmf_trace.0 00:19:34.935 21:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:19:34.935 21:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 4001429 00:19:34.935 21:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 4001429 ']' 00:19:34.935 21:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 4001429 00:19:34.935 21:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:19:34.935 21:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:34.935 21:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4001429 00:19:34.935 21:00:25 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:34.935 21:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:34.935 21:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4001429' 00:19:34.935 killing process with pid 4001429 00:19:34.935 21:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 4001429 00:19:34.935 Received shutdown signal, test time was about 10.000000 seconds 00:19:34.935 00:19:34.935 Latency(us) 00:19:34.935 [2024-11-26T20:00:25.873Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:34.935 [2024-11-26T20:00:25.873Z] =================================================================================================================== 00:19:34.935 [2024-11-26T20:00:25.873Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:34.935 21:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 4001429 00:19:35.194 21:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:19:35.194 21:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:35.194 21:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:19:35.194 21:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:35.194 21:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:19:35.194 21:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:35.194 21:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:35.194 rmmod nvme_tcp 00:19:35.194 rmmod nvme_fabrics 00:19:35.194 rmmod nvme_keyring 00:19:35.194 21:00:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:19:35.194 21:00:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:19:35.194 21:00:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:19:35.194 21:00:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 4001391 ']' 00:19:35.194 21:00:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 4001391 00:19:35.194 21:00:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 4001391 ']' 00:19:35.194 21:00:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 4001391 00:19:35.194 21:00:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:19:35.194 21:00:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:35.194 21:00:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4001391 00:19:35.194 21:00:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:35.194 21:00:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:35.195 21:00:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4001391' 00:19:35.195 killing process with pid 4001391 00:19:35.195 21:00:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 4001391 00:19:35.195 21:00:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 4001391 00:19:35.454 21:00:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:35.454 21:00:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:35.454 21:00:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:35.454 21:00:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@297 -- # iptr 00:19:35.454 21:00:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:19:35.454 21:00:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:35.454 21:00:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:19:35.454 21:00:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:35.454 21:00:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:35.454 21:00:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:35.454 21:00:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:35.454 21:00:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:37.987 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:37.987 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.D4Q 00:19:37.987 00:19:37.987 real 0m17.113s 00:19:37.987 user 0m22.469s 00:19:37.987 sys 0m5.701s 00:19:37.987 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:37.987 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:37.987 ************************************ 00:19:37.987 END TEST nvmf_fips 00:19:37.987 ************************************ 00:19:37.987 21:00:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:37.987 21:00:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:37.987 21:00:28 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:19:37.987 21:00:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:37.987 ************************************ 00:19:37.987 START TEST nvmf_control_msg_list 00:19:37.987 ************************************ 00:19:37.987 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:37.987 * Looking for test storage... 00:19:37.987 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:37.987 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:37.987 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:19:37.987 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:37.987 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:37.987 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:37.987 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:37.987 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:37.987 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:19:37.987 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:19:37.987 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:19:37.987 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:19:37.987 21:00:28 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:19:37.987 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:19:37.987 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:19:37.987 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:37.987 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:19:37.987 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:19:37.987 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:37.987 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:37.987 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:19:37.987 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:19:37.987 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:37.987 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:19:37.987 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:19:37.987 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:19:37.987 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:19:37.987 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:37.987 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:19:37.987 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- scripts/common.sh@366 -- # ver2[v]=2 00:19:37.987 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:37.987 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:37.987 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:19:37.987 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:37.987 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:37.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:37.987 --rc genhtml_branch_coverage=1 00:19:37.987 --rc genhtml_function_coverage=1 00:19:37.987 --rc genhtml_legend=1 00:19:37.987 --rc geninfo_all_blocks=1 00:19:37.987 --rc geninfo_unexecuted_blocks=1 00:19:37.987 00:19:37.987 ' 00:19:37.987 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:37.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:37.987 --rc genhtml_branch_coverage=1 00:19:37.987 --rc genhtml_function_coverage=1 00:19:37.987 --rc genhtml_legend=1 00:19:37.987 --rc geninfo_all_blocks=1 00:19:37.987 --rc geninfo_unexecuted_blocks=1 00:19:37.987 00:19:37.987 ' 00:19:37.987 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:37.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:37.987 --rc genhtml_branch_coverage=1 00:19:37.987 --rc genhtml_function_coverage=1 00:19:37.987 --rc genhtml_legend=1 00:19:37.987 --rc geninfo_all_blocks=1 00:19:37.987 --rc geninfo_unexecuted_blocks=1 00:19:37.987 00:19:37.987 ' 00:19:37.987 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # 
LCOV='lcov 00:19:37.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:37.988 --rc genhtml_branch_coverage=1 00:19:37.988 --rc genhtml_function_coverage=1 00:19:37.988 --rc genhtml_legend=1 00:19:37.988 --rc geninfo_all_blocks=1 00:19:37.988 --rc geninfo_unexecuted_blocks=1 00:19:37.988 00:19:37.988 ' 00:19:37.988 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:37.988 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:19:37.988 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:37.988 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:37.988 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:37.988 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:37.988 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:37.988 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:37.988 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:37.988 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:37.988 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:37.988 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:37.988 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
00:19:37.988 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:37.988 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:37.988 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:37.988 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:37.988 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:37.988 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:37.988 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:19:37.988 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:37.988 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:37.988 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:37.988 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.988 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.988 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.988 21:00:28 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:19:37.988 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.988 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:19:37.988 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:37.988 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:37.988 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:37.988 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:37.988 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:37.988 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:37.988 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:37.988 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:37.988 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:37.988 21:00:28 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:37.988 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:19:37.988 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:37.988 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:37.988 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:37.988 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:37.988 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:37.988 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:37.988 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:37.988 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:37.988 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:37.988 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:37.988 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:19:37.988 21:00:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:39.895 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:39.895 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:19:39.895 21:00:30 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:39.895 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:39.895 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:39.895 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:39.895 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:39.895 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:39.896 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:39.896 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:39.896 21:00:30 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:39.896 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:39.896 21:00:30 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:39.896 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:39.896 21:00:30 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:39.896 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:39.896 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:19:39.896 00:19:39.896 --- 10.0.0.2 ping statistics --- 00:19:39.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:39.896 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:39.896 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:39.896 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:19:39.896 00:19:39.896 --- 10.0.0.1 ping statistics --- 00:19:39.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:39.896 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:39.896 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:39.897 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:39.897 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:39.897 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:19:39.897 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:39.897 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:39.897 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:19:39.897 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:39.897 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:39.897 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:39.897 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=4004693 00:19:39.897 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:39.897 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 4004693 00:19:39.897 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 4004693 ']' 00:19:39.897 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:39.897 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:39.897 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:39.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:39.897 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:39.897 21:00:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:40.157 [2024-11-26 21:00:30.839232] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:19:40.157 [2024-11-26 21:00:30.839293] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:40.157 [2024-11-26 21:00:30.914652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:40.157 [2024-11-26 21:00:30.975766] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:40.157 [2024-11-26 21:00:30.975842] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:40.157 [2024-11-26 21:00:30.975868] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:40.157 [2024-11-26 21:00:30.975882] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:40.157 [2024-11-26 21:00:30.975894] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:40.157 [2024-11-26 21:00:30.976597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:40.417 21:00:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:40.417 21:00:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:19:40.417 21:00:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:40.417 21:00:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:40.417 21:00:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:40.417 21:00:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:40.417 21:00:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:40.417 21:00:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:40.417 21:00:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:19:40.417 21:00:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.417 21:00:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:40.417 [2024-11-26 21:00:31.130848] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:40.417 21:00:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.417 21:00:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:19:40.417 21:00:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.417 21:00:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:40.417 21:00:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.417 21:00:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:40.417 21:00:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.417 21:00:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:40.417 Malloc0 00:19:40.417 21:00:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.417 21:00:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:40.417 21:00:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.417 21:00:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:40.417 21:00:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.417 21:00:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:40.417 21:00:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.417 21:00:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:40.417 [2024-11-26 21:00:31.171902] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:40.417 21:00:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.417 21:00:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=4004827 00:19:40.417 21:00:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:40.417 21:00:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=4004828 00:19:40.417 21:00:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:40.417 21:00:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=4004829 00:19:40.417 21:00:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 4004827 00:19:40.417 21:00:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:40.417 [2024-11-26 21:00:31.250837] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:19:40.417 [2024-11-26 21:00:31.251164] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:40.417 [2024-11-26 21:00:31.251451] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:41.868 Initializing NVMe Controllers 00:19:41.868 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:41.868 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:19:41.868 Initialization complete. Launching workers. 00:19:41.868 ======================================================== 00:19:41.868 Latency(us) 00:19:41.868 Device Information : IOPS MiB/s Average min max 00:19:41.868 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 40877.43 40321.70 40941.44 00:19:41.868 ======================================================== 00:19:41.868 Total : 25.00 0.10 40877.43 40321.70 40941.44 00:19:41.868 00:19:41.868 Initializing NVMe Controllers 00:19:41.868 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:41.868 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:19:41.868 Initialization complete. Launching workers. 
00:19:41.868 ======================================================== 00:19:41.868 Latency(us) 00:19:41.868 Device Information : IOPS MiB/s Average min max 00:19:41.868 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40890.21 40624.81 41001.99 00:19:41.868 ======================================================== 00:19:41.868 Total : 25.00 0.10 40890.21 40624.81 41001.99 00:19:41.868 00:19:41.868 Initializing NVMe Controllers 00:19:41.868 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:41.868 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:19:41.868 Initialization complete. Launching workers. 00:19:41.868 ======================================================== 00:19:41.868 Latency(us) 00:19:41.868 Device Information : IOPS MiB/s Average min max 00:19:41.868 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40900.93 40850.30 40960.53 00:19:41.868 ======================================================== 00:19:41.868 Total : 25.00 0.10 40900.93 40850.30 40960.53 00:19:41.868 00:19:41.868 21:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 4004828 00:19:41.868 21:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 4004829 00:19:41.868 21:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:41.868 21:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:19:41.868 21:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:41.868 21:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:19:41.868 21:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:41.868 21:00:32 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:19:41.868 21:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:41.868 21:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:41.868 rmmod nvme_tcp 00:19:41.868 rmmod nvme_fabrics 00:19:41.868 rmmod nvme_keyring 00:19:41.868 21:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:41.868 21:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:19:41.868 21:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:19:41.868 21:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 4004693 ']' 00:19:41.868 21:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 4004693 00:19:41.868 21:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 4004693 ']' 00:19:41.868 21:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 4004693 00:19:41.868 21:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:19:41.868 21:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:41.868 21:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4004693 00:19:41.868 21:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:41.868 21:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:41.868 21:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 4004693' 00:19:41.868 killing process with pid 4004693 00:19:41.868 21:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 4004693 00:19:41.868 21:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 4004693 00:19:41.868 21:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:41.868 21:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:41.868 21:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:41.868 21:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:19:41.868 21:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:19:41.868 21:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:41.868 21:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:19:41.868 21:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:41.868 21:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:41.868 21:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:41.868 21:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:41.868 21:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:44.404 21:00:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:44.404 00:19:44.404 real 0m6.372s 00:19:44.404 user 0m5.942s 
00:19:44.404 sys 0m2.410s 00:19:44.404 21:00:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:44.404 21:00:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:44.404 ************************************ 00:19:44.404 END TEST nvmf_control_msg_list 00:19:44.404 ************************************ 00:19:44.404 21:00:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:44.404 21:00:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:44.404 21:00:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:44.404 21:00:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:44.404 ************************************ 00:19:44.404 START TEST nvmf_wait_for_buf 00:19:44.404 ************************************ 00:19:44.404 21:00:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:44.404 * Looking for test storage... 
00:19:44.404 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:44.404 21:00:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:44.404 21:00:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:19:44.404 21:00:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:44.404 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:44.404 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:44.404 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:44.404 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:44.404 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:19:44.404 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:19:44.404 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:19:44.404 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:19:44.404 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:19:44.404 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:19:44.404 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:19:44.404 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:44.404 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:19:44.404 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:19:44.404 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:44.405 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:44.405 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:19:44.405 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:19:44.405 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:44.405 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:19:44.405 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:44.405 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:19:44.405 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:19:44.405 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:44.405 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:19:44.405 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:44.405 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:44.405 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:44.405 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:19:44.405 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:44.405 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:19:44.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:44.405 --rc genhtml_branch_coverage=1 00:19:44.405 --rc genhtml_function_coverage=1 00:19:44.405 --rc genhtml_legend=1 00:19:44.405 --rc geninfo_all_blocks=1 00:19:44.405 --rc geninfo_unexecuted_blocks=1 00:19:44.405 00:19:44.405 ' 00:19:44.405 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:44.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:44.405 --rc genhtml_branch_coverage=1 00:19:44.405 --rc genhtml_function_coverage=1 00:19:44.405 --rc genhtml_legend=1 00:19:44.405 --rc geninfo_all_blocks=1 00:19:44.405 --rc geninfo_unexecuted_blocks=1 00:19:44.405 00:19:44.405 ' 00:19:44.405 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:44.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:44.405 --rc genhtml_branch_coverage=1 00:19:44.405 --rc genhtml_function_coverage=1 00:19:44.405 --rc genhtml_legend=1 00:19:44.405 --rc geninfo_all_blocks=1 00:19:44.405 --rc geninfo_unexecuted_blocks=1 00:19:44.405 00:19:44.405 ' 00:19:44.405 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:44.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:44.405 --rc genhtml_branch_coverage=1 00:19:44.405 --rc genhtml_function_coverage=1 00:19:44.405 --rc genhtml_legend=1 00:19:44.405 --rc geninfo_all_blocks=1 00:19:44.405 --rc geninfo_unexecuted_blocks=1 00:19:44.405 00:19:44.405 ' 00:19:44.405 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:44.405 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:19:44.405 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:19:44.405 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:44.405 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:44.405 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:44.405 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:44.405 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:44.405 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:44.405 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:44.405 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:44.405 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:44.405 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:44.405 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:44.405 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:44.405 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:44.405 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:44.405 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:44.405 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:44.405 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:19:44.405 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:44.405 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:44.405 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:44.405 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.405 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.405 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.405 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:19:44.405 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.405 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:19:44.405 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:44.405 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:44.405 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:44.405 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:19:44.405 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:44.405 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:44.405 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:44.405 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:44.405 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:44.405 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:44.405 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:19:44.405 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:44.405 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:44.405 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:44.405 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:44.406 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:44.406 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:44.406 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:44.406 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:44.406 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:44.406 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:19:44.406 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:19:44.406 21:00:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:46.308 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:46.308 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:19:46.308 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:46.308 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:46.308 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:46.308 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:46.308 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:46.308 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:19:46.308 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:46.308 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:19:46.308 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:19:46.308 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:19:46.308 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:19:46.308 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:19:46.308 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:19:46.308 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:46.308 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:46.308 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:46.308 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:46.308 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:46.308 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:46.308 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:46.308 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:46.308 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:46.308 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:46.308 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:46.308 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:46.308 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:46.308 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:46.308 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:46.308 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:19:46.308 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:46.308 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:46.308 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:46.308 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:46.308 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:46.308 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:46.308 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:46.308 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:46.308 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:46.308 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:46.308 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:46.308 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:46.308 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:46.308 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:46.308 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:46.308 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:46.308 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:46.308 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:46.308 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:46.308 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:46.308 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:46.308 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:46.308 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:46.308 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:46.308 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:46.308 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:46.308 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:46.308 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:46.308 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:46.308 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:46.308 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:46.308 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:46.308 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:46.308 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:46.308 21:00:37 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:46.308 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:46.308 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:46.308 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:46.308 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:46.308 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:46.309 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:46.309 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:46.309 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:19:46.309 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:46.309 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:46.309 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:46.309 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:46.309 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:46.309 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:46.309 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:46.309 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:46.309 21:00:37 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:46.309 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:46.309 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:46.309 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:46.309 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:46.309 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:46.309 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:46.309 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:46.309 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:46.309 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:46.309 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:46.309 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:46.309 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:46.309 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:46.309 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:46.309 21:00:37 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:46.309 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:46.309 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:46.309 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:46.309 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:19:46.309 00:19:46.309 --- 10.0.0.2 ping statistics --- 00:19:46.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:46.309 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:19:46.309 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:46.309 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:46.309 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:19:46.309 00:19:46.309 --- 10.0.0.1 ping statistics --- 00:19:46.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:46.309 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:19:46.309 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:46.309 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:19:46.309 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:46.309 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:46.309 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:46.309 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:46.309 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:46.309 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:46.309 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:46.309 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:19:46.309 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:46.309 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:46.309 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:46.309 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=4006907 00:19:46.309 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:46.309 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 4006907 00:19:46.309 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 4006907 ']' 00:19:46.309 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:46.309 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:46.309 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:46.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:46.309 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:46.309 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:46.586 [2024-11-26 21:00:37.250456] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:19:46.586 [2024-11-26 21:00:37.250541] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:46.586 [2024-11-26 21:00:37.320185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:46.586 [2024-11-26 21:00:37.376615] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:46.586 [2024-11-26 21:00:37.376680] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:46.586 [2024-11-26 21:00:37.376717] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:46.586 [2024-11-26 21:00:37.376739] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:46.586 [2024-11-26 21:00:37.376749] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:46.586 [2024-11-26 21:00:37.377335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:46.586 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:46.586 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:19:46.586 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:46.587 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:46.587 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:46.587 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:46.587 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:46.587 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:46.587 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:19:46.587 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.587 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:46.587 
21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.587 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:19:46.587 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.587 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:46.851 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.851 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:19:46.851 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.851 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:46.851 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.851 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:46.851 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.851 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:46.851 Malloc0 00:19:46.851 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.851 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:19:46.851 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.851 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:19:46.851 [2024-11-26 21:00:37.625565] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:46.851 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.851 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:19:46.851 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.851 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:46.851 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.851 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:46.851 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.851 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:46.851 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.851 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:46.851 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.851 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:46.851 [2024-11-26 21:00:37.649838] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:46.851 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:19:46.851 21:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:46.851 [2024-11-26 21:00:37.743816] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:48.228 Initializing NVMe Controllers 00:19:48.228 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:48.228 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:19:48.228 Initialization complete. Launching workers. 00:19:48.228 ======================================================== 00:19:48.228 Latency(us) 00:19:48.228 Device Information : IOPS MiB/s Average min max 00:19:48.228 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 115.00 14.38 36223.93 23980.16 71839.71 00:19:48.228 ======================================================== 00:19:48.228 Total : 115.00 14.38 36223.93 23980.16 71839.71 00:19:48.228 00:19:48.497 21:00:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:19:48.497 21:00:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:19:48.497 21:00:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.497 21:00:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:48.497 21:00:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.497 21:00:39 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1814 00:19:48.497 21:00:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1814 -eq 0 ]] 00:19:48.497 21:00:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:48.497 21:00:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:19:48.497 21:00:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:48.497 21:00:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:19:48.497 21:00:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:48.497 21:00:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:19:48.497 21:00:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:48.497 21:00:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:48.497 rmmod nvme_tcp 00:19:48.497 rmmod nvme_fabrics 00:19:48.497 rmmod nvme_keyring 00:19:48.497 21:00:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:48.497 21:00:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:19:48.497 21:00:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:19:48.497 21:00:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 4006907 ']' 00:19:48.497 21:00:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 4006907 00:19:48.497 21:00:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 4006907 ']' 00:19:48.497 21:00:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 4006907 
00:19:48.497 21:00:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:19:48.497 21:00:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:48.497 21:00:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4006907 00:19:48.497 21:00:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:48.497 21:00:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:48.497 21:00:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4006907' 00:19:48.497 killing process with pid 4006907 00:19:48.497 21:00:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 4006907 00:19:48.497 21:00:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 4006907 00:19:48.757 21:00:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:48.757 21:00:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:48.757 21:00:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:48.757 21:00:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:19:48.757 21:00:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:19:48.757 21:00:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:48.757 21:00:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:19:48.757 21:00:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:48.757 21:00:39 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:48.757 21:00:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:48.757 21:00:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:48.757 21:00:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:50.670 21:00:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:50.670 00:19:50.670 real 0m6.713s 00:19:50.670 user 0m3.251s 00:19:50.670 sys 0m1.930s 00:19:50.670 21:00:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:50.670 21:00:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:50.670 ************************************ 00:19:50.670 END TEST nvmf_wait_for_buf 00:19:50.670 ************************************ 00:19:50.970 21:00:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:19:50.970 21:00:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:19:50.970 21:00:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:19:50.970 21:00:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:19:50.970 21:00:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:19:50.970 21:00:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:52.877 21:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:52.877 21:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:19:52.877 21:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:52.877 
21:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:52.877 21:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:52.877 21:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:52.877 21:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:52.877 21:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:19:52.877 21:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:52.877 21:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:19:52.877 21:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:19:52.877 21:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:19:52.877 21:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:19:52.877 21:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:19:52.877 21:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:19:52.877 21:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:52.877 21:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:52.877 21:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:52.877 21:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:52.877 21:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:52.877 21:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:52.877 21:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:52.877 21:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:52.877 21:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:52.877 21:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:52.877 21:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:52.877 21:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:52.877 21:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:52.877 21:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:52.877 21:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:52.877 21:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:52.877 21:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:52.877 21:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:52.877 21:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:52.877 21:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:52.877 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:52.877 21:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:52.877 21:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:52.877 21:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:52.877 21:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:52.877 21:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:52.877 21:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:52.877 21:00:43 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:52.877 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:52.877 21:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:52.877 21:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:52.877 21:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:52.877 21:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:52.877 21:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:52.877 21:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:52.877 21:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:52.877 21:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:52.877 21:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:52.877 21:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:52.877 21:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:52.877 21:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:52.877 21:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:52.877 21:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:52.877 21:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:52.877 21:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:52.877 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:52.877 21:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:52.877 21:00:43 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:52.877 21:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:52.877 21:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:52.877 21:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:52.877 21:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:52.877 21:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:52.877 21:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:52.877 21:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:52.877 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:52.877 21:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:52.877 21:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:52.877 21:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:52.877 21:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:19:52.877 21:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:52.877 21:00:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:52.877 21:00:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:52.877 21:00:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:52.877 ************************************ 00:19:52.877 START TEST nvmf_perf_adq 00:19:52.877 ************************************ 00:19:52.877 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:53.137 * Looking for test storage... 00:19:53.137 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:53.137 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:53.137 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:19:53.137 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:53.137 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:53.137 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:53.137 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:53.137 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:53.137 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:19:53.137 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:19:53.137 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:19:53.137 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:19:53.137 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:19:53.137 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:19:53.137 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:19:53.137 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:53.137 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
scripts/common.sh@344 -- # case "$op" in 00:19:53.137 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:19:53.137 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:53.137 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:53.137 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:19:53.137 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:19:53.137 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:53.137 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:19:53.137 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:19:53.137 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:19:53.137 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:19:53.137 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:53.137 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:19:53.137 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:19:53.137 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:53.137 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:53.137 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:19:53.137 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:53.137 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:53.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:53.137 --rc genhtml_branch_coverage=1 00:19:53.137 --rc genhtml_function_coverage=1 00:19:53.137 --rc genhtml_legend=1 00:19:53.137 --rc geninfo_all_blocks=1 00:19:53.137 --rc geninfo_unexecuted_blocks=1 00:19:53.137 00:19:53.137 ' 00:19:53.138 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:53.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:53.138 --rc genhtml_branch_coverage=1 00:19:53.138 --rc genhtml_function_coverage=1 00:19:53.138 --rc genhtml_legend=1 00:19:53.138 --rc geninfo_all_blocks=1 00:19:53.138 --rc geninfo_unexecuted_blocks=1 00:19:53.138 00:19:53.138 ' 00:19:53.138 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:53.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:53.138 --rc genhtml_branch_coverage=1 00:19:53.138 --rc genhtml_function_coverage=1 00:19:53.138 --rc genhtml_legend=1 00:19:53.138 --rc geninfo_all_blocks=1 00:19:53.138 --rc geninfo_unexecuted_blocks=1 00:19:53.138 00:19:53.138 ' 00:19:53.138 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:53.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:53.138 --rc genhtml_branch_coverage=1 00:19:53.138 --rc genhtml_function_coverage=1 00:19:53.138 --rc genhtml_legend=1 00:19:53.138 --rc geninfo_all_blocks=1 00:19:53.138 --rc geninfo_unexecuted_blocks=1 00:19:53.138 00:19:53.138 ' 00:19:53.138 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:53.138 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:19:53.138 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:19:53.138 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:53.138 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:53.138 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:53.138 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:53.138 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:53.138 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:53.138 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:53.138 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:53.138 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:53.138 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:53.138 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:53.138 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:53.138 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:53.138 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:53.138 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:53.138 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:19:53.138 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:19:53.138 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:53.138 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:53.138 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:53.138 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.138 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.138 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.138 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:19:53.138 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.138 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:19:53.138 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:53.138 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:53.138 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:53.138 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:53.138 21:00:43 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:53.138 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:53.138 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:53.138 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:53.138 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:53.138 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:53.138 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:19:53.138 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:19:53.138 21:00:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:55.675 21:00:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:55.675 21:00:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:19:55.675 21:00:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:55.675 21:00:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:55.675 21:00:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:55.675 21:00:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:55.675 21:00:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:55.675 21:00:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:19:55.675 21:00:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:55.675 21:00:46 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:19:55.675 21:00:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:19:55.675 21:00:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:19:55.675 21:00:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:19:55.675 21:00:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:19:55.675 21:00:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:19:55.675 21:00:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:55.675 21:00:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:55.675 21:00:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:55.675 21:00:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:55.676 21:00:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:55.676 21:00:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:55.676 21:00:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:55.676 21:00:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:55.676 21:00:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:55.676 21:00:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:55.676 21:00:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:55.676 21:00:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:55.676 21:00:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:55.676 21:00:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:55.676 21:00:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:55.676 21:00:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:55.676 21:00:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:55.676 21:00:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:55.676 21:00:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:55.676 21:00:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:55.676 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:55.676 21:00:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:55.676 21:00:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:55.676 21:00:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:55.676 21:00:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:55.676 21:00:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:55.676 21:00:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:55.676 21:00:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:55.676 
Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:55.676 21:00:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:55.676 21:00:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:55.676 21:00:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:55.676 21:00:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:55.676 21:00:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:55.676 21:00:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:55.676 21:00:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:55.676 21:00:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:55.676 21:00:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:55.676 21:00:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:55.676 21:00:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:55.676 21:00:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:55.676 21:00:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:55.676 21:00:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:55.676 21:00:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:55.676 21:00:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:55.676 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:55.676 21:00:46 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:55.676 21:00:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:55.676 21:00:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:55.676 21:00:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:55.676 21:00:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:55.676 21:00:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:55.676 21:00:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:55.676 21:00:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:55.676 21:00:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:55.676 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:55.676 21:00:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:55.676 21:00:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:55.676 21:00:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:55.676 21:00:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:19:55.676 21:00:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:55.676 21:00:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:19:55.676 21:00:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 
00:19:55.676 21:00:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:19:55.936 21:00:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:19:58.474 21:00:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:20:03.757 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:20:03.757 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:03.757 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:03.757 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:03.757 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:03.757 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:03.757 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:03.757 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:03.757 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:03.757 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:03.757 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:03.757 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:03.757 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:03.757 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:03.757 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@315 -- # pci_devs=() 00:20:03.757 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:03.757 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:03.757 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:03.757 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:03.757 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:03.758 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:03.758 21:00:54 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:03.758 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:03.758 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:03.758 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:03.758 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:03.758 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.324 ms 00:20:03.758 00:20:03.758 --- 10.0.0.2 ping statistics --- 00:20:03.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:03.758 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:20:03.758 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:03.758 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:03.758 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:20:03.758 00:20:03.759 --- 10.0.0.1 ping statistics --- 00:20:03.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:03.759 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:20:03.759 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:03.759 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:20:03.759 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:03.759 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:03.759 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:03.759 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:03.759 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:03.759 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:03.759 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:03.759 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:03.759 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:20:03.759 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:03.759 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:03.759 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=4011755 00:20:03.759 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:03.759 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 4011755 00:20:03.759 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 4011755 ']' 00:20:03.759 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:03.759 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:03.759 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:03.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:03.759 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:03.759 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:03.759 [2024-11-26 21:00:54.277229] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:20:03.759 [2024-11-26 21:00:54.277302] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:03.759 [2024-11-26 21:00:54.351942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:03.759 [2024-11-26 21:00:54.410183] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:03.759 [2024-11-26 21:00:54.410251] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:03.759 [2024-11-26 21:00:54.410274] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:03.759 [2024-11-26 21:00:54.410285] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:03.759 [2024-11-26 21:00:54.410295] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:03.759 [2024-11-26 21:00:54.411907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:03.759 [2024-11-26 21:00:54.412002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:03.759 [2024-11-26 21:00:54.411933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:03.759 [2024-11-26 21:00:54.412006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:03.759 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:03.759 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:20:03.759 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:03.759 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:03.759 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:03.759 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:03.759 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:20:03.759 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:03.759 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:03.759 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.759 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:03.759 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.759 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:03.759 21:00:54 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:20:03.759 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.759 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:03.759 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.759 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:03.759 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.759 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:03.759 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.759 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:20:03.759 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.759 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:03.759 [2024-11-26 21:00:54.658208] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:03.759 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.759 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:03.759 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.759 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:04.018 Malloc1 00:20:04.018 21:00:54 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.018 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:04.018 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.018 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:04.018 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.018 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:04.018 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.018 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:04.018 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.018 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:04.018 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.018 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:04.018 [2024-11-26 21:00:54.721399] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:04.018 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.018 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=4011788 00:20:04.018 21:00:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:20:04.018 21:00:54 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:05.922 21:00:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:20:05.922 21:00:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.922 21:00:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:05.922 21:00:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.922 21:00:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:20:05.922 "tick_rate": 2700000000, 00:20:05.922 "poll_groups": [ 00:20:05.922 { 00:20:05.922 "name": "nvmf_tgt_poll_group_000", 00:20:05.922 "admin_qpairs": 1, 00:20:05.922 "io_qpairs": 1, 00:20:05.922 "current_admin_qpairs": 1, 00:20:05.922 "current_io_qpairs": 1, 00:20:05.922 "pending_bdev_io": 0, 00:20:05.922 "completed_nvme_io": 20195, 00:20:05.922 "transports": [ 00:20:05.922 { 00:20:05.922 "trtype": "TCP" 00:20:05.922 } 00:20:05.922 ] 00:20:05.922 }, 00:20:05.922 { 00:20:05.922 "name": "nvmf_tgt_poll_group_001", 00:20:05.922 "admin_qpairs": 0, 00:20:05.922 "io_qpairs": 1, 00:20:05.922 "current_admin_qpairs": 0, 00:20:05.922 "current_io_qpairs": 1, 00:20:05.922 "pending_bdev_io": 0, 00:20:05.922 "completed_nvme_io": 19386, 00:20:05.922 "transports": [ 00:20:05.922 { 00:20:05.922 "trtype": "TCP" 00:20:05.922 } 00:20:05.922 ] 00:20:05.922 }, 00:20:05.922 { 00:20:05.922 "name": "nvmf_tgt_poll_group_002", 00:20:05.922 "admin_qpairs": 0, 00:20:05.922 "io_qpairs": 1, 00:20:05.922 "current_admin_qpairs": 0, 00:20:05.922 "current_io_qpairs": 1, 00:20:05.922 "pending_bdev_io": 0, 00:20:05.922 "completed_nvme_io": 17921, 00:20:05.922 
"transports": [ 00:20:05.922 { 00:20:05.922 "trtype": "TCP" 00:20:05.922 } 00:20:05.922 ] 00:20:05.922 }, 00:20:05.922 { 00:20:05.922 "name": "nvmf_tgt_poll_group_003", 00:20:05.922 "admin_qpairs": 0, 00:20:05.922 "io_qpairs": 1, 00:20:05.922 "current_admin_qpairs": 0, 00:20:05.922 "current_io_qpairs": 1, 00:20:05.922 "pending_bdev_io": 0, 00:20:05.922 "completed_nvme_io": 19993, 00:20:05.922 "transports": [ 00:20:05.922 { 00:20:05.922 "trtype": "TCP" 00:20:05.922 } 00:20:05.922 ] 00:20:05.922 } 00:20:05.922 ] 00:20:05.922 }' 00:20:05.922 21:00:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:20:05.922 21:00:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:20:05.922 21:00:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:20:05.922 21:00:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:20:05.922 21:00:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 4011788 00:20:14.036 Initializing NVMe Controllers 00:20:14.036 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:14.036 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:14.036 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:14.036 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:14.036 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:14.036 Initialization complete. Launching workers. 
00:20:14.036 ======================================================== 00:20:14.036 Latency(us) 00:20:14.036 Device Information : IOPS MiB/s Average min max 00:20:14.036 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10496.90 41.00 6098.69 2552.49 9057.86 00:20:14.036 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10258.40 40.07 6240.60 2453.56 9871.84 00:20:14.036 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 9479.50 37.03 6753.60 2706.64 11193.11 00:20:14.036 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10647.80 41.59 6010.48 2617.58 8775.13 00:20:14.036 ======================================================== 00:20:14.036 Total : 40882.60 159.70 6263.18 2453.56 11193.11 00:20:14.036 00:20:14.036 21:01:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:20:14.036 21:01:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:14.036 21:01:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:20:14.036 21:01:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:14.036 21:01:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:20:14.036 21:01:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:14.036 21:01:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:14.036 rmmod nvme_tcp 00:20:14.036 rmmod nvme_fabrics 00:20:14.036 rmmod nvme_keyring 00:20:14.036 21:01:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:14.036 21:01:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:20:14.036 21:01:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:20:14.036 21:01:04 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 4011755 ']' 00:20:14.036 21:01:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 4011755 00:20:14.036 21:01:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 4011755 ']' 00:20:14.036 21:01:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 4011755 00:20:14.036 21:01:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:20:14.036 21:01:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:14.036 21:01:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4011755 00:20:14.294 21:01:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:14.294 21:01:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:14.294 21:01:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4011755' 00:20:14.294 killing process with pid 4011755 00:20:14.294 21:01:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 4011755 00:20:14.294 21:01:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 4011755 00:20:14.553 21:01:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:14.553 21:01:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:14.553 21:01:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:14.553 21:01:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:20:14.553 21:01:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:20:14.553 
21:01:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:14.553 21:01:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:20:14.553 21:01:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:14.553 21:01:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:14.553 21:01:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:14.553 21:01:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:14.553 21:01:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:16.455 21:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:16.455 21:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:20:16.455 21:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:20:16.455 21:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:20:17.390 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:20:20.033 21:01:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:20:25.310 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:20:25.310 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:25.310 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:25.310 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:25.310 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:25.310 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:25.310 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:25.310 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:25.310 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:25.310 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:25.310 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:25.310 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:25.310 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:25.310 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:25.310 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:20:25.310 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:25.310 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:25.310 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:25.310 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:25.310 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:25.310 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:25.310 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:25.310 21:01:15 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:25.310 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:25.310 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:25.310 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:25.310 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:25.310 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:25.310 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:25.310 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:25.310 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:25.310 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:25.310 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:25.310 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:25.310 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:25.310 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:25.310 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:25.310 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:25.310 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:25.310 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:25.311 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:25.311 
Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:25.311 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:25.311 21:01:15 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:25.311 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 
up 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:25.311 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:25.311 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.321 ms 00:20:25.311 00:20:25.311 --- 10.0.0.2 ping statistics --- 00:20:25.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:25.311 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:25.311 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:25.311 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:20:25.311 00:20:25.311 --- 10.0.0.1 ping statistics --- 00:20:25.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:25.311 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:20:25.311 net.core.busy_poll = 1 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:20:25.311 net.core.busy_read = 1 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=4014524 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:25.311 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 
4014524 00:20:25.312 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 4014524 ']' 00:20:25.312 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:25.312 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:25.312 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:25.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:25.312 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:25.312 21:01:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:25.312 [2024-11-26 21:01:15.833392] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:20:25.312 [2024-11-26 21:01:15.833474] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:25.312 [2024-11-26 21:01:15.910835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:25.312 [2024-11-26 21:01:15.973601] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:25.312 [2024-11-26 21:01:15.973678] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:25.312 [2024-11-26 21:01:15.973703] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:25.312 [2024-11-26 21:01:15.973717] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:20:25.312 [2024-11-26 21:01:15.973729] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:25.312 [2024-11-26 21:01:15.975390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:25.312 [2024-11-26 21:01:15.975459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:25.312 [2024-11-26 21:01:15.975549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:25.312 [2024-11-26 21:01:15.975552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:25.312 21:01:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:25.312 21:01:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:20:25.312 21:01:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:25.312 21:01:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:25.312 21:01:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:25.312 21:01:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:25.312 21:01:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:20:25.312 21:01:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:25.312 21:01:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:25.312 21:01:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.312 21:01:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:25.312 21:01:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:20:25.312 21:01:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:25.312 21:01:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:20:25.312 21:01:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.312 21:01:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:25.312 21:01:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.312 21:01:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:25.312 21:01:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.312 21:01:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:25.312 21:01:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.312 21:01:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:20:25.312 21:01:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.312 21:01:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:25.312 [2024-11-26 21:01:16.238875] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:25.570 21:01:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.570 21:01:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:25.570 21:01:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.570 21:01:16 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:25.570 Malloc1 00:20:25.570 21:01:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.570 21:01:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:25.570 21:01:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.570 21:01:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:25.570 21:01:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.570 21:01:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:25.570 21:01:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.570 21:01:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:25.570 21:01:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.570 21:01:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:25.570 21:01:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.570 21:01:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:25.570 [2024-11-26 21:01:16.301878] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:25.570 21:01:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.570 21:01:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=4014560 
00:20:25.570 21:01:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:20:25.570 21:01:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:27.472 21:01:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:20:27.472 21:01:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.472 21:01:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:27.472 21:01:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.472 21:01:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:20:27.472 "tick_rate": 2700000000, 00:20:27.472 "poll_groups": [ 00:20:27.472 { 00:20:27.472 "name": "nvmf_tgt_poll_group_000", 00:20:27.472 "admin_qpairs": 1, 00:20:27.472 "io_qpairs": 3, 00:20:27.472 "current_admin_qpairs": 1, 00:20:27.472 "current_io_qpairs": 3, 00:20:27.472 "pending_bdev_io": 0, 00:20:27.472 "completed_nvme_io": 26559, 00:20:27.472 "transports": [ 00:20:27.472 { 00:20:27.472 "trtype": "TCP" 00:20:27.472 } 00:20:27.472 ] 00:20:27.472 }, 00:20:27.472 { 00:20:27.472 "name": "nvmf_tgt_poll_group_001", 00:20:27.472 "admin_qpairs": 0, 00:20:27.472 "io_qpairs": 1, 00:20:27.472 "current_admin_qpairs": 0, 00:20:27.472 "current_io_qpairs": 1, 00:20:27.472 "pending_bdev_io": 0, 00:20:27.472 "completed_nvme_io": 24271, 00:20:27.472 "transports": [ 00:20:27.472 { 00:20:27.472 "trtype": "TCP" 00:20:27.472 } 00:20:27.472 ] 00:20:27.472 }, 00:20:27.472 { 00:20:27.472 "name": "nvmf_tgt_poll_group_002", 00:20:27.472 "admin_qpairs": 0, 00:20:27.472 "io_qpairs": 0, 00:20:27.472 "current_admin_qpairs": 0, 
00:20:27.472 "current_io_qpairs": 0, 00:20:27.472 "pending_bdev_io": 0, 00:20:27.472 "completed_nvme_io": 0, 00:20:27.472 "transports": [ 00:20:27.473 { 00:20:27.473 "trtype": "TCP" 00:20:27.473 } 00:20:27.473 ] 00:20:27.473 }, 00:20:27.473 { 00:20:27.473 "name": "nvmf_tgt_poll_group_003", 00:20:27.473 "admin_qpairs": 0, 00:20:27.473 "io_qpairs": 0, 00:20:27.473 "current_admin_qpairs": 0, 00:20:27.473 "current_io_qpairs": 0, 00:20:27.473 "pending_bdev_io": 0, 00:20:27.473 "completed_nvme_io": 0, 00:20:27.473 "transports": [ 00:20:27.473 { 00:20:27.473 "trtype": "TCP" 00:20:27.473 } 00:20:27.473 ] 00:20:27.473 } 00:20:27.473 ] 00:20:27.473 }' 00:20:27.473 21:01:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:20:27.473 21:01:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:20:27.473 21:01:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:20:27.473 21:01:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:20:27.473 21:01:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 4014560 00:20:35.584 Initializing NVMe Controllers 00:20:35.585 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:35.585 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:35.585 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:35.585 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:35.585 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:35.585 Initialization complete. Launching workers. 
00:20:35.585 ======================================================== 00:20:35.585 Latency(us) 00:20:35.585 Device Information : IOPS MiB/s Average min max 00:20:35.585 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4620.90 18.05 13879.48 1842.28 62618.96 00:20:35.585 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 12822.10 50.09 4991.32 1780.05 7506.08 00:20:35.585 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4454.60 17.40 14372.35 1770.45 62717.84 00:20:35.585 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4962.00 19.38 12901.83 2047.83 61006.88 00:20:35.585 ======================================================== 00:20:35.585 Total : 26859.60 104.92 9537.63 1770.45 62717.84 00:20:35.585 00:20:35.585 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:20:35.585 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:35.585 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:20:35.585 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:35.585 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:20:35.585 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:35.585 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:35.585 rmmod nvme_tcp 00:20:35.585 rmmod nvme_fabrics 00:20:35.842 rmmod nvme_keyring 00:20:35.842 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:35.842 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:20:35.842 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:20:35.842 21:01:26 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 4014524 ']' 00:20:35.842 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 4014524 00:20:35.842 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 4014524 ']' 00:20:35.842 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 4014524 00:20:35.842 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:20:35.842 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:35.842 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4014524 00:20:35.842 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:35.842 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:35.842 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4014524' 00:20:35.842 killing process with pid 4014524 00:20:35.842 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 4014524 00:20:35.842 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 4014524 00:20:36.099 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:36.099 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:36.099 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:36.099 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:20:36.099 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:20:36.100 
21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:36.100 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:20:36.100 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:36.100 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:36.100 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:36.100 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:36.100 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:39.387 21:01:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:39.388 21:01:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:20:39.388 00:20:39.388 real 0m46.105s 00:20:39.388 user 2m37.728s 00:20:39.388 sys 0m10.610s 00:20:39.388 21:01:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:39.388 21:01:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:39.388 ************************************ 00:20:39.388 END TEST nvmf_perf_adq 00:20:39.388 ************************************ 00:20:39.388 21:01:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:39.388 21:01:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:39.388 21:01:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:39.388 21:01:29 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:20:39.388 ************************************ 00:20:39.388 START TEST nvmf_shutdown 00:20:39.388 ************************************ 00:20:39.388 21:01:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:39.388 * Looking for test storage... 00:20:39.388 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:39.388 21:01:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:39.388 21:01:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:20:39.388 21:01:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:39.388 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:39.388 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:39.388 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:39.388 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:39.388 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:20:39.388 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:20:39.388 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:20:39.388 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:20:39.388 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:20:39.388 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:20:39.388 21:01:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:20:39.388 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:39.388 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:20:39.388 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:20:39.388 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:39.388 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:39.388 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:20:39.388 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:20:39.388 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:39.388 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:20:39.388 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:20:39.388 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:20:39.388 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:20:39.388 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:39.388 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:20:39.388 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:20:39.388 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:39.388 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:39.388 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:20:39.388 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:39.388 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:39.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.388 --rc genhtml_branch_coverage=1 00:20:39.388 --rc genhtml_function_coverage=1 00:20:39.388 --rc genhtml_legend=1 00:20:39.388 --rc geninfo_all_blocks=1 00:20:39.388 --rc geninfo_unexecuted_blocks=1 00:20:39.388 00:20:39.388 ' 00:20:39.388 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:39.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.388 --rc genhtml_branch_coverage=1 00:20:39.388 --rc genhtml_function_coverage=1 00:20:39.388 --rc genhtml_legend=1 00:20:39.388 --rc geninfo_all_blocks=1 00:20:39.388 --rc geninfo_unexecuted_blocks=1 00:20:39.388 00:20:39.388 ' 00:20:39.388 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:39.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.388 --rc genhtml_branch_coverage=1 00:20:39.388 --rc genhtml_function_coverage=1 00:20:39.388 --rc genhtml_legend=1 00:20:39.388 --rc geninfo_all_blocks=1 00:20:39.388 --rc geninfo_unexecuted_blocks=1 00:20:39.388 00:20:39.388 ' 00:20:39.388 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:39.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.388 --rc genhtml_branch_coverage=1 00:20:39.388 --rc genhtml_function_coverage=1 00:20:39.388 --rc genhtml_legend=1 00:20:39.388 --rc geninfo_all_blocks=1 00:20:39.388 --rc geninfo_unexecuted_blocks=1 00:20:39.388 00:20:39.388 ' 00:20:39.388 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:39.388 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:20:39.388 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:39.388 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:39.388 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:39.388 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:39.388 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:39.388 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:39.388 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:39.388 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:39.388 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:39.388 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:39.388 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:39.388 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:39.388 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:39.388 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:39.388 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:20:39.388 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:39.388 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:39.388 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:20:39.388 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:39.388 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:39.388 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:39.388 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.388 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.388 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.388 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:20:39.389 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.389 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:20:39.389 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:39.389 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:39.389 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:39.389 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:39.389 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:39.389 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:39.389 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:39.389 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:39.389 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:39.389 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:39.389 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:39.389 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:39.389 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:20:39.389 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:39.389 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:39.389 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:39.389 ************************************ 00:20:39.389 START TEST nvmf_shutdown_tc1 00:20:39.389 ************************************ 00:20:39.389 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:20:39.389 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:20:39.389 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:39.389 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:39.389 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:39.389 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:39.389 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:39.389 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:39.389 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:39.389 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:20:39.389 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:39.389 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:39.389 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:39.389 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:39.389 21:01:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:20:41.296 21:01:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:41.296 21:01:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:41.296 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:41.296 21:01:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:41.296 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:41.296 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:41.296 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:41.296 21:01:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:41.296 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:41.555 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:41.555 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:41.555 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:41.555 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:41.555 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:41.555 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:20:41.555 00:20:41.555 --- 10.0.0.2 ping statistics --- 00:20:41.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:41.555 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:20:41.555 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:41.555 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:41.555 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:20:41.555 00:20:41.555 --- 10.0.0.1 ping statistics --- 00:20:41.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:41.555 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:20:41.555 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:41.555 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:20:41.555 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:41.555 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:41.555 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:41.555 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:41.555 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:41.555 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:41.555 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:41.555 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:41.555 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:41.555 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:41.555 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:41.555 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=4017855 00:20:41.555 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:41.555 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 4017855 00:20:41.555 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 4017855 ']' 00:20:41.555 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:41.556 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:41.556 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:41.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:41.556 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:41.556 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:41.556 [2024-11-26 21:01:32.344071] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:20:41.556 [2024-11-26 21:01:32.344142] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:41.556 [2024-11-26 21:01:32.424511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:41.556 [2024-11-26 21:01:32.490681] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:41.556 [2024-11-26 21:01:32.490770] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:41.556 [2024-11-26 21:01:32.490790] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:41.556 [2024-11-26 21:01:32.490803] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:41.556 [2024-11-26 21:01:32.490813] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:41.556 [2024-11-26 21:01:32.492621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:41.556 [2024-11-26 21:01:32.492676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:41.556 [2024-11-26 21:01:32.492712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:41.556 [2024-11-26 21:01:32.492715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:41.814 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:41.814 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:20:41.814 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:41.814 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:41.814 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:41.814 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:41.814 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:41.814 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.814 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:41.814 [2024-11-26 21:01:32.651711] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:41.814 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.814 21:01:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:41.814 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:41.815 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:41.815 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:41.815 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:41.815 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:41.815 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:41.815 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:41.815 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:41.815 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:41.815 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:41.815 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:41.815 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:41.815 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:41.815 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:20:41.815 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:41.815 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:41.815 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:41.815 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:41.815 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:41.815 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:41.815 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:41.815 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:41.815 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:41.815 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:41.815 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:41.815 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.815 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:41.815 Malloc1 00:20:41.815 [2024-11-26 21:01:32.744353] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:42.073 Malloc2 00:20:42.073 Malloc3 00:20:42.073 Malloc4 00:20:42.073 Malloc5 00:20:42.073 Malloc6 00:20:42.332 Malloc7 00:20:42.332 Malloc8 00:20:42.332 Malloc9 
00:20:42.332 Malloc10 00:20:42.332 21:01:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.332 21:01:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:42.332 21:01:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:42.332 21:01:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:42.332 21:01:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=4018035 00:20:42.332 21:01:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 4018035 /var/tmp/bdevperf.sock 00:20:42.332 21:01:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 4018035 ']' 00:20:42.332 21:01:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:20:42.332 21:01:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:42.332 21:01:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:42.332 21:01:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:42.332 21:01:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:20:42.332 21:01:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:20:42.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:42.332 21:01:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:20:42.332 21:01:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:42.332 21:01:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:42.332 21:01:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:42.332 21:01:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:42.332 { 00:20:42.332 "params": { 00:20:42.332 "name": "Nvme$subsystem", 00:20:42.332 "trtype": "$TEST_TRANSPORT", 00:20:42.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:42.332 "adrfam": "ipv4", 00:20:42.332 "trsvcid": "$NVMF_PORT", 00:20:42.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:42.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:42.332 "hdgst": ${hdgst:-false}, 00:20:42.332 "ddgst": ${ddgst:-false} 00:20:42.332 }, 00:20:42.332 "method": "bdev_nvme_attach_controller" 00:20:42.332 } 00:20:42.332 EOF 00:20:42.332 )") 00:20:42.332 21:01:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:42.332 21:01:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:42.332 21:01:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:42.332 { 00:20:42.332 "params": { 00:20:42.332 "name": "Nvme$subsystem", 00:20:42.332 "trtype": "$TEST_TRANSPORT", 00:20:42.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:42.332 "adrfam": "ipv4", 00:20:42.332 "trsvcid": "$NVMF_PORT", 00:20:42.332 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:20:42.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:42.332 "hdgst": ${hdgst:-false}, 00:20:42.332 "ddgst": ${ddgst:-false} 00:20:42.332 }, 00:20:42.332 "method": "bdev_nvme_attach_controller" 00:20:42.332 } 00:20:42.332 EOF 00:20:42.332 )") 00:20:42.332 21:01:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:42.332 21:01:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:42.332 21:01:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:42.332 { 00:20:42.332 "params": { 00:20:42.332 "name": "Nvme$subsystem", 00:20:42.332 "trtype": "$TEST_TRANSPORT", 00:20:42.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:42.332 "adrfam": "ipv4", 00:20:42.332 "trsvcid": "$NVMF_PORT", 00:20:42.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:42.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:42.332 "hdgst": ${hdgst:-false}, 00:20:42.332 "ddgst": ${ddgst:-false} 00:20:42.332 }, 00:20:42.332 "method": "bdev_nvme_attach_controller" 00:20:42.332 } 00:20:42.332 EOF 00:20:42.332 )") 00:20:42.332 21:01:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:42.332 21:01:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:42.332 21:01:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:42.332 { 00:20:42.332 "params": { 00:20:42.332 "name": "Nvme$subsystem", 00:20:42.332 "trtype": "$TEST_TRANSPORT", 00:20:42.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:42.332 "adrfam": "ipv4", 00:20:42.332 "trsvcid": "$NVMF_PORT", 00:20:42.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:42.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:42.332 "hdgst": 
${hdgst:-false}, 00:20:42.332 "ddgst": ${ddgst:-false} 00:20:42.332 }, 00:20:42.332 "method": "bdev_nvme_attach_controller" 00:20:42.332 } 00:20:42.332 EOF 00:20:42.332 )") 00:20:42.332 21:01:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:42.332 21:01:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:42.332 21:01:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:42.332 { 00:20:42.332 "params": { 00:20:42.332 "name": "Nvme$subsystem", 00:20:42.332 "trtype": "$TEST_TRANSPORT", 00:20:42.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:42.332 "adrfam": "ipv4", 00:20:42.332 "trsvcid": "$NVMF_PORT", 00:20:42.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:42.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:42.332 "hdgst": ${hdgst:-false}, 00:20:42.332 "ddgst": ${ddgst:-false} 00:20:42.332 }, 00:20:42.332 "method": "bdev_nvme_attach_controller" 00:20:42.332 } 00:20:42.332 EOF 00:20:42.332 )") 00:20:42.333 21:01:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:42.333 21:01:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:42.333 21:01:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:42.333 { 00:20:42.333 "params": { 00:20:42.333 "name": "Nvme$subsystem", 00:20:42.333 "trtype": "$TEST_TRANSPORT", 00:20:42.333 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:42.333 "adrfam": "ipv4", 00:20:42.333 "trsvcid": "$NVMF_PORT", 00:20:42.333 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:42.333 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:42.333 "hdgst": ${hdgst:-false}, 00:20:42.333 "ddgst": ${ddgst:-false} 00:20:42.333 }, 00:20:42.333 "method": "bdev_nvme_attach_controller" 
00:20:42.333 } 00:20:42.333 EOF 00:20:42.333 )") 00:20:42.333 21:01:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:42.333 21:01:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:42.333 21:01:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:42.333 { 00:20:42.333 "params": { 00:20:42.333 "name": "Nvme$subsystem", 00:20:42.333 "trtype": "$TEST_TRANSPORT", 00:20:42.333 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:42.333 "adrfam": "ipv4", 00:20:42.333 "trsvcid": "$NVMF_PORT", 00:20:42.333 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:42.333 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:42.333 "hdgst": ${hdgst:-false}, 00:20:42.333 "ddgst": ${ddgst:-false} 00:20:42.333 }, 00:20:42.333 "method": "bdev_nvme_attach_controller" 00:20:42.333 } 00:20:42.333 EOF 00:20:42.333 )") 00:20:42.333 21:01:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:42.333 21:01:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:42.333 21:01:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:42.333 { 00:20:42.333 "params": { 00:20:42.333 "name": "Nvme$subsystem", 00:20:42.333 "trtype": "$TEST_TRANSPORT", 00:20:42.333 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:42.333 "adrfam": "ipv4", 00:20:42.333 "trsvcid": "$NVMF_PORT", 00:20:42.333 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:42.333 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:42.333 "hdgst": ${hdgst:-false}, 00:20:42.333 "ddgst": ${ddgst:-false} 00:20:42.333 }, 00:20:42.333 "method": "bdev_nvme_attach_controller" 00:20:42.333 } 00:20:42.333 EOF 00:20:42.333 )") 00:20:42.333 21:01:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@582 -- # cat 00:20:42.333 21:01:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:42.333 21:01:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:42.333 { 00:20:42.333 "params": { 00:20:42.333 "name": "Nvme$subsystem", 00:20:42.333 "trtype": "$TEST_TRANSPORT", 00:20:42.333 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:42.333 "adrfam": "ipv4", 00:20:42.333 "trsvcid": "$NVMF_PORT", 00:20:42.333 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:42.333 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:42.333 "hdgst": ${hdgst:-false}, 00:20:42.333 "ddgst": ${ddgst:-false} 00:20:42.333 }, 00:20:42.333 "method": "bdev_nvme_attach_controller" 00:20:42.333 } 00:20:42.333 EOF 00:20:42.333 )") 00:20:42.333 21:01:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:42.333 21:01:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:42.333 21:01:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:42.333 { 00:20:42.333 "params": { 00:20:42.333 "name": "Nvme$subsystem", 00:20:42.333 "trtype": "$TEST_TRANSPORT", 00:20:42.333 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:42.333 "adrfam": "ipv4", 00:20:42.333 "trsvcid": "$NVMF_PORT", 00:20:42.333 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:42.333 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:42.333 "hdgst": ${hdgst:-false}, 00:20:42.333 "ddgst": ${ddgst:-false} 00:20:42.333 }, 00:20:42.333 "method": "bdev_nvme_attach_controller" 00:20:42.333 } 00:20:42.333 EOF 00:20:42.333 )") 00:20:42.333 21:01:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:42.333 21:01:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@584 -- # jq . 00:20:42.333 21:01:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:20:42.333 21:01:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:42.333 "params": { 00:20:42.333 "name": "Nvme1", 00:20:42.333 "trtype": "tcp", 00:20:42.333 "traddr": "10.0.0.2", 00:20:42.333 "adrfam": "ipv4", 00:20:42.333 "trsvcid": "4420", 00:20:42.333 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:42.333 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:42.333 "hdgst": false, 00:20:42.333 "ddgst": false 00:20:42.333 }, 00:20:42.333 "method": "bdev_nvme_attach_controller" 00:20:42.333 },{ 00:20:42.333 "params": { 00:20:42.333 "name": "Nvme2", 00:20:42.333 "trtype": "tcp", 00:20:42.333 "traddr": "10.0.0.2", 00:20:42.333 "adrfam": "ipv4", 00:20:42.333 "trsvcid": "4420", 00:20:42.333 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:42.333 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:42.333 "hdgst": false, 00:20:42.333 "ddgst": false 00:20:42.333 }, 00:20:42.333 "method": "bdev_nvme_attach_controller" 00:20:42.333 },{ 00:20:42.333 "params": { 00:20:42.333 "name": "Nvme3", 00:20:42.333 "trtype": "tcp", 00:20:42.333 "traddr": "10.0.0.2", 00:20:42.333 "adrfam": "ipv4", 00:20:42.333 "trsvcid": "4420", 00:20:42.333 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:42.333 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:42.333 "hdgst": false, 00:20:42.333 "ddgst": false 00:20:42.333 }, 00:20:42.333 "method": "bdev_nvme_attach_controller" 00:20:42.333 },{ 00:20:42.333 "params": { 00:20:42.333 "name": "Nvme4", 00:20:42.333 "trtype": "tcp", 00:20:42.333 "traddr": "10.0.0.2", 00:20:42.333 "adrfam": "ipv4", 00:20:42.333 "trsvcid": "4420", 00:20:42.333 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:42.333 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:42.333 "hdgst": false, 00:20:42.333 "ddgst": false 00:20:42.333 }, 00:20:42.333 "method": "bdev_nvme_attach_controller" 00:20:42.333 },{ 
00:20:42.333 "params": { 00:20:42.333 "name": "Nvme5", 00:20:42.333 "trtype": "tcp", 00:20:42.333 "traddr": "10.0.0.2", 00:20:42.333 "adrfam": "ipv4", 00:20:42.333 "trsvcid": "4420", 00:20:42.333 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:42.333 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:42.333 "hdgst": false, 00:20:42.333 "ddgst": false 00:20:42.333 }, 00:20:42.333 "method": "bdev_nvme_attach_controller" 00:20:42.333 },{ 00:20:42.333 "params": { 00:20:42.333 "name": "Nvme6", 00:20:42.333 "trtype": "tcp", 00:20:42.333 "traddr": "10.0.0.2", 00:20:42.333 "adrfam": "ipv4", 00:20:42.333 "trsvcid": "4420", 00:20:42.333 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:42.333 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:42.333 "hdgst": false, 00:20:42.333 "ddgst": false 00:20:42.333 }, 00:20:42.333 "method": "bdev_nvme_attach_controller" 00:20:42.333 },{ 00:20:42.333 "params": { 00:20:42.333 "name": "Nvme7", 00:20:42.333 "trtype": "tcp", 00:20:42.333 "traddr": "10.0.0.2", 00:20:42.333 "adrfam": "ipv4", 00:20:42.333 "trsvcid": "4420", 00:20:42.333 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:42.333 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:42.333 "hdgst": false, 00:20:42.333 "ddgst": false 00:20:42.333 }, 00:20:42.333 "method": "bdev_nvme_attach_controller" 00:20:42.333 },{ 00:20:42.333 "params": { 00:20:42.333 "name": "Nvme8", 00:20:42.333 "trtype": "tcp", 00:20:42.333 "traddr": "10.0.0.2", 00:20:42.333 "adrfam": "ipv4", 00:20:42.333 "trsvcid": "4420", 00:20:42.333 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:42.333 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:42.333 "hdgst": false, 00:20:42.333 "ddgst": false 00:20:42.333 }, 00:20:42.333 "method": "bdev_nvme_attach_controller" 00:20:42.333 },{ 00:20:42.333 "params": { 00:20:42.333 "name": "Nvme9", 00:20:42.333 "trtype": "tcp", 00:20:42.333 "traddr": "10.0.0.2", 00:20:42.333 "adrfam": "ipv4", 00:20:42.333 "trsvcid": "4420", 00:20:42.333 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:42.333 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:20:42.333 "hdgst": false, 00:20:42.333 "ddgst": false 00:20:42.333 }, 00:20:42.333 "method": "bdev_nvme_attach_controller" 00:20:42.333 },{ 00:20:42.333 "params": { 00:20:42.333 "name": "Nvme10", 00:20:42.333 "trtype": "tcp", 00:20:42.333 "traddr": "10.0.0.2", 00:20:42.333 "adrfam": "ipv4", 00:20:42.333 "trsvcid": "4420", 00:20:42.333 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:42.333 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:42.333 "hdgst": false, 00:20:42.333 "ddgst": false 00:20:42.333 }, 00:20:42.333 "method": "bdev_nvme_attach_controller" 00:20:42.333 }' 00:20:42.333 [2024-11-26 21:01:33.263237] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:20:42.334 [2024-11-26 21:01:33.263309] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:20:42.591 [2024-11-26 21:01:33.335779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:42.591 [2024-11-26 21:01:33.395214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:44.491 21:01:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:44.491 21:01:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:20:44.491 21:01:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:44.491 21:01:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.491 21:01:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:44.491 21:01:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.491 21:01:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 4018035 00:20:44.491 21:01:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:20:44.491 21:01:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:20:45.424 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 4018035 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:20:45.424 21:01:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 4017855 00:20:45.424 21:01:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:20:45.424 21:01:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:45.424 21:01:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:20:45.424 21:01:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:20:45.424 21:01:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:45.424 21:01:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:45.424 { 00:20:45.424 "params": { 00:20:45.424 "name": "Nvme$subsystem", 00:20:45.424 "trtype": "$TEST_TRANSPORT", 00:20:45.424 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:45.424 "adrfam": "ipv4", 00:20:45.424 "trsvcid": "$NVMF_PORT", 00:20:45.424 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:20:45.424 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:45.424 "hdgst": ${hdgst:-false}, 00:20:45.424 "ddgst": ${ddgst:-false} 00:20:45.424 }, 00:20:45.424 "method": "bdev_nvme_attach_controller" 00:20:45.424 } 00:20:45.424 EOF 00:20:45.424 )") 00:20:45.424 21:01:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:45.424 21:01:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:45.424 21:01:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:45.424 { 00:20:45.424 "params": { 00:20:45.424 "name": "Nvme$subsystem", 00:20:45.424 "trtype": "$TEST_TRANSPORT", 00:20:45.424 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:45.424 "adrfam": "ipv4", 00:20:45.424 "trsvcid": "$NVMF_PORT", 00:20:45.424 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:45.424 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:45.424 "hdgst": ${hdgst:-false}, 00:20:45.424 "ddgst": ${ddgst:-false} 00:20:45.424 }, 00:20:45.424 "method": "bdev_nvme_attach_controller" 00:20:45.424 } 00:20:45.424 EOF 00:20:45.424 )") 00:20:45.424 21:01:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:45.424 21:01:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:45.424 21:01:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:45.424 { 00:20:45.424 "params": { 00:20:45.424 "name": "Nvme$subsystem", 00:20:45.424 "trtype": "$TEST_TRANSPORT", 00:20:45.424 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:45.424 "adrfam": "ipv4", 00:20:45.424 "trsvcid": "$NVMF_PORT", 00:20:45.424 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:45.424 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:45.424 "hdgst": 
${hdgst:-false}, 00:20:45.424 "ddgst": ${ddgst:-false} 00:20:45.424 }, 00:20:45.424 "method": "bdev_nvme_attach_controller" 00:20:45.424 } 00:20:45.424 EOF 00:20:45.424 )") 00:20:45.424 21:01:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:45.424 21:01:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:45.424 21:01:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:45.424 { 00:20:45.424 "params": { 00:20:45.424 "name": "Nvme$subsystem", 00:20:45.424 "trtype": "$TEST_TRANSPORT", 00:20:45.424 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:45.424 "adrfam": "ipv4", 00:20:45.424 "trsvcid": "$NVMF_PORT", 00:20:45.424 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:45.424 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:45.424 "hdgst": ${hdgst:-false}, 00:20:45.424 "ddgst": ${ddgst:-false} 00:20:45.424 }, 00:20:45.424 "method": "bdev_nvme_attach_controller" 00:20:45.424 } 00:20:45.424 EOF 00:20:45.424 )") 00:20:45.424 21:01:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:45.424 21:01:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:45.424 21:01:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:45.424 { 00:20:45.424 "params": { 00:20:45.424 "name": "Nvme$subsystem", 00:20:45.424 "trtype": "$TEST_TRANSPORT", 00:20:45.424 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:45.424 "adrfam": "ipv4", 00:20:45.424 "trsvcid": "$NVMF_PORT", 00:20:45.424 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:45.424 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:45.424 "hdgst": ${hdgst:-false}, 00:20:45.424 "ddgst": ${ddgst:-false} 00:20:45.424 }, 00:20:45.424 "method": "bdev_nvme_attach_controller" 
00:20:45.424 } 00:20:45.424 EOF 00:20:45.424 )") 00:20:45.425 21:01:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:45.425 21:01:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:45.425 21:01:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:45.425 { 00:20:45.425 "params": { 00:20:45.425 "name": "Nvme$subsystem", 00:20:45.425 "trtype": "$TEST_TRANSPORT", 00:20:45.425 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:45.425 "adrfam": "ipv4", 00:20:45.425 "trsvcid": "$NVMF_PORT", 00:20:45.425 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:45.425 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:45.425 "hdgst": ${hdgst:-false}, 00:20:45.425 "ddgst": ${ddgst:-false} 00:20:45.425 }, 00:20:45.425 "method": "bdev_nvme_attach_controller" 00:20:45.425 } 00:20:45.425 EOF 00:20:45.425 )") 00:20:45.425 21:01:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:45.425 21:01:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:45.425 21:01:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:45.425 { 00:20:45.425 "params": { 00:20:45.425 "name": "Nvme$subsystem", 00:20:45.425 "trtype": "$TEST_TRANSPORT", 00:20:45.425 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:45.425 "adrfam": "ipv4", 00:20:45.425 "trsvcid": "$NVMF_PORT", 00:20:45.425 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:45.425 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:45.425 "hdgst": ${hdgst:-false}, 00:20:45.425 "ddgst": ${ddgst:-false} 00:20:45.425 }, 00:20:45.425 "method": "bdev_nvme_attach_controller" 00:20:45.425 } 00:20:45.425 EOF 00:20:45.425 )") 00:20:45.425 21:01:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@582 -- # cat 00:20:45.425 21:01:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:45.425 21:01:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:45.425 { 00:20:45.425 "params": { 00:20:45.425 "name": "Nvme$subsystem", 00:20:45.425 "trtype": "$TEST_TRANSPORT", 00:20:45.425 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:45.425 "adrfam": "ipv4", 00:20:45.425 "trsvcid": "$NVMF_PORT", 00:20:45.425 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:45.425 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:45.425 "hdgst": ${hdgst:-false}, 00:20:45.425 "ddgst": ${ddgst:-false} 00:20:45.425 }, 00:20:45.425 "method": "bdev_nvme_attach_controller" 00:20:45.425 } 00:20:45.425 EOF 00:20:45.425 )") 00:20:45.425 21:01:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:45.425 21:01:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:45.425 21:01:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:45.425 { 00:20:45.425 "params": { 00:20:45.425 "name": "Nvme$subsystem", 00:20:45.425 "trtype": "$TEST_TRANSPORT", 00:20:45.425 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:45.425 "adrfam": "ipv4", 00:20:45.425 "trsvcid": "$NVMF_PORT", 00:20:45.425 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:45.425 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:45.425 "hdgst": ${hdgst:-false}, 00:20:45.425 "ddgst": ${ddgst:-false} 00:20:45.425 }, 00:20:45.425 "method": "bdev_nvme_attach_controller" 00:20:45.425 } 00:20:45.425 EOF 00:20:45.425 )") 00:20:45.425 21:01:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:45.425 21:01:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:45.425 21:01:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:45.425 { 00:20:45.425 "params": { 00:20:45.425 "name": "Nvme$subsystem", 00:20:45.425 "trtype": "$TEST_TRANSPORT", 00:20:45.425 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:45.425 "adrfam": "ipv4", 00:20:45.425 "trsvcid": "$NVMF_PORT", 00:20:45.425 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:45.425 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:45.425 "hdgst": ${hdgst:-false}, 00:20:45.425 "ddgst": ${ddgst:-false} 00:20:45.425 }, 00:20:45.425 "method": "bdev_nvme_attach_controller" 00:20:45.425 } 00:20:45.425 EOF 00:20:45.425 )") 00:20:45.425 21:01:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:45.425 21:01:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:20:45.425 21:01:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:20:45.425 21:01:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:45.425 "params": { 00:20:45.425 "name": "Nvme1", 00:20:45.425 "trtype": "tcp", 00:20:45.425 "traddr": "10.0.0.2", 00:20:45.425 "adrfam": "ipv4", 00:20:45.425 "trsvcid": "4420", 00:20:45.425 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.425 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:45.425 "hdgst": false, 00:20:45.425 "ddgst": false 00:20:45.425 }, 00:20:45.425 "method": "bdev_nvme_attach_controller" 00:20:45.425 },{ 00:20:45.425 "params": { 00:20:45.425 "name": "Nvme2", 00:20:45.425 "trtype": "tcp", 00:20:45.425 "traddr": "10.0.0.2", 00:20:45.425 "adrfam": "ipv4", 00:20:45.425 "trsvcid": "4420", 00:20:45.425 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:45.425 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:45.425 "hdgst": false, 00:20:45.425 "ddgst": false 00:20:45.425 }, 
00:20:45.425 "method": "bdev_nvme_attach_controller" 00:20:45.425 },{ 00:20:45.425 "params": { 00:20:45.425 "name": "Nvme3", 00:20:45.425 "trtype": "tcp", 00:20:45.425 "traddr": "10.0.0.2", 00:20:45.425 "adrfam": "ipv4", 00:20:45.425 "trsvcid": "4420", 00:20:45.425 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:45.425 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:45.425 "hdgst": false, 00:20:45.425 "ddgst": false 00:20:45.425 }, 00:20:45.425 "method": "bdev_nvme_attach_controller" 00:20:45.425 },{ 00:20:45.425 "params": { 00:20:45.425 "name": "Nvme4", 00:20:45.425 "trtype": "tcp", 00:20:45.425 "traddr": "10.0.0.2", 00:20:45.425 "adrfam": "ipv4", 00:20:45.425 "trsvcid": "4420", 00:20:45.425 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:45.425 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:45.425 "hdgst": false, 00:20:45.425 "ddgst": false 00:20:45.425 }, 00:20:45.425 "method": "bdev_nvme_attach_controller" 00:20:45.425 },{ 00:20:45.425 "params": { 00:20:45.425 "name": "Nvme5", 00:20:45.425 "trtype": "tcp", 00:20:45.425 "traddr": "10.0.0.2", 00:20:45.425 "adrfam": "ipv4", 00:20:45.425 "trsvcid": "4420", 00:20:45.425 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:45.425 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:45.425 "hdgst": false, 00:20:45.425 "ddgst": false 00:20:45.425 }, 00:20:45.425 "method": "bdev_nvme_attach_controller" 00:20:45.425 },{ 00:20:45.425 "params": { 00:20:45.425 "name": "Nvme6", 00:20:45.425 "trtype": "tcp", 00:20:45.425 "traddr": "10.0.0.2", 00:20:45.425 "adrfam": "ipv4", 00:20:45.425 "trsvcid": "4420", 00:20:45.425 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:45.425 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:45.425 "hdgst": false, 00:20:45.425 "ddgst": false 00:20:45.425 }, 00:20:45.425 "method": "bdev_nvme_attach_controller" 00:20:45.425 },{ 00:20:45.425 "params": { 00:20:45.425 "name": "Nvme7", 00:20:45.425 "trtype": "tcp", 00:20:45.425 "traddr": "10.0.0.2", 00:20:45.425 "adrfam": "ipv4", 00:20:45.426 "trsvcid": "4420", 00:20:45.426 
"subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:45.426 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:45.426 "hdgst": false, 00:20:45.426 "ddgst": false 00:20:45.426 }, 00:20:45.426 "method": "bdev_nvme_attach_controller" 00:20:45.426 },{ 00:20:45.426 "params": { 00:20:45.426 "name": "Nvme8", 00:20:45.426 "trtype": "tcp", 00:20:45.426 "traddr": "10.0.0.2", 00:20:45.426 "adrfam": "ipv4", 00:20:45.426 "trsvcid": "4420", 00:20:45.426 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:45.426 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:45.426 "hdgst": false, 00:20:45.426 "ddgst": false 00:20:45.426 }, 00:20:45.426 "method": "bdev_nvme_attach_controller" 00:20:45.426 },{ 00:20:45.426 "params": { 00:20:45.426 "name": "Nvme9", 00:20:45.426 "trtype": "tcp", 00:20:45.426 "traddr": "10.0.0.2", 00:20:45.426 "adrfam": "ipv4", 00:20:45.426 "trsvcid": "4420", 00:20:45.426 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:45.426 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:45.426 "hdgst": false, 00:20:45.426 "ddgst": false 00:20:45.426 }, 00:20:45.426 "method": "bdev_nvme_attach_controller" 00:20:45.426 },{ 00:20:45.426 "params": { 00:20:45.426 "name": "Nvme10", 00:20:45.426 "trtype": "tcp", 00:20:45.426 "traddr": "10.0.0.2", 00:20:45.426 "adrfam": "ipv4", 00:20:45.426 "trsvcid": "4420", 00:20:45.426 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:45.426 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:45.426 "hdgst": false, 00:20:45.426 "ddgst": false 00:20:45.426 }, 00:20:45.426 "method": "bdev_nvme_attach_controller" 00:20:45.426 }' 00:20:45.426 [2024-11-26 21:01:36.343951] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
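The repeated `config+=("$(cat <<-EOF ... EOF)")` blocks traced above come from SPDK's `gen_nvmf_target_json` helper in `nvmf/common.sh`: for each subsystem index it appends one `bdev_nvme_attach_controller` params object, then joins the fragments with `IFS=,` and prints them (the log shows the result being fed to bdevperf via `--json /dev/fd/62`). A minimal self-contained sketch of that pattern follows; variable names and defaults such as `TEST_TRANSPORT`, `NVMF_FIRST_TARGET_IP`, and `NVMF_PORT` mirror the log, but this is an illustration of the technique, not the exact helper:

```shell
#!/usr/bin/env bash
# Sketch of the per-subsystem config generation seen in the trace.
# Each loop iteration appends one JSON fragment via a heredoc; the
# fragments are then comma-joined with IFS=, exactly as in the log.
gen_target_json() {
    local subsystem
    local config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "${TEST_TRANSPORT:-tcp}",
    "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.2}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Join the fragments with a comma, as "IFS=," + "${config[*]}" does
    # in the traced helper.
    local IFS=,
    printf '%s\n' "${config[*]}"
}

gen_target_json 1 2 3
```

Note that the joined output is a comma-separated run of objects (`{...},{...}`), not a standalone JSON document: the real helper splices these fragments into a larger target configuration before `jq .` validates and pretty-prints it, which is why the log shows `jq .`, `IFS=,`, and `printf '%s\n'` executed in sequence.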
00:20:45.426 [2024-11-26 21:01:36.344050] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4018453 ] 00:20:45.684 [2024-11-26 21:01:36.417231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:45.684 [2024-11-26 21:01:36.478383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:47.056 Running I/O for 1 seconds... 00:20:48.249 1824.00 IOPS, 114.00 MiB/s 00:20:48.249 Latency(us) 00:20:48.249 [2024-11-26T20:01:39.187Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:48.249 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:48.249 Verification LBA range: start 0x0 length 0x400 00:20:48.249 Nvme1n1 : 1.10 238.23 14.89 0.00 0.00 264176.67 6213.78 245444.46 00:20:48.249 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:48.249 Verification LBA range: start 0x0 length 0x400 00:20:48.249 Nvme2n1 : 1.10 236.31 14.77 0.00 0.00 261479.68 6262.33 250104.79 00:20:48.249 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:48.249 Verification LBA range: start 0x0 length 0x400 00:20:48.249 Nvme3n1 : 1.11 231.27 14.45 0.00 0.00 264951.85 19029.71 259425.47 00:20:48.249 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:48.249 Verification LBA range: start 0x0 length 0x400 00:20:48.249 Nvme4n1 : 1.09 242.73 15.17 0.00 0.00 245378.42 7378.87 250104.79 00:20:48.249 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:48.249 Verification LBA range: start 0x0 length 0x400 00:20:48.249 Nvme5n1 : 1.11 229.68 14.35 0.00 0.00 257773.23 21845.33 259425.47 00:20:48.249 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:48.249 Verification LBA range: start 0x0 length 
0x400 00:20:48.249 Nvme6n1 : 1.12 228.49 14.28 0.00 0.00 254615.70 22427.88 254765.13 00:20:48.249 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:48.249 Verification LBA range: start 0x0 length 0x400 00:20:48.249 Nvme7n1 : 1.12 227.72 14.23 0.00 0.00 251175.82 21942.42 251658.24 00:20:48.249 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:48.249 Verification LBA range: start 0x0 length 0x400 00:20:48.249 Nvme8n1 : 1.18 270.36 16.90 0.00 0.00 209236.42 8446.86 256318.58 00:20:48.249 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:48.249 Verification LBA range: start 0x0 length 0x400 00:20:48.249 Nvme9n1 : 1.17 228.61 14.29 0.00 0.00 238053.27 4733.16 265639.25 00:20:48.249 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:48.249 Verification LBA range: start 0x0 length 0x400 00:20:48.249 Nvme10n1 : 1.20 267.13 16.70 0.00 0.00 204748.99 5922.51 278066.82 00:20:48.249 [2024-11-26T20:01:39.187Z] =================================================================================================================== 00:20:48.249 [2024-11-26T20:01:39.187Z] Total : 2400.53 150.03 0.00 0.00 243389.98 4733.16 278066.82 00:20:48.507 21:01:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:20:48.507 21:01:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:20:48.507 21:01:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:48.507 21:01:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:48.507 21:01:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@46 -- # nvmftestfini 00:20:48.507 21:01:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:48.507 21:01:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:20:48.507 21:01:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:48.507 21:01:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:20:48.507 21:01:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:48.507 21:01:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:48.507 rmmod nvme_tcp 00:20:48.507 rmmod nvme_fabrics 00:20:48.507 rmmod nvme_keyring 00:20:48.507 21:01:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:48.507 21:01:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:20:48.507 21:01:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:20:48.507 21:01:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 4017855 ']' 00:20:48.507 21:01:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 4017855 00:20:48.507 21:01:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 4017855 ']' 00:20:48.507 21:01:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 4017855 00:20:48.507 21:01:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:20:48.507 21:01:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 
-- # '[' Linux = Linux ']' 00:20:48.507 21:01:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4017855 00:20:48.507 21:01:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:48.507 21:01:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:48.507 21:01:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4017855' 00:20:48.507 killing process with pid 4017855 00:20:48.507 21:01:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 4017855 00:20:48.507 21:01:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 4017855 00:20:49.073 21:01:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:49.073 21:01:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:49.073 21:01:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:49.073 21:01:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:20:49.073 21:01:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:20:49.073 21:01:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:20:49.073 21:01:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:49.073 21:01:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:49.073 21:01:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:49.073 21:01:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:49.073 21:01:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:49.073 21:01:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:51.612 21:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:51.612 00:20:51.612 real 0m11.864s 00:20:51.612 user 0m34.164s 00:20:51.612 sys 0m3.432s 00:20:51.612 21:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:51.612 21:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:51.612 ************************************ 00:20:51.612 END TEST nvmf_shutdown_tc1 00:20:51.612 ************************************ 00:20:51.612 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:20:51.612 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:51.612 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:51.612 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:51.612 ************************************ 00:20:51.612 START TEST nvmf_shutdown_tc2 00:20:51.612 ************************************ 00:20:51.612 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:20:51.612 21:01:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:20:51.612 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:51.612 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:51.612 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:51.612 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:51.612 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:51.612 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:51.612 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:51.612 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:51.612 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:51.612 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:51.612 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:51.612 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:51.612 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:51.612 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:51.612 21:01:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:51.612 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:51.612 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:51.612 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:51.612 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:51.612 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:51.612 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:20:51.612 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:51.612 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:20:51.612 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:20:51.612 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:20:51.612 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:20:51.612 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:20:51.612 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:51.612 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:51.612 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:20:51.612 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:51.612 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:51.612 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:51.612 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:51.612 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:51.612 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:51.612 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:51.612 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:51.612 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:51.612 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:51.612 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:51.612 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:51.612 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:51.613 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:51.613 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:51.613 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:51.613 21:01:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:51.613 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:51.613 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:51.613 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:20:51.613 00:20:51.613 --- 10.0.0.2 ping statistics --- 00:20:51.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:51.613 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:51.613 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:51.613 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:20:51.613 00:20:51.613 --- 10.0.0.1 ping statistics --- 00:20:51.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:51.613 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:51.613 
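The `nvmf_tcp_init` sequence traced above moves one port of the NIC into a network namespace, assigns the 10.0.0.0/24 address pair, brings the links up, and opens TCP port 4420 with a comment-tagged iptables rule (the `ipts` wrapper expands to the `iptables ... -m comment` line shown). A minimal sketch of those steps, assuming a hypothetical `RUN` hook that defaults to `echo` so it dry-runs without root or the real `cvl_0_*` interfaces:

```shell
#!/usr/bin/env bash
# Hedged sketch of the nvmf_tcp_init steps in the log above. RUN is a
# hypothetical dry-run hook (not part of SPDK); it defaults to `echo` so the
# commands are printed rather than executed.
RUN=${RUN:-echo}
NS=cvl_0_0_ns_spdk

setup_nvmf_tcp() {
    $RUN ip netns add "$NS"
    $RUN ip link set cvl_0_0 netns "$NS"          # target port into the netns
    $RUN ip addr add 10.0.0.1/24 dev cvl_0_1      # initiator side stays in root ns
    $RUN ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    $RUN ip link set cvl_0_1 up
    $RUN ip netns exec "$NS" ip link set cvl_0_0 up
    $RUN ip netns exec "$NS" ip link set lo up
    # Like SPDK's ipts() wrapper: record the original arguments in a comment
    # so every SPDK-added rule can be found and removed at teardown.
    $RUN iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
}

setup_nvmf_tcp
```

With `RUN` left at its `echo` default this prints the command sequence; the subsequent `ping -c 1` in each direction (as in the log) is how the harness verifies the namespace wiring before starting the target.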
21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=4019221 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 4019221 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 4019221 ']' 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:51.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:51.613 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:51.614 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:51.614 [2024-11-26 21:01:42.271847] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:20:51.614 [2024-11-26 21:01:42.271937] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:51.614 [2024-11-26 21:01:42.349791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:51.614 [2024-11-26 21:01:42.412563] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:51.614 [2024-11-26 21:01:42.412632] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:51.614 [2024-11-26 21:01:42.412648] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:51.614 [2024-11-26 21:01:42.412662] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:51.614 [2024-11-26 21:01:42.412674] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:51.614 [2024-11-26 21:01:42.414352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:51.614 [2024-11-26 21:01:42.414469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:51.614 [2024-11-26 21:01:42.414534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:51.614 [2024-11-26 21:01:42.414536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:51.614 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:51.614 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:20:51.614 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:51.614 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:51.614 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:51.872 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:51.872 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:51.872 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.872 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:51.872 [2024-11-26 21:01:42.565783] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:51.872 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.872 21:01:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:51.872 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:51.872 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:51.872 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:51.872 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:51.872 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:51.872 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:51.872 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:51.872 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:51.872 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:51.872 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:51.872 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:51.872 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:51.872 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:51.872 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:20:51.872 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:51.872 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:51.872 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:51.872 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:51.872 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:51.872 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:51.872 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:51.872 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:51.872 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:51.872 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:51.872 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:51.872 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.872 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:51.872 Malloc1 00:20:51.872 [2024-11-26 21:01:42.659072] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:51.872 Malloc2 00:20:51.872 Malloc3 00:20:51.872 Malloc4 00:20:52.131 Malloc5 00:20:52.131 Malloc6 00:20:52.131 Malloc7 00:20:52.131 Malloc8 00:20:52.131 Malloc9 
00:20:52.394 Malloc10 00:20:52.394 21:01:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.394 21:01:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:52.394 21:01:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:52.394 21:01:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:52.394 21:01:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=4019399 00:20:52.394 21:01:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 4019399 /var/tmp/bdevperf.sock 00:20:52.394 21:01:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 4019399 ']' 00:20:52.394 21:01:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:52.394 21:01:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:52.394 21:01:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:52.394 21:01:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:52.394 21:01:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:20:52.394 21:01:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:20:52.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:52.394 21:01:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:20:52.394 21:01:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:52.394 21:01:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:52.394 21:01:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:52.394 21:01:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:52.394 { 00:20:52.394 "params": { 00:20:52.394 "name": "Nvme$subsystem", 00:20:52.394 "trtype": "$TEST_TRANSPORT", 00:20:52.394 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:52.394 "adrfam": "ipv4", 00:20:52.394 "trsvcid": "$NVMF_PORT", 00:20:52.394 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:52.394 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:52.394 "hdgst": ${hdgst:-false}, 00:20:52.394 "ddgst": ${ddgst:-false} 00:20:52.394 }, 00:20:52.394 "method": "bdev_nvme_attach_controller" 00:20:52.394 } 00:20:52.394 EOF 00:20:52.394 )") 00:20:52.394 21:01:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:52.394 21:01:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:52.394 21:01:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:52.394 { 00:20:52.395 "params": { 00:20:52.395 "name": "Nvme$subsystem", 00:20:52.395 "trtype": "$TEST_TRANSPORT", 00:20:52.395 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:52.395 "adrfam": "ipv4", 00:20:52.395 "trsvcid": "$NVMF_PORT", 00:20:52.395 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:20:52.395 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:52.395 "hdgst": ${hdgst:-false}, 00:20:52.395 "ddgst": ${ddgst:-false} 00:20:52.395 }, 00:20:52.395 "method": "bdev_nvme_attach_controller" 00:20:52.395 } 00:20:52.395 EOF 00:20:52.395 )") 00:20:52.395 21:01:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:52.395 21:01:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:52.395 21:01:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:52.395 { 00:20:52.395 "params": { 00:20:52.395 "name": "Nvme$subsystem", 00:20:52.395 "trtype": "$TEST_TRANSPORT", 00:20:52.395 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:52.395 "adrfam": "ipv4", 00:20:52.395 "trsvcid": "$NVMF_PORT", 00:20:52.395 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:52.395 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:52.395 "hdgst": ${hdgst:-false}, 00:20:52.395 "ddgst": ${ddgst:-false} 00:20:52.395 }, 00:20:52.395 "method": "bdev_nvme_attach_controller" 00:20:52.395 } 00:20:52.395 EOF 00:20:52.395 )") 00:20:52.395 21:01:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:52.395 21:01:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:52.395 21:01:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:52.395 { 00:20:52.395 "params": { 00:20:52.395 "name": "Nvme$subsystem", 00:20:52.395 "trtype": "$TEST_TRANSPORT", 00:20:52.395 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:52.395 "adrfam": "ipv4", 00:20:52.395 "trsvcid": "$NVMF_PORT", 00:20:52.395 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:52.395 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:52.395 "hdgst": 
${hdgst:-false}, 00:20:52.395 "ddgst": ${ddgst:-false} 00:20:52.395 }, 00:20:52.395 "method": "bdev_nvme_attach_controller" 00:20:52.395 } 00:20:52.395 EOF 00:20:52.395 )") 00:20:52.395 21:01:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:52.395 21:01:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:52.395 21:01:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:52.395 { 00:20:52.395 "params": { 00:20:52.395 "name": "Nvme$subsystem", 00:20:52.395 "trtype": "$TEST_TRANSPORT", 00:20:52.395 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:52.395 "adrfam": "ipv4", 00:20:52.395 "trsvcid": "$NVMF_PORT", 00:20:52.395 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:52.395 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:52.395 "hdgst": ${hdgst:-false}, 00:20:52.395 "ddgst": ${ddgst:-false} 00:20:52.395 }, 00:20:52.395 "method": "bdev_nvme_attach_controller" 00:20:52.395 } 00:20:52.395 EOF 00:20:52.395 )") 00:20:52.395 21:01:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:52.395 21:01:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:52.395 21:01:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:52.395 { 00:20:52.395 "params": { 00:20:52.395 "name": "Nvme$subsystem", 00:20:52.395 "trtype": "$TEST_TRANSPORT", 00:20:52.395 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:52.395 "adrfam": "ipv4", 00:20:52.395 "trsvcid": "$NVMF_PORT", 00:20:52.395 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:52.395 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:52.395 "hdgst": ${hdgst:-false}, 00:20:52.395 "ddgst": ${ddgst:-false} 00:20:52.395 }, 00:20:52.395 "method": "bdev_nvme_attach_controller" 
00:20:52.395 } 00:20:52.395 EOF 00:20:52.395 )") 00:20:52.395 21:01:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:52.395 21:01:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:52.395 21:01:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:52.395 { 00:20:52.395 "params": { 00:20:52.395 "name": "Nvme$subsystem", 00:20:52.395 "trtype": "$TEST_TRANSPORT", 00:20:52.395 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:52.395 "adrfam": "ipv4", 00:20:52.395 "trsvcid": "$NVMF_PORT", 00:20:52.395 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:52.395 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:52.395 "hdgst": ${hdgst:-false}, 00:20:52.395 "ddgst": ${ddgst:-false} 00:20:52.395 }, 00:20:52.395 "method": "bdev_nvme_attach_controller" 00:20:52.395 } 00:20:52.395 EOF 00:20:52.395 )") 00:20:52.395 21:01:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:52.395 21:01:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:52.395 21:01:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:52.395 { 00:20:52.395 "params": { 00:20:52.395 "name": "Nvme$subsystem", 00:20:52.395 "trtype": "$TEST_TRANSPORT", 00:20:52.395 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:52.395 "adrfam": "ipv4", 00:20:52.395 "trsvcid": "$NVMF_PORT", 00:20:52.395 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:52.395 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:52.395 "hdgst": ${hdgst:-false}, 00:20:52.395 "ddgst": ${ddgst:-false} 00:20:52.395 }, 00:20:52.395 "method": "bdev_nvme_attach_controller" 00:20:52.395 } 00:20:52.395 EOF 00:20:52.395 )") 00:20:52.395 21:01:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/common.sh@582 -- # cat 00:20:52.395 21:01:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:52.395 21:01:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:52.395 { 00:20:52.395 "params": { 00:20:52.395 "name": "Nvme$subsystem", 00:20:52.395 "trtype": "$TEST_TRANSPORT", 00:20:52.395 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:52.395 "adrfam": "ipv4", 00:20:52.395 "trsvcid": "$NVMF_PORT", 00:20:52.395 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:52.395 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:52.395 "hdgst": ${hdgst:-false}, 00:20:52.395 "ddgst": ${ddgst:-false} 00:20:52.395 }, 00:20:52.395 "method": "bdev_nvme_attach_controller" 00:20:52.395 } 00:20:52.395 EOF 00:20:52.395 )") 00:20:52.395 21:01:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:52.395 21:01:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:52.395 21:01:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:52.395 { 00:20:52.395 "params": { 00:20:52.395 "name": "Nvme$subsystem", 00:20:52.395 "trtype": "$TEST_TRANSPORT", 00:20:52.395 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:52.395 "adrfam": "ipv4", 00:20:52.395 "trsvcid": "$NVMF_PORT", 00:20:52.395 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:52.395 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:52.395 "hdgst": ${hdgst:-false}, 00:20:52.395 "ddgst": ${ddgst:-false} 00:20:52.395 }, 00:20:52.395 "method": "bdev_nvme_attach_controller" 00:20:52.395 } 00:20:52.395 EOF 00:20:52.395 )") 00:20:52.395 21:01:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:52.395 21:01:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@584 -- # jq . 00:20:52.395 21:01:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:20:52.395 21:01:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:52.395 "params": { 00:20:52.395 "name": "Nvme1", 00:20:52.395 "trtype": "tcp", 00:20:52.395 "traddr": "10.0.0.2", 00:20:52.395 "adrfam": "ipv4", 00:20:52.395 "trsvcid": "4420", 00:20:52.395 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:52.395 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:52.395 "hdgst": false, 00:20:52.395 "ddgst": false 00:20:52.395 }, 00:20:52.395 "method": "bdev_nvme_attach_controller" 00:20:52.395 },{ 00:20:52.395 "params": { 00:20:52.395 "name": "Nvme2", 00:20:52.395 "trtype": "tcp", 00:20:52.395 "traddr": "10.0.0.2", 00:20:52.395 "adrfam": "ipv4", 00:20:52.395 "trsvcid": "4420", 00:20:52.395 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:52.395 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:52.395 "hdgst": false, 00:20:52.395 "ddgst": false 00:20:52.395 }, 00:20:52.395 "method": "bdev_nvme_attach_controller" 00:20:52.395 },{ 00:20:52.395 "params": { 00:20:52.395 "name": "Nvme3", 00:20:52.395 "trtype": "tcp", 00:20:52.395 "traddr": "10.0.0.2", 00:20:52.395 "adrfam": "ipv4", 00:20:52.395 "trsvcid": "4420", 00:20:52.395 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:52.395 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:52.395 "hdgst": false, 00:20:52.395 "ddgst": false 00:20:52.395 }, 00:20:52.395 "method": "bdev_nvme_attach_controller" 00:20:52.395 },{ 00:20:52.395 "params": { 00:20:52.395 "name": "Nvme4", 00:20:52.395 "trtype": "tcp", 00:20:52.395 "traddr": "10.0.0.2", 00:20:52.395 "adrfam": "ipv4", 00:20:52.396 "trsvcid": "4420", 00:20:52.396 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:52.396 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:52.396 "hdgst": false, 00:20:52.396 "ddgst": false 00:20:52.396 }, 00:20:52.396 "method": "bdev_nvme_attach_controller" 00:20:52.396 },{ 
00:20:52.396 "params": { 00:20:52.396 "name": "Nvme5", 00:20:52.396 "trtype": "tcp", 00:20:52.396 "traddr": "10.0.0.2", 00:20:52.396 "adrfam": "ipv4", 00:20:52.396 "trsvcid": "4420", 00:20:52.396 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:52.396 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:52.396 "hdgst": false, 00:20:52.396 "ddgst": false 00:20:52.396 }, 00:20:52.396 "method": "bdev_nvme_attach_controller" 00:20:52.396 },{ 00:20:52.396 "params": { 00:20:52.396 "name": "Nvme6", 00:20:52.396 "trtype": "tcp", 00:20:52.396 "traddr": "10.0.0.2", 00:20:52.396 "adrfam": "ipv4", 00:20:52.396 "trsvcid": "4420", 00:20:52.396 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:52.396 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:52.396 "hdgst": false, 00:20:52.396 "ddgst": false 00:20:52.396 }, 00:20:52.396 "method": "bdev_nvme_attach_controller" 00:20:52.396 },{ 00:20:52.396 "params": { 00:20:52.396 "name": "Nvme7", 00:20:52.396 "trtype": "tcp", 00:20:52.396 "traddr": "10.0.0.2", 00:20:52.396 "adrfam": "ipv4", 00:20:52.396 "trsvcid": "4420", 00:20:52.396 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:52.396 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:52.396 "hdgst": false, 00:20:52.396 "ddgst": false 00:20:52.396 }, 00:20:52.396 "method": "bdev_nvme_attach_controller" 00:20:52.396 },{ 00:20:52.396 "params": { 00:20:52.396 "name": "Nvme8", 00:20:52.396 "trtype": "tcp", 00:20:52.396 "traddr": "10.0.0.2", 00:20:52.396 "adrfam": "ipv4", 00:20:52.396 "trsvcid": "4420", 00:20:52.396 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:52.396 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:52.396 "hdgst": false, 00:20:52.396 "ddgst": false 00:20:52.396 }, 00:20:52.396 "method": "bdev_nvme_attach_controller" 00:20:52.396 },{ 00:20:52.396 "params": { 00:20:52.396 "name": "Nvme9", 00:20:52.396 "trtype": "tcp", 00:20:52.396 "traddr": "10.0.0.2", 00:20:52.396 "adrfam": "ipv4", 00:20:52.396 "trsvcid": "4420", 00:20:52.396 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:52.396 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:20:52.396 "hdgst": false, 00:20:52.396 "ddgst": false 00:20:52.396 }, 00:20:52.396 "method": "bdev_nvme_attach_controller" 00:20:52.396 },{ 00:20:52.396 "params": { 00:20:52.396 "name": "Nvme10", 00:20:52.396 "trtype": "tcp", 00:20:52.396 "traddr": "10.0.0.2", 00:20:52.396 "adrfam": "ipv4", 00:20:52.396 "trsvcid": "4420", 00:20:52.396 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:52.396 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:52.396 "hdgst": false, 00:20:52.396 "ddgst": false 00:20:52.396 }, 00:20:52.396 "method": "bdev_nvme_attach_controller" 00:20:52.396 }' 00:20:52.396 [2024-11-26 21:01:43.180305] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:20:52.396 [2024-11-26 21:01:43.180378] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4019399 ] 00:20:52.396 [2024-11-26 21:01:43.251692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:52.396 [2024-11-26 21:01:43.311224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:54.294 Running I/O for 10 seconds... 
00:20:54.294 21:01:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:54.294 21:01:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:20:54.294 21:01:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:54.294 21:01:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.294 21:01:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:54.553 21:01:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.553 21:01:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:54.553 21:01:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:54.553 21:01:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:20:54.553 21:01:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:20:54.553 21:01:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:20:54.553 21:01:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:20:54.553 21:01:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:54.553 21:01:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:54.553 21:01:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:54.553 21:01:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.553 21:01:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:54.553 21:01:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.553 21:01:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:20:54.553 21:01:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:20:54.553 21:01:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:20:54.811 21:01:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:20:54.811 21:01:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:54.811 21:01:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:54.811 21:01:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:54.811 21:01:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.812 21:01:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:54.812 21:01:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.812 21:01:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:20:54.812 21:01:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:20:54.812 21:01:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:20:54.812 21:01:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:20:54.812 21:01:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:20:54.812 21:01:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 4019399 00:20:54.812 21:01:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 4019399 ']' 00:20:54.812 21:01:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 4019399 00:20:54.812 21:01:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:20:54.812 21:01:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:54.812 21:01:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4019399 00:20:54.812 21:01:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:54.812 21:01:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:54.812 21:01:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4019399' 00:20:54.812 killing process with pid 4019399 00:20:54.812 21:01:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 4019399 00:20:54.812 21:01:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 4019399 00:20:54.812 
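The `waitforio` trace above polls `bdev_get_iostat` over the bdevperf RPC socket, extracts `num_read_ops` with `jq`, and retries up to 10 times with a 0.25s sleep until the count reaches 100 (here 67 on the first poll, 131 on the second). A sketch of that polling loop, with a hypothetical `get_read_ops` stub standing in for the `rpc_cmd ... bdev_get_iostat | jq -r '.bdevs[0].num_read_ops'` pipeline:

```shell
#!/usr/bin/env bash
# Hypothetical stub: returns 70 ops per poll, simulating growing I/O.
# In the real test this is an RPC call against /var/tmp/bdevperf.sock.
get_read_ops() {
  echo $(( $1 * 70 ))
}

# Poll until the read-op count crosses the threshold or retries run out
# (mirrors target/shutdown.sh waitforio: i=10 retries, sleep 0.25).
waitforio() {
  local threshold=$1 i=10 polls=0 ret=1 count
  while (( i != 0 )); do
    polls=$(( polls + 1 ))
    count=$(get_read_ops "$polls")
    if [ "$count" -ge "$threshold" ]; then
      ret=0
      break
    fi
    sleep 0.25
    i=$(( i - 1 ))
  done
  return $ret
}

waitforio 100 && echo "io threshold reached"
```

With the stub above the first poll yields 70 (below 100, so it sleeps and retries) and the second yields 140, matching the shape of the 67-then-131 sequence in the log.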
Received shutdown signal, test time was about 0.835734 seconds 00:20:54.812 00:20:54.812 Latency(us) 00:20:54.812 [2024-11-26T20:01:45.750Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:54.812 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:54.812 Verification LBA range: start 0x0 length 0x400 00:20:54.812 Nvme1n1 : 0.83 232.38 14.52 0.00 0.00 271849.24 24466.77 267192.70 00:20:54.812 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:54.812 Verification LBA range: start 0x0 length 0x400 00:20:54.812 Nvme2n1 : 0.82 234.87 14.68 0.00 0.00 262827.17 22427.88 256318.58 00:20:54.812 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:54.812 Verification LBA range: start 0x0 length 0x400 00:20:54.812 Nvme3n1 : 0.79 243.29 15.21 0.00 0.00 247402.45 17670.45 256318.58 00:20:54.812 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:54.812 Verification LBA range: start 0x0 length 0x400 00:20:54.812 Nvme4n1 : 0.81 237.30 14.83 0.00 0.00 247862.61 17864.63 243891.01 00:20:54.812 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:54.812 Verification LBA range: start 0x0 length 0x400 00:20:54.812 Nvme5n1 : 0.83 229.97 14.37 0.00 0.00 249671.93 24175.50 271853.04 00:20:54.812 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:54.812 Verification LBA range: start 0x0 length 0x400 00:20:54.812 Nvme6n1 : 0.83 232.11 14.51 0.00 0.00 240968.69 29903.83 256318.58 00:20:54.812 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:54.812 Verification LBA range: start 0x0 length 0x400 00:20:54.812 Nvme7n1 : 0.80 239.90 14.99 0.00 0.00 226465.06 20388.98 222142.77 00:20:54.812 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:54.812 Verification LBA range: start 0x0 length 0x400 00:20:54.812 Nvme8n1 : 0.80 253.02 15.81 0.00 0.00 
204666.14 10777.03 233016.89 00:20:54.812 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:54.812 Verification LBA range: start 0x0 length 0x400 00:20:54.812 Nvme9n1 : 0.83 230.25 14.39 0.00 0.00 226335.86 21165.70 260978.92 00:20:54.812 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:54.812 Verification LBA range: start 0x0 length 0x400 00:20:54.812 Nvme10n1 : 0.79 167.85 10.49 0.00 0.00 292840.43 8155.59 299815.06 00:20:54.812 [2024-11-26T20:01:45.750Z] =================================================================================================================== 00:20:54.812 [2024-11-26T20:01:45.750Z] Total : 2300.94 143.81 0.00 0.00 245397.73 8155.59 299815.06 00:20:55.070 21:01:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:20:56.003 21:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 4019221 00:20:56.003 21:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:20:56.003 21:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:20:56.003 21:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:56.003 21:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:56.003 21:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:20:56.003 21:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:56.003 21:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- 
# sync 00:20:56.003 21:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:56.003 21:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:20:56.003 21:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:56.003 21:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:56.003 rmmod nvme_tcp 00:20:56.262 rmmod nvme_fabrics 00:20:56.262 rmmod nvme_keyring 00:20:56.262 21:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:56.262 21:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:20:56.262 21:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:20:56.262 21:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 4019221 ']' 00:20:56.262 21:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 4019221 00:20:56.262 21:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 4019221 ']' 00:20:56.262 21:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 4019221 00:20:56.262 21:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:20:56.262 21:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:56.262 21:01:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4019221 00:20:56.262 21:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:56.262 21:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:56.262 21:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4019221' 00:20:56.262 killing process with pid 4019221 00:20:56.262 21:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 4019221 00:20:56.262 21:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 4019221 00:20:56.832 21:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:56.832 21:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:56.832 21:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:56.832 21:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:20:56.832 21:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:20:56.832 21:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:56.832 21:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:20:56.832 21:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:56.832 21:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:56.832 21:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:56.832 21:01:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:56.832 21:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:58.742 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:58.742 00:20:58.742 real 0m7.502s 00:20:58.742 user 0m22.408s 00:20:58.742 sys 0m1.463s 00:20:58.742 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:58.742 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:58.742 ************************************ 00:20:58.742 END TEST nvmf_shutdown_tc2 00:20:58.742 ************************************ 00:20:58.742 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:20:58.742 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:58.742 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:58.742 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:58.742 ************************************ 00:20:58.742 START TEST nvmf_shutdown_tc3 00:20:58.742 ************************************ 00:20:58.742 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:20:58.742 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:20:58.742 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:58.742 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:58.742 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:58.742 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:58.742 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:58.742 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:58.742 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:58.742 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:58.742 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:58.742 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:58.742 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:58.742 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:58.742 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:58.742 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:58.742 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:58.742 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:58.742 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:20:58.742 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:58.742 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:58.742 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:58.742 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:20:58.742 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:58.742 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:20:58.742 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:20:58.742 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:20:58.742 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:20:58.742 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:20:58.742 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:58.742 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:58.742 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:58.742 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:58.742 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:58.743 
21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:58.743 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:58.743 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:58.743 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:58.743 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:58.743 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:58.743 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:58.743 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:58.743 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:58.743 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:58.743 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:58.743 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:58.743 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:58.743 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:58.743 21:01:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:58.743 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:58.743 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:58.743 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:58.743 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:58.743 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:58.743 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:58.743 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:58.743 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:58.743 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:58.743 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:58.743 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:58.743 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:58.743 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:58.743 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:58.743 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:58.743 21:01:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:58.743 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:58.743 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:58.743 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:58.743 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:58.743 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:58.743 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:58.743 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:58.743 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:58.743 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:58.743 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:58.743 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:58.743 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:58.743 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:58.743 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:58.743 21:01:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:58.743 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:58.743 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:58.743 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:58.743 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:58.743 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:58.743 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:58.743 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:58.743 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:58.743 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:58.743 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:58.743 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:58.743 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:58.743 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:58.743 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:58.743 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:20:58.743 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:58.743 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:58.743 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:58.743 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:58.743 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:58.743 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:58.743 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:58.743 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:58.743 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:58.743 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:58.743 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:58.743 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:59.003 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:59.003 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 
10.0.0.2/24 dev cvl_0_0 00:20:59.003 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:59.003 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:59.003 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:59.003 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:59.003 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:59.003 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:59.003 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:59.003 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:20:59.003 00:20:59.003 --- 10.0.0.2 ping statistics --- 00:20:59.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:59.003 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:20:59.003 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:59.003 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:59.003 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:20:59.003 00:20:59.003 --- 10.0.0.1 ping statistics --- 00:20:59.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:59.003 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:20:59.003 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:59.003 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:20:59.003 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:59.003 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:59.003 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:59.003 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:59.003 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:59.003 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:59.003 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:59.003 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:59.003 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:59.003 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:59.003 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:59.003 
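[Editor's note] The `nvmf_tcp_init` sequence above (address flush, namespace creation, per-side IP assignment, the iptables accept rule for port 4420, and the two-way ping check) condenses to the function below. This is a reconstruction from the log, not the harness itself; `RUN=echo` (the default here) makes it a dry run that only prints the commands, `RUN=sudo` would apply them on a machine that actually has the `cvl_*` devices.

```shell
# Dry-run sketch of the netns plumbing performed by nvmf_tcp_init above.
setup_tcp_ns() {
  local RUN="${RUN:-echo}"          # set RUN=sudo to actually apply
  local target_if=cvl_0_0 initiator_if=cvl_0_1 ns=cvl_0_0_ns_spdk
  $RUN ip -4 addr flush "$target_if"
  $RUN ip -4 addr flush "$initiator_if"
  $RUN ip netns add "$ns"
  $RUN ip link set "$target_if" netns "$ns"          # target NIC moves into the namespace
  $RUN ip addr add 10.0.0.1/24 dev "$initiator_if"   # initiator side stays in the root namespace
  $RUN ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
  $RUN ip link set "$initiator_if" up
  $RUN ip netns exec "$ns" ip link set "$target_if" up
  $RUN ip netns exec "$ns" ip link set lo up
  # open the NVMe/TCP port, then verify reachability in both directions
  $RUN iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
  $RUN ping -c 1 10.0.0.2
  $RUN ip netns exec "$ns" ping -c 1 10.0.0.1
}
```

The two pings at the end are the "64 bytes from 10.0.0.2 / 10.0.0.1" lines in the log: one from the root namespace to the target, one from inside the namespace back to the initiator.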
21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=4020194 00:20:59.003 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:59.003 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 4020194 00:20:59.003 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 4020194 ']' 00:20:59.003 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:59.003 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:59.003 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:59.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:59.003 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:59.003 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:59.003 [2024-11-26 21:01:49.847099] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:20:59.003 [2024-11-26 21:01:49.847180] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:59.003 [2024-11-26 21:01:49.928285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:59.262 [2024-11-26 21:01:49.992607] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:59.262 [2024-11-26 21:01:49.992694] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:59.262 [2024-11-26 21:01:49.992715] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:59.262 [2024-11-26 21:01:49.992728] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:59.262 [2024-11-26 21:01:49.992740] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:59.262 [2024-11-26 21:01:49.994464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:59.262 [2024-11-26 21:01:49.994562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:59.262 [2024-11-26 21:01:49.994629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:59.262 [2024-11-26 21:01:49.994632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:59.262 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:59.262 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:20:59.262 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:59.262 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:59.262 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:59.263 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:59.263 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:59.263 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.263 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:59.263 [2024-11-26 21:01:50.143442] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:59.263 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.263 21:01:50 
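[Editor's note] `nvmfappstart` launches `nvmf_tgt` inside the namespace and then blocks in `waitforlisten` until the target answers on `/var/tmp/spdk.sock` (the "Waiting for process to start up…" line above). The real helper in `common/autotest_common.sh` retries an RPC against the socket; the sketch below approximates it with a liveness check plus a socket-existence probe, which is an assumption, not the actual implementation.

```shell
# Approximate sketch of waitforlisten: poll until the target's RPC socket
# appears, bailing out early if the process died during startup.
waitforlisten() {
  local pid=$1 rpc_addr="${2:-/var/tmp/spdk.sock}" i
  for ((i = 0; i < 100; i++)); do
    kill -0 "$pid" 2>/dev/null || return 1   # process died while starting
    [ -S "$rpc_addr" ] && return 0           # socket is up: target is listening
    sleep 0.1
  done
  return 1
}
```

In the log the loop exits via the `(( i == 0 ))` / `return 0` pair once the socket is ready, after which the transport is created with `nvmf_create_transport -t tcp -o -u 8192`.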
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:59.263 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:59.263 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:59.263 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:59.263 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:59.263 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:59.263 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:59.263 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:59.263 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:59.263 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:59.263 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:59.263 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:59.263 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:59.263 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:59.263 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:20:59.263 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:59.263 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:59.263 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:59.263 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:59.263 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:59.263 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:59.263 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:59.263 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:59.263 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:59.263 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:59.263 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:59.263 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.263 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:59.522 Malloc1 00:20:59.522 [2024-11-26 21:01:50.237107] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:59.522 Malloc2 00:20:59.522 Malloc3 00:20:59.522 Malloc4 00:20:59.522 Malloc5 00:20:59.522 Malloc6 00:20:59.781 Malloc7 00:20:59.781 Malloc8 00:20:59.781 Malloc9 
00:20:59.781 Malloc10 00:20:59.781 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.781 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:59.781 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:59.781 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:59.781 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=4020374 00:20:59.781 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 4020374 /var/tmp/bdevperf.sock 00:20:59.781 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 4020374 ']' 00:20:59.781 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:59.781 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:59.781 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:59.781 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:59.781 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:20:59.781 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:20:59.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:59.781 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:20:59.781 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:59.781 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:59.781 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:59.781 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:59.781 { 00:20:59.781 "params": { 00:20:59.781 "name": "Nvme$subsystem", 00:20:59.781 "trtype": "$TEST_TRANSPORT", 00:20:59.781 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:59.781 "adrfam": "ipv4", 00:20:59.781 "trsvcid": "$NVMF_PORT", 00:20:59.781 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:59.781 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:59.781 "hdgst": ${hdgst:-false}, 00:20:59.781 "ddgst": ${ddgst:-false} 00:20:59.781 }, 00:20:59.781 "method": "bdev_nvme_attach_controller" 00:20:59.781 } 00:20:59.781 EOF 00:20:59.781 )") 00:20:59.781 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:59.781 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:59.781 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:59.781 { 00:20:59.781 "params": { 00:20:59.781 "name": "Nvme$subsystem", 00:20:59.781 "trtype": "$TEST_TRANSPORT", 00:20:59.781 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:59.781 "adrfam": "ipv4", 00:20:59.781 "trsvcid": "$NVMF_PORT", 00:20:59.781 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:20:59.781 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:59.781 "hdgst": ${hdgst:-false}, 00:20:59.781 "ddgst": ${ddgst:-false} 00:20:59.781 }, 00:20:59.781 "method": "bdev_nvme_attach_controller" 00:20:59.781 } 00:20:59.781 EOF 00:20:59.781 )") 00:20:59.781 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:59.781 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:59.781 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:59.781 { 00:20:59.781 "params": { 00:20:59.781 "name": "Nvme$subsystem", 00:20:59.781 "trtype": "$TEST_TRANSPORT", 00:20:59.781 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:59.781 "adrfam": "ipv4", 00:20:59.781 "trsvcid": "$NVMF_PORT", 00:20:59.781 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:59.781 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:59.781 "hdgst": ${hdgst:-false}, 00:20:59.781 "ddgst": ${ddgst:-false} 00:20:59.781 }, 00:20:59.781 "method": "bdev_nvme_attach_controller" 00:20:59.781 } 00:20:59.781 EOF 00:20:59.781 )") 00:20:59.782 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:59.782 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:59.782 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:59.782 { 00:20:59.782 "params": { 00:20:59.782 "name": "Nvme$subsystem", 00:20:59.782 "trtype": "$TEST_TRANSPORT", 00:20:59.782 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:59.782 "adrfam": "ipv4", 00:20:59.782 "trsvcid": "$NVMF_PORT", 00:20:59.782 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:59.782 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:59.782 "hdgst": 
${hdgst:-false}, 00:20:59.782 "ddgst": ${ddgst:-false} 00:20:59.782 }, 00:20:59.782 "method": "bdev_nvme_attach_controller" 00:20:59.782 } 00:20:59.782 EOF 00:20:59.782 )") 00:20:59.782 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:59.782 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:59.782 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:59.782 { 00:20:59.782 "params": { 00:20:59.782 "name": "Nvme$subsystem", 00:20:59.782 "trtype": "$TEST_TRANSPORT", 00:20:59.782 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:59.782 "adrfam": "ipv4", 00:20:59.782 "trsvcid": "$NVMF_PORT", 00:20:59.782 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:59.782 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:59.782 "hdgst": ${hdgst:-false}, 00:20:59.782 "ddgst": ${ddgst:-false} 00:20:59.782 }, 00:20:59.782 "method": "bdev_nvme_attach_controller" 00:20:59.782 } 00:20:59.782 EOF 00:20:59.782 )") 00:20:59.782 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:59.782 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:59.782 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:59.782 { 00:20:59.782 "params": { 00:20:59.782 "name": "Nvme$subsystem", 00:20:59.782 "trtype": "$TEST_TRANSPORT", 00:20:59.782 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:59.782 "adrfam": "ipv4", 00:20:59.782 "trsvcid": "$NVMF_PORT", 00:20:59.782 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:59.782 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:59.782 "hdgst": ${hdgst:-false}, 00:20:59.782 "ddgst": ${ddgst:-false} 00:20:59.782 }, 00:20:59.782 "method": "bdev_nvme_attach_controller" 
00:20:59.782 } 00:20:59.782 EOF 00:20:59.782 )") 00:20:59.782 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:59.782 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:59.782 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:59.782 { 00:20:59.782 "params": { 00:20:59.782 "name": "Nvme$subsystem", 00:20:59.782 "trtype": "$TEST_TRANSPORT", 00:20:59.782 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:59.782 "adrfam": "ipv4", 00:20:59.782 "trsvcid": "$NVMF_PORT", 00:20:59.782 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:59.782 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:59.782 "hdgst": ${hdgst:-false}, 00:20:59.782 "ddgst": ${ddgst:-false} 00:20:59.782 }, 00:20:59.782 "method": "bdev_nvme_attach_controller" 00:20:59.782 } 00:20:59.782 EOF 00:20:59.782 )") 00:21:00.041 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:00.041 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:00.041 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:00.041 { 00:21:00.041 "params": { 00:21:00.041 "name": "Nvme$subsystem", 00:21:00.041 "trtype": "$TEST_TRANSPORT", 00:21:00.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:00.041 "adrfam": "ipv4", 00:21:00.041 "trsvcid": "$NVMF_PORT", 00:21:00.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:00.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:00.041 "hdgst": ${hdgst:-false}, 00:21:00.041 "ddgst": ${ddgst:-false} 00:21:00.041 }, 00:21:00.041 "method": "bdev_nvme_attach_controller" 00:21:00.041 } 00:21:00.041 EOF 00:21:00.041 )") 00:21:00.041 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 
-- nvmf/common.sh@582 -- # cat 00:21:00.041 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:00.041 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:00.041 { 00:21:00.041 "params": { 00:21:00.041 "name": "Nvme$subsystem", 00:21:00.041 "trtype": "$TEST_TRANSPORT", 00:21:00.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:00.041 "adrfam": "ipv4", 00:21:00.041 "trsvcid": "$NVMF_PORT", 00:21:00.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:00.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:00.041 "hdgst": ${hdgst:-false}, 00:21:00.041 "ddgst": ${ddgst:-false} 00:21:00.041 }, 00:21:00.041 "method": "bdev_nvme_attach_controller" 00:21:00.041 } 00:21:00.041 EOF 00:21:00.041 )") 00:21:00.041 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:00.041 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:00.041 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:00.041 { 00:21:00.041 "params": { 00:21:00.041 "name": "Nvme$subsystem", 00:21:00.041 "trtype": "$TEST_TRANSPORT", 00:21:00.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:00.041 "adrfam": "ipv4", 00:21:00.041 "trsvcid": "$NVMF_PORT", 00:21:00.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:00.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:00.041 "hdgst": ${hdgst:-false}, 00:21:00.041 "ddgst": ${ddgst:-false} 00:21:00.041 }, 00:21:00.041 "method": "bdev_nvme_attach_controller" 00:21:00.041 } 00:21:00.041 EOF 00:21:00.041 )") 00:21:00.041 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:00.041 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@584 -- # jq . 00:21:00.041 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:21:00.041 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:00.041 "params": { 00:21:00.041 "name": "Nvme1", 00:21:00.041 "trtype": "tcp", 00:21:00.041 "traddr": "10.0.0.2", 00:21:00.041 "adrfam": "ipv4", 00:21:00.041 "trsvcid": "4420", 00:21:00.041 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:00.041 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:00.041 "hdgst": false, 00:21:00.041 "ddgst": false 00:21:00.041 }, 00:21:00.041 "method": "bdev_nvme_attach_controller" 00:21:00.041 },{ 00:21:00.041 "params": { 00:21:00.041 "name": "Nvme2", 00:21:00.042 "trtype": "tcp", 00:21:00.042 "traddr": "10.0.0.2", 00:21:00.042 "adrfam": "ipv4", 00:21:00.042 "trsvcid": "4420", 00:21:00.042 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:00.042 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:00.042 "hdgst": false, 00:21:00.042 "ddgst": false 00:21:00.042 }, 00:21:00.042 "method": "bdev_nvme_attach_controller" 00:21:00.042 },{ 00:21:00.042 "params": { 00:21:00.042 "name": "Nvme3", 00:21:00.042 "trtype": "tcp", 00:21:00.042 "traddr": "10.0.0.2", 00:21:00.042 "adrfam": "ipv4", 00:21:00.042 "trsvcid": "4420", 00:21:00.042 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:00.042 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:00.042 "hdgst": false, 00:21:00.042 "ddgst": false 00:21:00.042 }, 00:21:00.042 "method": "bdev_nvme_attach_controller" 00:21:00.042 },{ 00:21:00.042 "params": { 00:21:00.042 "name": "Nvme4", 00:21:00.042 "trtype": "tcp", 00:21:00.042 "traddr": "10.0.0.2", 00:21:00.042 "adrfam": "ipv4", 00:21:00.042 "trsvcid": "4420", 00:21:00.042 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:00.042 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:00.042 "hdgst": false, 00:21:00.042 "ddgst": false 00:21:00.042 }, 00:21:00.042 "method": "bdev_nvme_attach_controller" 00:21:00.042 },{ 
00:21:00.042 "params": { 00:21:00.042 "name": "Nvme5", 00:21:00.042 "trtype": "tcp", 00:21:00.042 "traddr": "10.0.0.2", 00:21:00.042 "adrfam": "ipv4", 00:21:00.042 "trsvcid": "4420", 00:21:00.042 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:00.042 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:00.042 "hdgst": false, 00:21:00.042 "ddgst": false 00:21:00.042 }, 00:21:00.042 "method": "bdev_nvme_attach_controller" 00:21:00.042 },{ 00:21:00.042 "params": { 00:21:00.042 "name": "Nvme6", 00:21:00.042 "trtype": "tcp", 00:21:00.042 "traddr": "10.0.0.2", 00:21:00.042 "adrfam": "ipv4", 00:21:00.042 "trsvcid": "4420", 00:21:00.042 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:00.042 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:00.042 "hdgst": false, 00:21:00.042 "ddgst": false 00:21:00.042 }, 00:21:00.042 "method": "bdev_nvme_attach_controller" 00:21:00.042 },{ 00:21:00.042 "params": { 00:21:00.042 "name": "Nvme7", 00:21:00.042 "trtype": "tcp", 00:21:00.042 "traddr": "10.0.0.2", 00:21:00.042 "adrfam": "ipv4", 00:21:00.042 "trsvcid": "4420", 00:21:00.042 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:00.042 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:00.042 "hdgst": false, 00:21:00.042 "ddgst": false 00:21:00.042 }, 00:21:00.042 "method": "bdev_nvme_attach_controller" 00:21:00.042 },{ 00:21:00.042 "params": { 00:21:00.042 "name": "Nvme8", 00:21:00.042 "trtype": "tcp", 00:21:00.042 "traddr": "10.0.0.2", 00:21:00.042 "adrfam": "ipv4", 00:21:00.042 "trsvcid": "4420", 00:21:00.042 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:00.042 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:00.042 "hdgst": false, 00:21:00.042 "ddgst": false 00:21:00.042 }, 00:21:00.042 "method": "bdev_nvme_attach_controller" 00:21:00.042 },{ 00:21:00.042 "params": { 00:21:00.042 "name": "Nvme9", 00:21:00.042 "trtype": "tcp", 00:21:00.042 "traddr": "10.0.0.2", 00:21:00.042 "adrfam": "ipv4", 00:21:00.042 "trsvcid": "4420", 00:21:00.042 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:00.042 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:21:00.042 "hdgst": false, 00:21:00.042 "ddgst": false 00:21:00.042 }, 00:21:00.042 "method": "bdev_nvme_attach_controller" 00:21:00.042 },{ 00:21:00.042 "params": { 00:21:00.042 "name": "Nvme10", 00:21:00.042 "trtype": "tcp", 00:21:00.042 "traddr": "10.0.0.2", 00:21:00.042 "adrfam": "ipv4", 00:21:00.042 "trsvcid": "4420", 00:21:00.042 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:00.042 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:00.042 "hdgst": false, 00:21:00.042 "ddgst": false 00:21:00.042 }, 00:21:00.042 "method": "bdev_nvme_attach_controller" 00:21:00.042 }' 00:21:00.042 [2024-11-26 21:01:50.740848] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:21:00.042 [2024-11-26 21:01:50.740926] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4020374 ] 00:21:00.042 [2024-11-26 21:01:50.813317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.042 [2024-11-26 21:01:50.872632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:01.942 Running I/O for 10 seconds... 
00:21:01.942 21:01:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:21:01.942 21:01:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0
00:21:01.942 21:01:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:21:01.942 21:01:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:01.942 21:01:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:21:02.202 21:01:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:02.202 21:01:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:21:02.202 21:01:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1
00:21:02.202 21:01:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:21:02.202 21:01:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']'
00:21:02.202 21:01:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1
00:21:02.202 21:01:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i
00:21:02.202 21:01:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 ))
00:21:02.202 21:01:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 ))
00:21:02.202 21:01:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:21:02.202 21:01:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:02.202 21:01:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops'
00:21:02.202 21:01:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:21:02.202 21:01:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:02.202 21:01:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3
00:21:02.202 21:01:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']'
00:21:02.202 21:01:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25
00:21:02.462 21:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- ))
00:21:02.462 21:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 ))
00:21:02.462 21:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:21:02.462 21:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops'
00:21:02.462 21:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:02.462 21:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:21:02.462 21:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:02.462 21:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67
00:21:02.462 21:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']'
00:21:02.462 21:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25
00:21:02.739 21:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- ))
00:21:02.739 21:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 ))
00:21:02.739 21:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:21:02.739 21:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops'
00:21:02.739 21:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:02.739 21:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:21:02.739 21:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:02.739 21:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=141
00:21:02.739 21:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 141 -ge 100 ']'
00:21:02.739 21:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0
00:21:02.739 21:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break
00:21:02.739 21:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0
00:21:02.739 21:01:53
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 4020194
00:21:02.739 21:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 4020194 ']'
00:21:02.739 21:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 4020194
00:21:02.739 21:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname
00:21:02.739 21:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:02.739 21:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4020194
00:21:02.739 21:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:21:02.739 21:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:21:02.739 21:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4020194'
00:21:02.739 killing process with pid 4020194
00:21:02.739 21:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 4020194
00:21:02.739 21:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 4020194
00:21:02.739 [2024-11-26 21:01:53.558519] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90bce0 is same with the state(6) to be set
00:21:02.740 [... previous message repeated ~63 more times for tqpair=0x90bce0, timestamps 21:01:53.558668 through 21:01:53.559560 ...]
00:21:02.740 [2024-11-26 21:01:53.561301] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb552c0 is same with the state(6) to be set
00:21:02.741 [... previous message repeated ~54 more times for tqpair=0xb552c0, timestamps 21:01:53.561347 through 21:01:53.562146 ...]
00:21:02.741 [2024-11-26 21:01:53.563870] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90c1b0 is same with the state(6) to be set
00:21:02.742 [... previous message repeated ~62 more times for tqpair=0x90c1b0, timestamps 21:01:53.563895 through 21:01:53.564734 ...]
00:21:02.742 [2024-11-26 21:01:53.567834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.742 [2024-11-26 21:01:53.567887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.742 [2024-11-26 21:01:53.567930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.742 [2024-11-26 21:01:53.567958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.742 [2024-11-26 21:01:53.567990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.742 [2024-11-26 21:01:53.568015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.742 [2024-11-26 21:01:53.568045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.742 [2024-11-26 21:01:53.568069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.742 [2024-11-26 21:01:53.568097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.742 [2024-11-26 21:01:53.568122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.742 [2024-11-26 21:01:53.568150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.742 [2024-11-26 21:01:53.568176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.742 [2024-11-26 21:01:53.568174] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90cb70 is same with the state(6) to be set
00:21:02.742 [2024-11-26 21:01:53.568215] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90cb70 is same with the state(6) to be set
00:21:02.742 [2024-11-26 21:01:53.568217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.742 [2024-11-26 21:01:53.568230]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90cb70 is same with the state(6) to be set 00:21:02.742 [2024-11-26 21:01:53.568243] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90cb70 is same with the state(6) to be set 00:21:02.742 [2024-11-26 21:01:53.568243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.742 [2024-11-26 21:01:53.568255] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90cb70 is same with the state(6) to be set 00:21:02.742 [2024-11-26 21:01:53.568268] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90cb70 is same with the state(6) to be set 00:21:02.742 [2024-11-26 21:01:53.568273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:1[2024-11-26 21:01:53.568280] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90cb70 is same with t28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.742 he state(6) to be set 00:21:02.742 [2024-11-26 21:01:53.568295] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90cb70 is same with the state(6) to be set 00:21:02.742 [2024-11-26 21:01:53.568299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.742 [2024-11-26 21:01:53.568307] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90cb70 is same with the state(6) to be set 00:21:02.742 [2024-11-26 21:01:53.568321] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90cb70 is same with the state(6) to be set 00:21:02.742 [2024-11-26 21:01:53.568332] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90cb70 is same with the state(6) to be set 00:21:02.742 [2024-11-26 21:01:53.568328] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.742 [2024-11-26 21:01:53.568345] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90cb70 is same with the state(6) to be set 00:21:02.742 [2024-11-26 21:01:53.568357] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90cb70 is same with the state(6) to be set 00:21:02.742 [2024-11-26 21:01:53.568355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.742 [2024-11-26 21:01:53.568370] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90cb70 is same with the state(6) to be set 00:21:02.743 [2024-11-26 21:01:53.568383] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90cb70 is same with the state(6) to be set 00:21:02.743 [2024-11-26 21:01:53.568384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.743 [2024-11-26 21:01:53.568410] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90cb70 is same with the state(6) to be set 00:21:02.743 [2024-11-26 21:01:53.568422] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90cb70 is same with the state(6) to be set 00:21:02.743 [2024-11-26 21:01:53.568423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.743 [2024-11-26 21:01:53.568433] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90cb70 is same with the state(6) to be set 00:21:02.743 [2024-11-26 21:01:53.568445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90cb70 is same with the state(6) to be set 00:21:02.743 [2024-11-26 
21:01:53.568451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.743 [2024-11-26 21:01:53.568466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90cb70 is same with the state(6) to be set 00:21:02.743 [2024-11-26 21:01:53.568479] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90cb70 is same with t[2024-11-26 21:01:53.568476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 che state(6) to be set 00:21:02.743 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.743 [2024-11-26 21:01:53.568494] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90cb70 is same with the state(6) to be set 00:21:02.743 [2024-11-26 21:01:53.568506] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90cb70 is same with the state(6) to be set 00:21:02.743 [2024-11-26 21:01:53.568505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.743 [2024-11-26 21:01:53.568518] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90cb70 is same with the state(6) to be set 00:21:02.743 [2024-11-26 21:01:53.568531] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90cb70 is same with the state(6) to be set 00:21:02.743 [2024-11-26 21:01:53.568530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.743 [2024-11-26 21:01:53.568542] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90cb70 is same with the state(6) to be set 00:21:02.743 [2024-11-26 21:01:53.568554] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90cb70 is same with the state(6) to be set 00:21:02.743 
[2024-11-26 21:01:53.568558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:1[2024-11-26 21:01:53.568566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90cb70 is same with t28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.743 he state(6) to be set 00:21:02.743 [2024-11-26 21:01:53.568579] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90cb70 is same with the state(6) to be set 00:21:02.743 [2024-11-26 21:01:53.568583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.743 [2024-11-26 21:01:53.568591] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90cb70 is same with the state(6) to be set 00:21:02.743 [2024-11-26 21:01:53.568604] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90cb70 is same with the state(6) to be set 00:21:02.743 [2024-11-26 21:01:53.568616] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90cb70 is same with t[2024-11-26 21:01:53.568611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:1he state(6) to be set 00:21:02.743 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.743 [2024-11-26 21:01:53.568631] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90cb70 is same with the state(6) to be set 00:21:02.743 [2024-11-26 21:01:53.568637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-11-26 21:01:53.568644] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90cb70 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.743 he state(6) to be set 00:21:02.743 [2024-11-26 21:01:53.568659] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90cb70 is same with the state(6) to be set 
00:21:02.743 [2024-11-26 21:01:53.568667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:1[2024-11-26 21:01:53.568681] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90cb70 is same with t28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.743 he state(6) to be set 00:21:02.743 [2024-11-26 21:01:53.568727] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90cb70 is same with the state(6) to be set 00:21:02.743 [2024-11-26 21:01:53.568731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.743 [2024-11-26 21:01:53.568740] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90cb70 is same with the state(6) to be set 00:21:02.743 [2024-11-26 21:01:53.568754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90cb70 is same with the state(6) to be set 00:21:02.743 [2024-11-26 21:01:53.568766] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90cb70 is same with t[2024-11-26 21:01:53.568761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:1he state(6) to be set 00:21:02.743 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.743 [2024-11-26 21:01:53.568782] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90cb70 is same with the state(6) to be set 00:21:02.743 [2024-11-26 21:01:53.568789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-11-26 21:01:53.568794] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90cb70 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.743 he state(6) to be set 00:21:02.743 [2024-11-26 21:01:53.568810] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90cb70 is same with the 
state(6) to be set 00:21:02.743 [2024-11-26 21:01:53.568822] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90cb70 is same with the state(6) to be set 00:21:02.743 [2024-11-26 21:01:53.568819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.743 [2024-11-26 21:01:53.568835] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90cb70 is same with the state(6) to be set 00:21:02.743 [2024-11-26 21:01:53.568847] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90cb70 is same with the state(6) to be set 00:21:02.743 [2024-11-26 21:01:53.568846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.743 [2024-11-26 21:01:53.568860] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90cb70 is same with the state(6) to be set 00:21:02.743 [2024-11-26 21:01:53.568872] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90cb70 is same with the state(6) to be set 00:21:02.743 [2024-11-26 21:01:53.568875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:1[2024-11-26 21:01:53.568884] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90cb70 is same with t28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.743 he state(6) to be set 00:21:02.743 [2024-11-26 21:01:53.568898] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90cb70 is same with the state(6) to be set 00:21:02.743 [2024-11-26 21:01:53.568902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.743 [2024-11-26 21:01:53.568910] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90cb70 is 
same with the state(6) to be set 00:21:02.743 [2024-11-26 21:01:53.568924] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90cb70 is same with the state(6) to be set 00:21:02.743 [2024-11-26 21:01:53.568935] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90cb70 is same with t[2024-11-26 21:01:53.568931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:1he state(6) to be set 00:21:02.743 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.743 [2024-11-26 21:01:53.568951] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90cb70 is same with the state(6) to be set 00:21:02.743 [2024-11-26 21:01:53.568963] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90cb70 is same with t[2024-11-26 21:01:53.568958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 che state(6) to be set 00:21:02.743 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.743 [2024-11-26 21:01:53.568992] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90cb70 is same with the state(6) to be set 00:21:02.743 [2024-11-26 21:01:53.569013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:1[2024-11-26 21:01:53.569020] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90cb70 is same with t28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.743 he state(6) to be set 00:21:02.743 [2024-11-26 21:01:53.569036] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90cb70 is same with the state(6) to be set 00:21:02.744 [2024-11-26 21:01:53.569041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.744 [2024-11-26 21:01:53.569056] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x90cb70 is same with the state(6) to be set 00:21:02.744 [2024-11-26 21:01:53.569069] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90cb70 is same with the state(6) to be set 00:21:02.744 [2024-11-26 21:01:53.569067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.744 [2024-11-26 21:01:53.569081] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90cb70 is same with the state(6) to be set 00:21:02.744 [2024-11-26 21:01:53.569093] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90cb70 is same with the state(6) to be set 00:21:02.744 [2024-11-26 21:01:53.569093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.744 [2024-11-26 21:01:53.569104] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90cb70 is same with the state(6) to be set 00:21:02.744 [2024-11-26 21:01:53.569120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.744 [2024-11-26 21:01:53.569146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.744 [2024-11-26 21:01:53.569171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.744 [2024-11-26 21:01:53.569195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.744 [2024-11-26 21:01:53.569220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.744 [2024-11-26 21:01:53.569244] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.744 [2024-11-26 21:01:53.569270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.744 [2024-11-26 21:01:53.569294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.744 [2024-11-26 21:01:53.569319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.744 [2024-11-26 21:01:53.569343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.744 [2024-11-26 21:01:53.569369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.744 [2024-11-26 21:01:53.569392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.744 [2024-11-26 21:01:53.569424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.744 [2024-11-26 21:01:53.569449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.744 [2024-11-26 21:01:53.569475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.744 [2024-11-26 21:01:53.569499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.744 [2024-11-26 21:01:53.569526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.744 [2024-11-26 21:01:53.569550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.744 [2024-11-26 21:01:53.569577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.744 [2024-11-26 21:01:53.569598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.744 [2024-11-26 21:01:53.569625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.744 [2024-11-26 21:01:53.569647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.744 [2024-11-26 21:01:53.569710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.744 [2024-11-26 21:01:53.569739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.744 [2024-11-26 21:01:53.569768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.744 [2024-11-26 21:01:53.569792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.744 [2024-11-26 21:01:53.569821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.744 [2024-11-26 21:01:53.569844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.744 [2024-11-26 21:01:53.569872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.744 [2024-11-26 21:01:53.569896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.744 [2024-11-26 21:01:53.569924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.744 [2024-11-26 21:01:53.569948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.744 [2024-11-26 21:01:53.569976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.744 [2024-11-26 21:01:53.570011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.744 [2024-11-26 21:01:53.570028] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d040 is same with the state(6) to be set 00:21:02.744 [2024-11-26 21:01:53.570040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.744 [2024-11-26 21:01:53.570056] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d040 is same with the state(6) to be set 00:21:02.744 [2024-11-26 21:01:53.570064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-11-26 21:01:53.570071] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d040 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.744 he state(6) to be set 00:21:02.744 [2024-11-26 21:01:53.570090] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d040 is same with the state(6) to be set 00:21:02.744 [2024-11-26 21:01:53.570102] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d040 is same with the state(6) to be set 00:21:02.744 [2024-11-26 21:01:53.570100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.744 [2024-11-26 21:01:53.570115] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d040 is same with the state(6) to be set 00:21:02.744 [2024-11-26 21:01:53.570127] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d040 is same with the state(6) to be set 00:21:02.744 [2024-11-26 21:01:53.570126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.744 [2024-11-26 21:01:53.570139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d040 is same with the state(6) to be set 00:21:02.744 [2024-11-26 21:01:53.570151] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d040 is same with the state(6) to be set 00:21:02.744 [2024-11-26 21:01:53.570156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128[2024-11-26 21:01:53.570163] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d040 is same with t SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.744 he state(6) to be set 00:21:02.744 [2024-11-26 21:01:53.570178] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d040 is same with the state(6) to be set 00:21:02.744 [2024-11-26 21:01:53.570182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-11-26 21:01:53.570190] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d040 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.744 he state(6) to be set 00:21:02.744 [2024-11-26 21:01:53.570203] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d040 is same with the state(6) to be set 00:21:02.744 [2024-11-26 21:01:53.570215] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d040 is same with the state(6) to be set 00:21:02.744 [2024-11-26 21:01:53.570212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.744 [2024-11-26 21:01:53.570227] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d040 is same with the state(6) to be set 00:21:02.745 [2024-11-26 21:01:53.570252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-11-26 21:01:53.570258] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d040 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.745 he state(6) to be set 00:21:02.745 [2024-11-26 21:01:53.570272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d040 is same with the state(6) to be set 00:21:02.745 [2024-11-26 21:01:53.570283] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d040 is same with the state(6) to be set 00:21:02.745 [2024-11-26 21:01:53.570281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.745 [2024-11-26 21:01:53.570295] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d040 is same with the state(6) to be set 00:21:02.745 [2024-11-26 21:01:53.570307] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d040 is same with the state(6) to be 
set 00:21:02.745
[2024-11-26 21:01:53.570305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.745
[2024-11-26 21:01:53.570319] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d040 is same with the state(6) to be set 00:21:02.745
[2024-11-26 21:01:53.570332] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d040 is same with the state(6) to be set 00:21:02.745
[2024-11-26 21:01:53.570343] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d040 is same with the state(6) to be set 00:21:02.745
[2024-11-26 21:01:53.570346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.745
[2024-11-26 21:01:53.570355] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d040 is same with the state(6) to be set 00:21:02.745
[2024-11-26 21:01:53.570369] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d040 is same with the state(6) to be set 00:21:02.745
[2024-11-26 21:01:53.570374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.745
[2024-11-26 21:01:53.570380] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d040 is same with the state(6) to be set 00:21:02.745
[2024-11-26 21:01:53.570394] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d040 is same with the state(6) to be set 00:21:02.745
[2024-11-26 21:01:53.570405] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d040 is same with the state(6) to be set 00:21:02.745
[2024-11-26 21:01:53.570401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.745
[2024-11-26 21:01:53.570417] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d040 is same with the state(6) to be set 00:21:02.745
[2024-11-26 21:01:53.570429] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d040 is same with the state(6) to be set 00:21:02.745
[2024-11-26 21:01:53.570427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.745
[2024-11-26 21:01:53.570441] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d040 is same with the state(6) to be set 00:21:02.745
[2024-11-26 21:01:53.570452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d040 is same with the state(6) to be set 00:21:02.745
[2024-11-26 21:01:53.570454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.745
[2024-11-26 21:01:53.570464] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d040 is same with the state(6) to be set 00:21:02.745
[2024-11-26 21:01:53.570477] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d040 is same with the state(6) to be set 00:21:02.745
[2024-11-26 21:01:53.570479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.745
[2024-11-26 21:01:53.570489] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d040 is same with the state(6) to be set 00:21:02.745
[2024-11-26 21:01:53.570501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d040 is same with the state(6) to be set 00:21:02.745
[2024-11-26 21:01:53.570505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.745
[2024-11-26 21:01:53.570513] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d040 is same with the state(6) to be set 00:21:02.745
[2024-11-26 21:01:53.570527] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d040 is same with the state(6) to be set 00:21:02.745
[2024-11-26 21:01:53.570531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.745
[2024-11-26 21:01:53.570539] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d040 is same with the state(6) to be set 00:21:02.745
[2024-11-26 21:01:53.570557] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d040 is same with the state(6) to be set 00:21:02.745
[2024-11-26 21:01:53.570558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.745
[2024-11-26 21:01:53.570569] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d040 is same with the state(6) to be set 00:21:02.745
[2024-11-26 21:01:53.570581] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d040 is same with the state(6) to be set 00:21:02.745
[2024-11-26 21:01:53.570582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.745
[2024-11-26 21:01:53.570593] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d040 is same with the state(6) to be set 00:21:02.745
[2024-11-26 21:01:53.570605] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d040 is same with the state(6) to be set 00:21:02.745
[2024-11-26 21:01:53.570608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.745
[2024-11-26 21:01:53.570616] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d040 is same with the state(6) to be set 00:21:02.745
[2024-11-26 21:01:53.570630] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d040 is same with the state(6) to be set 00:21:02.745
[2024-11-26 21:01:53.570635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.745
[2024-11-26 21:01:53.570641] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d040 is same with the state(6) to be set 00:21:02.745
[2024-11-26 21:01:53.570656] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d040 is same with the state(6) to be set 00:21:02.746
[2024-11-26 21:01:53.570663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.746
[2024-11-26 21:01:53.570679] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d040 is same with the state(6) to be set 00:21:02.746
[2024-11-26 21:01:53.570719] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d040 is same with the state(6) to be set 00:21:02.746
[2024-11-26 21:01:53.570721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.746
[2024-11-26 21:01:53.570732] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d040 is same with the state(6) to be set 00:21:02.746
[2024-11-26 21:01:53.570745] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d040 is same with the state(6) to be set 00:21:02.746
[2024-11-26 21:01:53.570750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.746
[2024-11-26 21:01:53.570757] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d040 is same with the state(6) to be set 00:21:02.746
[2024-11-26 21:01:53.570773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d040 is same with the state(6) to be set 00:21:02.746
[2024-11-26 21:01:53.570777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.746
[2024-11-26 21:01:53.570784] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d040 is same with the state(6) to be set 00:21:02.746
[2024-11-26 21:01:53.570798] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d040 is same with the state(6) to be set 00:21:02.746
[2024-11-26 21:01:53.570810] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d040 is same with the state(6) to be set 00:21:02.746
[2024-11-26 21:01:53.570807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.746
[2024-11-26 21:01:53.570829] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d040 is same with the state(6) to be set 00:21:02.746
[2024-11-26 21:01:53.570836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.746
[2024-11-26 21:01:53.570842] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d040 is same with the state(6) to be set 00:21:02.746
[2024-11-26 21:01:53.570857] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d040 is same with the state(6) to be set 00:21:02.746
[2024-11-26 21:01:53.570869] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x90d040 is same with the state(6) to be set 00:21:02.746 [2024-11-26 21:01:53.570867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.746 [2024-11-26 21:01:53.570882] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d040 is same with the state(6) to be set 00:21:02.746 [2024-11-26 21:01:53.570894] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d040 is same with the state(6) to be set 00:21:02.746 [2024-11-26 21:01:53.570893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.746 [2024-11-26 21:01:53.570922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.746 [2024-11-26 21:01:53.570947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.746 [2024-11-26 21:01:53.570972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.746 [2024-11-26 21:01:53.571016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.746 [2024-11-26 21:01:53.571048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.746 [2024-11-26 21:01:53.571088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.746 [2024-11-26 21:01:53.571114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.746 [2024-11-26 
21:01:53.571139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.746 [2024-11-26 21:01:53.571166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.746 [2024-11-26 21:01:53.571190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.746 [2024-11-26 21:01:53.571218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.746 [2024-11-26 21:01:53.571242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.746 [2024-11-26 21:01:53.571271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.746 [2024-11-26 21:01:53.571294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.746 [2024-11-26 21:01:53.571321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.746 [2024-11-26 21:01:53.571345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.746 [2024-11-26 21:01:53.571383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.746 [2024-11-26 21:01:53.571407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.746 [2024-11-26 21:01:53.571434] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.746 [2024-11-26 21:01:53.571458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.746 [2024-11-26 21:01:53.571485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.746 [2024-11-26 21:01:53.571509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.746 [2024-11-26 21:01:53.572201] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d510 is same with the state(6) to be set 00:21:02.746 [2024-11-26 21:01:53.572228] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d510 is same with the state(6) to be set 00:21:02.746 [2024-11-26 21:01:53.572241] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d510 is same with the state(6) to be set 00:21:02.746 [2024-11-26 21:01:53.572253] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d510 is same with the state(6) to be set 00:21:02.746 [2024-11-26 21:01:53.572267] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d510 is same with the state(6) to be set 00:21:02.746 [2024-11-26 21:01:53.572279] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d510 is same with the state(6) to be set 00:21:02.746 [2024-11-26 21:01:53.572291] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d510 is same with the state(6) to be set 00:21:02.746 [2024-11-26 21:01:53.572303] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d510 is same with the state(6) to be set 00:21:02.746 [2024-11-26 21:01:53.572297] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.746
[2024-11-26 21:01:53.572316] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d510 is same with the state(6) to be set 00:21:02.746
[2024-11-26 21:01:53.572328] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d510 is same with the state(6) to be set 00:21:02.746
[2024-11-26 21:01:53.572332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.746
[2024-11-26 21:01:53.572340] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d510 is same with the state(6) to be set 00:21:02.746
[2024-11-26 21:01:53.572354] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d510 is same with the state(6) to be set 00:21:02.746
[2024-11-26 21:01:53.572359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.746
[2024-11-26 21:01:53.572366] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d510 is same with the state(6) to be set 00:21:02.747
[2024-11-26 21:01:53.572382] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d510 is same with the state(6) to be set 00:21:02.747
[2024-11-26 21:01:53.572387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.747
[2024-11-26 21:01:53.572394] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d510 is same with the state(6) to be set 00:21:02.747
[2024-11-26 21:01:53.572408] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d510 is same with the state(6) to be set 00:21:02.747
[2024-11-26 21:01:53.572413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.747
[2024-11-26 21:01:53.572437] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d510 is same with the state(6) to be set 00:21:02.747
[2024-11-26 21:01:53.572443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.747
[2024-11-26 21:01:53.572451] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d510 is same with the state(6) to be set 00:21:02.747
[2024-11-26 21:01:53.572464] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d510 is same with the state(6) to be set 00:21:02.747
[2024-11-26 21:01:53.572468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.747
[2024-11-26 21:01:53.572476] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d510 is same with the state(6) to be set 00:21:02.747
[2024-11-26 21:01:53.572490] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d510 is same with the state(6) to be set 00:21:02.747
[2024-11-26 21:01:53.572496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.747
[2024-11-26 21:01:53.572502] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d510 is same with the state(6) to be set 00:21:02.747
[2024-11-26 21:01:53.572517] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d510 is same with the state(6) to be set 00:21:02.747
[2024-11-26 21:01:53.572521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13bd9e0 is same with the state(6) to be set 00:21:02.747
[2024-11-26 21:01:53.572529] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d510 is same with the state(6) to be set 00:21:02.747
[2024-11-26 21:01:53.572543] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d510 is same with the state(6) to be set 00:21:02.747
[2024-11-26 21:01:53.572554] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d510 is same with the state(6) to be set 00:21:02.747
[2024-11-26 21:01:53.572566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d510 is same with the state(6) to be set 00:21:02.747
[2024-11-26 21:01:53.572581] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d510 is same with the state(6) to be set 00:21:02.747
[2024-11-26 21:01:53.572593] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d510 is same with the state(6) to be set 00:21:02.747
[2024-11-26 21:01:53.572605] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d510 is same with the state(6) to be set 00:21:02.747
[2024-11-26 21:01:53.572616] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d510 is same with the state(6) to be set 00:21:02.747
[2024-11-26 21:01:53.572628] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d510 is same with the state(6) to be set 00:21:02.747
[2024-11-26 21:01:53.572624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.747
[2024-11-26 21:01:53.572640] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d510 is same with the state(6) to be set 00:21:02.747
[2024-11-26 21:01:53.572655] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d510 is same with the state(6) to be set 00:21:02.747
[2024-11-26 21:01:53.572655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.747
[2024-11-26 21:01:53.572668] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d510 is same with the state(6) to be set 00:21:02.747
[2024-11-26 21:01:53.572682] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d510 is same with the state(6) to be set 00:21:02.747
[2024-11-26 21:01:53.572692] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.747
[2024-11-26 21:01:53.572707] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d510 is same with the state(6) to be set 00:21:02.747
[2024-11-26 21:01:53.572722] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d510 is same with the state(6) to be set 00:21:02.747
[2024-11-26 21:01:53.572725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.747
[2024-11-26 21:01:53.572734] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d510 is same with the state(6) to be set 00:21:02.747
[2024-11-26 21:01:53.572749] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d510 is same with the state(6) to be set 00:21:02.747
[2024-11-26 21:01:53.572754] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.747
[2024-11-26 21:01:53.572761] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d510 is same with the state(6) to be set 00:21:02.747
[2024-11-26 21:01:53.572775] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d510 is same with the state(6) to be set 00:21:02.747
[2024-11-26 21:01:53.572779] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.747 [2024-11-26 21:01:53.572788] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d510 is same with the state(6) to be set 00:21:02.747 [2024-11-26 21:01:53.572801] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d510 is same with the state(6) to be set 00:21:02.747 [2024-11-26 21:01:53.572805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.747 [2024-11-26 21:01:53.572814] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d510 is same with the state(6) to be set 00:21:02.747 [2024-11-26 21:01:53.572826] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d510 is same with the state(6) to be set 00:21:02.747 [2024-11-26 21:01:53.572829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.747 [2024-11-26 21:01:53.572838] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d510 is same with the state(6) to be set 00:21:02.747 [2024-11-26 21:01:53.572851] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d510 is same with the state(6) to be set 00:21:02.747 [2024-11-26 21:01:53.572853] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13bdfc0 is same with the state(6) to be set 00:21:02.747 [2024-11-26 21:01:53.572864] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d510 is same with the state(6) to be set 00:21:02.747 [2024-11-26 21:01:53.572877] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d510 is same with the state(6) to be set 00:21:02.747 [2024-11-26 21:01:53.572889] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d510 is same with the state(6) to be set 00:21:02.747 [2024-11-26 21:01:53.572901] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d510 is same with the state(6) to be set 00:21:02.747 [2024-11-26 21:01:53.572913] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d510 is same with the state(6) to be set 00:21:02.747 [2024-11-26 21:01:53.572926] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d510 is same with the state(6) to be set 00:21:02.747 [2024-11-26 21:01:53.572938] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d510 is same with the state(6) to be set 00:21:02.747 [2024-11-26 21:01:53.572950] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d510 is same with the state(6) to be set 00:21:02.747 [2024-11-26 21:01:53.572953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.747 [2024-11-26 21:01:53.572966] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d510 is same with the state(6) to be set 00:21:02.747 [2024-11-26 21:01:53.572983] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d510 is same with the state(6) to be set 00:21:02.747 [2024-11-26 21:01:53.572986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.747 [2024-11-26 21:01:53.572995] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d510 is same with the state(6) to be set 00:21:02.747 [2024-11-26 21:01:53.573008] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d510 is same with the state(6) to be set 00:21:02.748 [2024-11-26 21:01:53.573013] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.748
[2024-11-26 21:01:53.573021] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d510 is same with the state(6) to be set 00:21:02.748
[2024-11-26 21:01:53.573036] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d510 is same with the state(6) to be set 00:21:02.748
[2024-11-26 21:01:53.573048] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90d510 is same with the state(6) to be set 00:21:02.748
[2024-11-26 21:01:53.573049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.748
[2024-11-26 21:01:53.573074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.748
[2024-11-26 21:01:53.573098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.748
[2024-11-26 21:01:53.573121] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.748
[2024-11-26 21:01:53.573145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.748
[2024-11-26 21:01:53.573166] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f03e0 is same with the state(6) to be set 00:21:02.748
[2024-11-26 21:01:53.573251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.748
[2024-11-26 21:01:53.573279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:02.748 [2024-11-26 21:01:53.573304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.748 [2024-11-26 21:01:53.573328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.748 [2024-11-26 21:01:53.573351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.748 [2024-11-26 21:01:53.573376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.748 [2024-11-26 21:01:53.573399] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.748 [2024-11-26 21:01:53.573423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.748 [2024-11-26 21:01:53.573445] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf92270 is same with the state(6) to be set 00:21:02.748 [2024-11-26 21:01:53.573504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.748 [2024-11-26 21:01:53.573541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.748 [2024-11-26 21:01:53.573566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.748 [2024-11-26 21:01:53.573590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.748 [2024-11-26 21:01:53.573613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.748 [2024-11-26 21:01:53.573636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.748 [2024-11-26 21:01:53.573660] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.748 [2024-11-26 21:01:53.573683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.748 [2024-11-26 21:01:53.573723] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf866f0 is same with the state(6) to be set 00:21:02.748 [2024-11-26 21:01:53.573789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.748 [2024-11-26 21:01:53.573862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.748 [2024-11-26 21:01:53.573892] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.748 [2024-11-26 21:01:53.573915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.748 [2024-11-26 21:01:53.573940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.748 [2024-11-26 21:01:53.573963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.748 [2024-11-26 21:01:53.573995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.748 
[2024-11-26 21:01:53.574019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.748 [2024-11-26 21:01:53.574040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf92700 is same with the state(6) to be set 00:21:02.748 [2024-11-26 21:01:53.574181] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:02.748 [2024-11-26 21:01:53.574469] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7d690 is same with the state(6) to be set 00:21:02.748 [2024-11-26 21:01:53.574502] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7d690 is same with the state(6) to be set 00:21:02.748 [2024-11-26 21:01:53.574517] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7d690 is same with the state(6) to be set 00:21:02.748 [2024-11-26 21:01:53.574530] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7d690 is same with the state(6) to be set 00:21:02.748 [2024-11-26 21:01:53.574543] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7d690 is same with the state(6) to be set 00:21:02.748 [2024-11-26 21:01:53.574555] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7d690 is same with the state(6) to be set 00:21:02.748 [2024-11-26 21:01:53.574567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7d690 is same with the state(6) to be set 00:21:02.748 [2024-11-26 21:01:53.574579] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7d690 is same with the state(6) to be set 00:21:02.748 [2024-11-26 21:01:53.574603] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7d690 is same with the state(6) to be set 00:21:02.748 [2024-11-26 21:01:53.574617] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xb7d690 is same with the state(6) to be set 00:21:02.748 [2024-11-26 21:01:53.574629] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7d690 is same with the state(6) to be set 00:21:02.748 [2024-11-26 21:01:53.574641] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7d690 is same with the state(6) to be set 00:21:02.748 [2024-11-26 21:01:53.574653] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7d690 is same with the state(6) to be set 00:21:02.748 [2024-11-26 21:01:53.574675] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7d690 is same with the state(6) to be set 00:21:02.748 [2024-11-26 21:01:53.574698] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7d690 is same with the state(6) to be set 00:21:02.748 [2024-11-26 21:01:53.574713] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7d690 is same with the state(6) to be set 00:21:02.748 [2024-11-26 21:01:53.574725] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7d690 is same with the state(6) to be set 00:21:02.748 [2024-11-26 21:01:53.574737] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7d690 is same with the state(6) to be set 00:21:02.748 [2024-11-26 21:01:53.574749] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7d690 is same with the state(6) to be set 00:21:02.748 [2024-11-26 21:01:53.574761] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7d690 is same with the state(6) to be set 00:21:02.748 [2024-11-26 21:01:53.574773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7d690 is same with the state(6) to be set 00:21:02.748 [2024-11-26 21:01:53.574785] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7d690 
is same with the state(6) to be set 00:21:02.748 [2024-11-26 21:01:53.574797] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7d690 is same with the state(6) to be set 00:21:02.748 [2024-11-26 21:01:53.574809] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7d690 is same with the state(6) to be set 00:21:02.749 [2024-11-26 21:01:53.574820] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7d690 is same with the state(6) to be set 00:21:02.749 [2024-11-26 21:01:53.574832] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7d690 is same with the state(6) to be set 00:21:02.749 [2024-11-26 21:01:53.574844] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7d690 is same with the state(6) to be set 00:21:02.749 [2024-11-26 21:01:53.574856] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7d690 is same with the state(6) to be set 00:21:02.749 [2024-11-26 21:01:53.574868] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7d690 is same with the state(6) to be set 00:21:02.749 [2024-11-26 21:01:53.574880] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7d690 is same with the state(6) to be set 00:21:02.749 [2024-11-26 21:01:53.574892] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7d690 is same with the state(6) to be set 00:21:02.749 [2024-11-26 21:01:53.574903] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7d690 is same with the state(6) to be set 00:21:02.749 [2024-11-26 21:01:53.574915] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7d690 is same with the state(6) to be set 00:21:02.749 [2024-11-26 21:01:53.574933] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7d690 is same with the state(6) to be set 
00:21:02.749 [2024-11-26 21:01:53.574945] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7d690 is same with the state(6) to be set 00:21:02.749 [2024-11-26 21:01:53.574957] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7d690 is same with the state(6) to be set 00:21:02.749 [2024-11-26 21:01:53.574989] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7d690 is same with the state(6) to be set 00:21:02.749 [2024-11-26 21:01:53.575002] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7d690 is same with the state(6) to be set 00:21:02.749 [2024-11-26 21:01:53.575013] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7d690 is same with the state(6) to be set 00:21:02.749 [2024-11-26 21:01:53.575025] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7d690 is same with the state(6) to be set 00:21:02.749 [2024-11-26 21:01:53.575036] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7d690 is same with the state(6) to be set 00:21:02.749 [2024-11-26 21:01:53.575048] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7d690 is same with the state(6) to be set 00:21:02.749 [2024-11-26 21:01:53.575060] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7d690 is same with the state(6) to be set 00:21:02.749 [2024-11-26 21:01:53.575071] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7d690 is same with the state(6) to be set 00:21:02.749 [2024-11-26 21:01:53.575083] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7d690 is same with the state(6) to be set 00:21:02.749 [2024-11-26 21:01:53.575094] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7d690 is same with the state(6) to be set 00:21:02.749 [2024-11-26 21:01:53.575106] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7d690 is same with the state(6) to be set 00:21:02.749 [2024-11-26 21:01:53.575118] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7d690 is same with the state(6) to be set 00:21:02.749 [2024-11-26 21:01:53.575129] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7d690 is same with the state(6) to be set 00:21:02.749 [2024-11-26 21:01:53.575140] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7d690 is same with the state(6) to be set 00:21:02.749 [2024-11-26 21:01:53.575152] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7d690 is same with the state(6) to be set 00:21:02.749 [2024-11-26 21:01:53.575164] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7d690 is same with the state(6) to be set 00:21:02.749 [2024-11-26 21:01:53.575176] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7d690 is same with the state(6) to be set 00:21:02.749 [2024-11-26 21:01:53.575187] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7d690 is same with the state(6) to be set 00:21:02.749 [2024-11-26 21:01:53.575199] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7d690 is same with the state(6) to be set 00:21:02.749 [2024-11-26 21:01:53.575210] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7d690 is same with the state(6) to be set 00:21:02.749 [2024-11-26 21:01:53.575222] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7d690 is same with the state(6) to be set 00:21:02.749 [2024-11-26 21:01:53.575233] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7d690 is same with the state(6) to be set 00:21:02.749 [2024-11-26 21:01:53.575244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xb7d690 is same with the state(6) to be set 00:21:02.749 [2024-11-26 21:01:53.575256] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7d690 is same with the state(6) to be set 00:21:02.749 [2024-11-26 21:01:53.575267] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7d690 is same with the state(6) to be set 00:21:02.749 [2024-11-26 21:01:53.575279] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7d690 is same with the state(6) to be set 00:21:02.749 [2024-11-26 21:01:53.575290] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7d690 is same with the state(6) to be set 00:21:02.749 [2024-11-26 21:01:53.576171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.749 [2024-11-26 21:01:53.576208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.749 [2024-11-26 21:01:53.576245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.749 [2024-11-26 21:01:53.576272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.749 [2024-11-26 21:01:53.576302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.749 [2024-11-26 21:01:53.576329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.749 [2024-11-26 21:01:53.576357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.749 [2024-11-26 
21:01:53.576383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.749 [2024-11-26 21:01:53.576410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.749 [2024-11-26 21:01:53.576435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.749 [2024-11-26 21:01:53.576439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7db60 is same with the state(6) to be set
00:21:02.749 [2024-11-26 21:01:53.576466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7db60 is same with the state(6) to be set
00:21:02.749 [2024-11-26 21:01:53.576462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.749 [2024-11-26 21:01:53.576483] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7db60 is same with the state(6) to be set
00:21:02.749 [2024-11-26 21:01:53.576491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.749 [2024-11-26 21:01:53.576496] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7db60 is same with the state(6) to be set
00:21:02.749 [2024-11-26 21:01:53.576511] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7db60 is same with the state(6) to be set
00:21:02.749 [2024-11-26 21:01:53.576523] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7db60 is same with the state(6) to be set
00:21:02.749 [2024-11-26 21:01:53.576521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.749 [2024-11-26 21:01:53.576535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7db60 is same with the state(6) to be set
00:21:02.749 [2024-11-26 21:01:53.576548] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7db60 is same with the state(6) to be set
00:21:02.749 [2024-11-26 21:01:53.576549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.749 [2024-11-26 21:01:53.576560] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7db60 is same with the state(6) to be set
00:21:02.749 [2024-11-26 21:01:53.576572] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7db60 is same with the state(6) to be set
00:21:02.750 [2024-11-26 21:01:53.576576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.750 [2024-11-26 21:01:53.576584] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7db60 is same with the state(6) to be set
00:21:02.750 [2024-11-26 21:01:53.576606] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7db60 is same with the state(6) to be set
00:21:02.750 [2024-11-26 21:01:53.576610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.750 [2024-11-26 21:01:53.576619] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7db60 is same with the state(6) to be set
00:21:02.750 [2024-11-26 21:01:53.576632] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7db60 is same with the state(6) to be set
00:21:02.750 [2024-11-26 21:01:53.576644] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7db60 is same with the state(6) to be set
00:21:02.750 [2024-11-26 21:01:53.576639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.750 [2024-11-26 21:01:53.576659] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7db60 is same with the state(6) to be set
00:21:02.750 [2024-11-26 21:01:53.576667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.750 [2024-11-26 21:01:53.576680] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7db60 is same with the state(6) to be set
00:21:02.750 [2024-11-26 21:01:53.576703] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7db60 is same with the state(6) to be set
00:21:02.750 [2024-11-26 21:01:53.576716] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7db60 is same with the state(6) to be set
00:21:02.750 [2024-11-26 21:01:53.576710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.750 [2024-11-26 21:01:53.576732] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7db60 is same with the state(6) to be set
00:21:02.750 [2024-11-26 21:01:53.576739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.750 [2024-11-26 21:01:53.576744] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7db60 is same with the state(6) to be set
00:21:02.750 [2024-11-26 21:01:53.576760] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7db60 is same with the state(6) to be set
00:21:02.750 [2024-11-26 21:01:53.576772] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7db60 is same with the state(6) to be set
00:21:02.750 [2024-11-26 21:01:53.576769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.750 [2024-11-26 21:01:53.576784] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7db60 is same with the state(6) to be set
00:21:02.750 [2024-11-26 21:01:53.576797] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7db60 is same with the state(6) to be set
00:21:02.750 [2024-11-26 21:01:53.576796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.750 [2024-11-26 21:01:53.576809] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7db60 is same with the state(6) to be set
00:21:02.750 [2024-11-26 21:01:53.576821] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7db60 is same with the state(6) to be set
00:21:02.750 [2024-11-26 21:01:53.576824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.750 [2024-11-26 21:01:53.576833] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7db60 is same with the state(6) to be set
00:21:02.750 [2024-11-26 21:01:53.576847] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7db60 is same with the state(6) to be set
00:21:02.750 [2024-11-26 21:01:53.576851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.750 [2024-11-26 21:01:53.576866] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7db60 is same with the state(6) to be set
00:21:02.750 [2024-11-26 21:01:53.576879] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7db60 is same with the state(6) to be set
00:21:02.750 [2024-11-26 21:01:53.576891] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7db60 is same with the state(6) to be set
00:21:02.750 [2024-11-26 21:01:53.576886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.750 [2024-11-26 21:01:53.576907] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7db60 is same with the state(6) to be set
00:21:02.750 [2024-11-26 21:01:53.576913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.750 [2024-11-26 21:01:53.576919] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7db60 is same with the state(6) to be set
00:21:02.750 [2024-11-26 21:01:53.576934] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7db60 is same with the state(6) to be set
00:21:02.750 [2024-11-26 21:01:53.576946] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7db60 is same with the state(6) to be set
00:21:02.750 [2024-11-26 21:01:53.576943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.750 [2024-11-26 21:01:53.576959] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7db60 is same with the state(6) to be set
00:21:02.750 [2024-11-26 21:01:53.576969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.750 [2024-11-26 21:01:53.576980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7db60 is same with the state(6) to be set
00:21:02.750 [2024-11-26 21:01:53.576993] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7db60 is same with the state(6) to be set
00:21:02.750 [2024-11-26 21:01:53.576997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.750 [2024-11-26 21:01:53.577021] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7db60 is same with the state(6) to be set
00:21:02.750 [2024-11-26 21:01:53.577036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.750 [2024-11-26 21:01:53.577045] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7db60 is same with the state(6) to be set
00:21:02.750 [2024-11-26 21:01:53.577060] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7db60 is same with the state(6) to be set
00:21:02.750 [2024-11-26 21:01:53.577072] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7db60 is same with the state(6) to be set
00:21:02.750 [2024-11-26 21:01:53.577069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.750 [2024-11-26 21:01:53.577084] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7db60 is same with the state(6) to be set
00:21:02.750 [2024-11-26 21:01:53.577096] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7db60 is same with the state(6) to be set
00:21:02.750 [2024-11-26 21:01:53.577093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.750 [2024-11-26 21:01:53.577111] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7db60 is same with the state(6) to be set
00:21:02.750 [2024-11-26 21:01:53.577123] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7db60 is same with the state(6) to be set
00:21:02.750 [2024-11-26 21:01:53.577121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.750 [2024-11-26 21:01:53.577143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7db60 is same with the state(6) to be set
00:21:02.750 [2024-11-26 21:01:53.577148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.750 [2024-11-26 21:01:53.577155] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7db60 is same with the state(6) to be set
00:21:02.750 [2024-11-26 21:01:53.577169] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7db60 is same with the state(6) to be set
00:21:02.750 [2024-11-26 21:01:53.577181] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7db60 is same with the state(6) to be set
00:21:02.750 [2024-11-26 21:01:53.577175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.750 [2024-11-26 21:01:53.577196] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7db60 is same with the state(6) to be set
00:21:02.750 [2024-11-26 21:01:53.577201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.751 [2024-11-26 21:01:53.577208] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7db60 is same with the state(6) to be set
00:21:02.751 [2024-11-26 21:01:53.577221] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7db60 is same with the state(6) to be set
00:21:02.751 [2024-11-26 21:01:53.577233] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7db60 is same with the state(6) to be set
00:21:02.751 [2024-11-26 21:01:53.577230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.751 [2024-11-26 21:01:53.577245] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7db60 is same with the state(6) to be set
00:21:02.751 [2024-11-26 21:01:53.577257] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7db60 is same with the state(6) to be set
00:21:02.751 [2024-11-26 21:01:53.577257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.751 [2024-11-26 21:01:53.577268] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7db60 is same with the state(6) to be set
00:21:02.751 [2024-11-26 21:01:53.577281] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7db60 is same with the state(6) to be set
00:21:02.751 [2024-11-26 21:01:53.577283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.751 [2024-11-26 21:01:53.577292] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7db60 is same with the state(6) to be set
00:21:02.751 [2024-11-26 21:01:53.577306] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7db60 is same with the state(6) to be set
00:21:02.751 [2024-11-26 21:01:53.577310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.751 [2024-11-26 21:01:53.577322] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7db60 is same with the state(6) to be set
00:21:02.751 [2024-11-26 21:01:53.577334]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7db60 is same with the state(6) to be set 00:21:02.751 [2024-11-26 21:01:53.577337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.751 [2024-11-26 21:01:53.577363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.751 [2024-11-26 21:01:53.577400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.751 [2024-11-26 21:01:53.577426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.751 [2024-11-26 21:01:53.577453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.751 [2024-11-26 21:01:53.577477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.751 [2024-11-26 21:01:53.577503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.751 [2024-11-26 21:01:53.577527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.751 [2024-11-26 21:01:53.577554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.751 [2024-11-26 21:01:53.577579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.751 [2024-11-26 21:01:53.577605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.751 [2024-11-26 21:01:53.577631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.751 [2024-11-26 21:01:53.577657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.751 [2024-11-26 21:01:53.577694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.751 [2024-11-26 21:01:53.577739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.751 [2024-11-26 21:01:53.577765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.751 [2024-11-26 21:01:53.577793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.751 [2024-11-26 21:01:53.577818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.751 [2024-11-26 21:01:53.577845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.751 [2024-11-26 21:01:53.577870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.751 [2024-11-26 21:01:53.577899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.751 [2024-11-26 21:01:53.577923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0
00:21:02.751 [2024-11-26 21:01:53.577952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.751 [2024-11-26 21:01:53.577977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.751 [2024-11-26 21:01:53.578012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.751 [2024-11-26 21:01:53.578038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.751 [2024-11-26 21:01:53.578066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.751 [2024-11-26 21:01:53.578091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.751 [2024-11-26 21:01:53.578101] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54dd0 is same with the state(6) to be set
00:21:02.751 [2024-11-26 21:01:53.578129] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54dd0 is same with the state(6) to be set
00:21:02.751 [2024-11-26 21:01:53.578125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.751 [2024-11-26 21:01:53.578154] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54dd0 is same with the state(6) to be set
00:21:02.751 [2024-11-26 21:01:53.578160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.751 [2024-11-26 21:01:53.578167] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54dd0 is same with the state(6) to be set
00:21:02.751 [2024-11-26 21:01:53.578182] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54dd0 is same with the state(6) to be set
00:21:02.752 [2024-11-26 21:01:53.578194] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54dd0 is same with the state(6) to be set
00:21:02.752 [2024-11-26 21:01:53.578191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.752 [2024-11-26 21:01:53.578207] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54dd0 is same with the state(6) to be set
00:21:02.752 [2024-11-26 21:01:53.578219] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54dd0 is same with the state(6) to be set
00:21:02.752 [2024-11-26 21:01:53.578218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.752 [2024-11-26 21:01:53.578231] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54dd0 is same with the state(6) to be set
00:21:02.752 [2024-11-26 21:01:53.578259] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54dd0 is same with the state(6) to be set
00:21:02.752 [2024-11-26 21:01:53.578260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.752 [2024-11-26 21:01:53.578271] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54dd0 is same with the state(6) to be set
00:21:02.752 [2024-11-26 21:01:53.578283] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54dd0 is same with the state(6) to be set
00:21:02.752 [2024-11-26 21:01:53.578286] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.752 [2024-11-26 21:01:53.578295] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54dd0 is same with the state(6) to be set 00:21:02.752 [2024-11-26 21:01:53.578307] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54dd0 is same with the state(6) to be set 00:21:02.752 [2024-11-26 21:01:53.578318] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54dd0 is same with the state(6) to be set 00:21:02.752 [2024-11-26 21:01:53.578319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.752 [2024-11-26 21:01:53.578330] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54dd0 is same with the state(6) to be set 00:21:02.752 [2024-11-26 21:01:53.578343] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54dd0 is same with the state(6) to be set 00:21:02.752 [2024-11-26 21:01:53.578345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.752 [2024-11-26 21:01:53.578355] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54dd0 is same with the state(6) to be set 00:21:02.752 [2024-11-26 21:01:53.578375] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54dd0 is same with the state(6) to be set 00:21:02.752 [2024-11-26 21:01:53.578372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.752 [2024-11-26 21:01:53.578388] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54dd0 is same with the state(6) to be set 00:21:02.752 [2024-11-26 21:01:53.578400] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54dd0 is same with the state(6) to be set 00:21:02.752 [2024-11-26 21:01:53.578399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.752 [2024-11-26 21:01:53.578413] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54dd0 is same with the state(6) to be set 00:21:02.752 [2024-11-26 21:01:53.578425] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54dd0 is same with the state(6) to be set 00:21:02.752 [2024-11-26 21:01:53.578426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.752 [2024-11-26 21:01:53.578437] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54dd0 is same with the state(6) to be set 00:21:02.752 [2024-11-26 21:01:53.578449] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54dd0 is same with the state(6) to be set 00:21:02.752 [2024-11-26 21:01:53.578452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.752 [2024-11-26 21:01:53.578461] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54dd0 is same with the state(6) to be set 00:21:02.752 [2024-11-26 21:01:53.578474] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54dd0 is same with the state(6) to be set 00:21:02.752 [2024-11-26 21:01:53.578479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.752 [2024-11-26 21:01:53.578506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.752 
[2024-11-26 21:01:53.578532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.752 [2024-11-26 21:01:53.578557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.752 [2024-11-26 21:01:53.578583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.752 [2024-11-26 21:01:53.578607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.752 [2024-11-26 21:01:53.578633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.752 [2024-11-26 21:01:53.578658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.752 [2024-11-26 21:01:53.578712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.752 [2024-11-26 21:01:53.578739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.752 [2024-11-26 21:01:53.578768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.752 [2024-11-26 21:01:53.578792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.752 [2024-11-26 21:01:53.578821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.752 [2024-11-26 21:01:53.578851] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.752 [2024-11-26 21:01:53.578880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.752 [2024-11-26 21:01:53.578904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.752 [2024-11-26 21:01:53.578933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.752 [2024-11-26 21:01:53.578957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.752 [2024-11-26 21:01:53.579011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.752 [2024-11-26 21:01:53.579034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.752 [2024-11-26 21:01:53.579062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.752 [2024-11-26 21:01:53.579092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.752 [2024-11-26 21:01:53.579121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.752 [2024-11-26 21:01:53.579144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.752 [2024-11-26 21:01:53.579172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 
nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.752 [2024-11-26 21:01:53.579196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.752 [2024-11-26 21:01:53.579225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.752 [2024-11-26 21:01:53.579249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.753 [2024-11-26 21:01:53.579276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.753 [2024-11-26 21:01:53.579299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.753 [2024-11-26 21:01:53.579326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.753 [2024-11-26 21:01:53.579350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.753 [2024-11-26 21:01:53.579375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.753 [2024-11-26 21:01:53.579400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.753 [2024-11-26 21:01:53.579426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.753 [2024-11-26 21:01:53.579451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:02.753 [2024-11-26 21:01:53.579477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.753 [2024-11-26 21:01:53.579502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.753 [2024-11-26 21:01:53.579533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.753 [2024-11-26 21:01:53.579558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.753 [2024-11-26 21:01:53.579582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.753 [2024-11-26 21:01:53.579608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.753 [2024-11-26 21:01:53.579634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.753 [2024-11-26 21:01:53.579659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.753 [2024-11-26 21:01:53.579710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.753 [2024-11-26 21:01:53.579737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.753 [2024-11-26 21:01:53.579763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.753 [2024-11-26 21:01:53.579789] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.753 [2024-11-26 21:01:53.579814] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119a330 is same with the state(6) to be set 00:21:02.753 [2024-11-26 21:01:53.579919] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:21:02.753 [2024-11-26 21:01:53.579980] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf92270 (9): Bad file descriptor 00:21:02.753 [2024-11-26 21:01:53.580063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.753 [2024-11-26 21:01:53.580091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.753 [2024-11-26 21:01:53.580126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.753 [2024-11-26 21:01:53.580153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.753 [2024-11-26 21:01:53.580180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.753 [2024-11-26 21:01:53.580206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.753 [2024-11-26 21:01:53.580231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.753 [2024-11-26 21:01:53.580257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:02.753 [2024-11-26 21:01:53.580283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.753 [2024-11-26 21:01:53.580308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.753 [2024-11-26 21:01:53.580333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.753 [2024-11-26 21:01:53.580359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.753 [2024-11-26 21:01:53.580389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.753 [2024-11-26 21:01:53.580415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.753 [2024-11-26 21:01:53.580440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.753 [2024-11-26 21:01:53.580465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.753 [2024-11-26 21:01:53.580491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.753 [2024-11-26 21:01:53.580515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.753 [2024-11-26 21:01:53.580541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.753 [2024-11-26 
21:01:53.580565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.753 [2024-11-26 21:01:53.580591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.753 [2024-11-26 21:01:53.580615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.753 [2024-11-26 21:01:53.580641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.753 [2024-11-26 21:01:53.580664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.753 [2024-11-26 21:01:53.580713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.753 [2024-11-26 21:01:53.580740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.753 [2024-11-26 21:01:53.580805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.753 [2024-11-26 21:01:53.580854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.753 [2024-11-26 21:01:53.580910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.753 [2024-11-26 21:01:53.580960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.753 [2024-11-26 21:01:53.581043] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.753 [2024-11-26 21:01:53.581088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.753 [2024-11-26 21:01:53.581142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.753 [2024-11-26 21:01:53.581192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.753 [2024-11-26 21:01:53.581248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.753 [2024-11-26 21:01:53.581295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.753 [2024-11-26 21:01:53.581351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.753 [2024-11-26 21:01:53.581401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.753 [2024-11-26 21:01:53.581456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.753 [2024-11-26 21:01:53.581508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.753 [2024-11-26 21:01:53.581563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.754 [2024-11-26 21:01:53.581611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.754 [2024-11-26 21:01:53.581666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.754 [2024-11-26 21:01:53.581741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.754 [2024-11-26 21:01:53.581799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.754 [2024-11-26 21:01:53.581850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.754 [2024-11-26 21:01:53.581908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.754 [2024-11-26 21:01:53.581958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.754 [2024-11-26 21:01:53.582031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.754 [2024-11-26 21:01:53.582080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.754 [2024-11-26 21:01:53.582139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.754 [2024-11-26 21:01:53.582187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.754 [2024-11-26 21:01:53.582243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.754 
[2024-11-26 21:01:53.582292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.754 [2024-11-26 21:01:53.582347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.754 [2024-11-26 21:01:53.582395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.754 [2024-11-26 21:01:53.582451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.754 [2024-11-26 21:01:53.582501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.754 [2024-11-26 21:01:53.582559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.754 [2024-11-26 21:01:53.582608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.754 [2024-11-26 21:01:53.582664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.754 [2024-11-26 21:01:53.590485] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54dd0 is same with the state(6) to be set 00:21:02.754 [2024-11-26 21:01:53.590521] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54dd0 is same with the state(6) to be set 00:21:02.754 [2024-11-26 21:01:53.590536] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54dd0 is same with the state(6) to be set 00:21:02.754 [2024-11-26 21:01:53.590548] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xb54dd0 is same with the state(6) to be set 00:21:02.754 [2024-11-26 21:01:53.590560] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54dd0 is same with the state(6) to be set 00:21:02.754 [2024-11-26 21:01:53.590572] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54dd0 is same with the state(6) to be set 00:21:02.754 [2024-11-26 21:01:53.590584] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54dd0 is same with the state(6) to be set 00:21:02.754 [2024-11-26 21:01:53.590596] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54dd0 is same with the state(6) to be set 00:21:02.754 [2024-11-26 21:01:53.590607] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54dd0 is same with the state(6) to be set 00:21:02.754 [2024-11-26 21:01:53.590619] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54dd0 is same with the state(6) to be set 00:21:02.754 [2024-11-26 21:01:53.590631] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54dd0 is same with the state(6) to be set 00:21:02.754 [2024-11-26 21:01:53.590642] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54dd0 is same with the state(6) to be set 00:21:02.754 [2024-11-26 21:01:53.590653] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54dd0 is same with the state(6) to be set 00:21:02.754 [2024-11-26 21:01:53.590664] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54dd0 is same with the state(6) to be set 00:21:02.754 [2024-11-26 21:01:53.590701] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54dd0 is same with the state(6) to be set 00:21:02.754 [2024-11-26 21:01:53.590714] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54dd0 is same with the state(6) 
to be set 00:21:02.754 [2024-11-26 21:01:53.590726] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54dd0 is same with the state(6) to be set 00:21:02.754 [2024-11-26 21:01:53.590747] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54dd0 is same with the state(6) to be set 00:21:02.754 [2024-11-26 21:01:53.590759] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54dd0 is same with the state(6) to be set 00:21:02.754 [2024-11-26 21:01:53.590771] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54dd0 is same with the state(6) to be set 00:21:02.754 [2024-11-26 21:01:53.590783] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54dd0 is same with the state(6) to be set 00:21:02.754 [2024-11-26 21:01:53.590794] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54dd0 is same with the state(6) to be set 00:21:02.754 [2024-11-26 21:01:53.590806] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54dd0 is same with the state(6) to be set 00:21:02.754 [2024-11-26 21:01:53.590817] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54dd0 is same with the state(6) to be set 00:21:02.754 [2024-11-26 21:01:53.590829] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54dd0 is same with the state(6) to be set 00:21:02.754 [2024-11-26 21:01:53.590841] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54dd0 is same with the state(6) to be set 00:21:02.754 [2024-11-26 21:01:53.590852] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54dd0 is same with the state(6) to be set 00:21:02.754 [2024-11-26 21:01:53.590864] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54dd0 is same with the state(6) to be set 00:21:02.754 [2024-11-26 
21:01:53.590875] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54dd0 is same with the state(6) to be set 00:21:02.754 [2024-11-26 21:01:53.590891] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54dd0 is same with the state(6) to be set 00:21:02.754 [2024-11-26 21:01:53.590903] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54dd0 is same with the state(6) to be set 00:21:02.754 [2024-11-26 21:01:53.590915] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54dd0 is same with the state(6) to be set 00:21:02.754 [2024-11-26 21:01:53.590926] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54dd0 is same with the state(6) to be set 00:21:02.754 [2024-11-26 21:01:53.590937] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54dd0 is same with the state(6) to be set 00:21:02.754 [2024-11-26 21:01:53.590949] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54dd0 is same with the state(6) to be set 00:21:02.754 [2024-11-26 21:01:53.590960] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54dd0 is same with the state(6) to be set 00:21:02.754 [2024-11-26 21:01:53.595364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.754 [2024-11-26 21:01:53.595401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.754 [2024-11-26 21:01:53.595429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.754 [2024-11-26 21:01:53.595455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.754 
[2024-11-26 21:01:53.595481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.754 [2024-11-26 21:01:53.595508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.754 [2024-11-26 21:01:53.595533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.754 [2024-11-26 21:01:53.595558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.755 [2024-11-26 21:01:53.595583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.755 [2024-11-26 21:01:53.595609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.755 [2024-11-26 21:01:53.595634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.755 [2024-11-26 21:01:53.595659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.755 [2024-11-26 21:01:53.595707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.755 [2024-11-26 21:01:53.595739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.755 [2024-11-26 21:01:53.595766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.755 [2024-11-26 21:01:53.595794] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.755 [2024-11-26 21:01:53.595820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.755 [2024-11-26 21:01:53.595848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.755 [2024-11-26 21:01:53.595879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.755 [2024-11-26 21:01:53.595908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.755 [2024-11-26 21:01:53.595933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.755 [2024-11-26 21:01:53.595961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.755 [2024-11-26 21:01:53.595985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.755 [2024-11-26 21:01:53.596013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.755 [2024-11-26 21:01:53.596036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.755 [2024-11-26 21:01:53.596065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.755 [2024-11-26 21:01:53.596088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.755 [2024-11-26 21:01:53.596117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.755 [2024-11-26 21:01:53.596140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.755 [2024-11-26 21:01:53.596169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.755 [2024-11-26 21:01:53.596193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.755 [2024-11-26 21:01:53.596222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.755 [2024-11-26 21:01:53.596246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.755 [2024-11-26 21:01:53.596275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.755 [2024-11-26 21:01:53.596299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.755 [2024-11-26 21:01:53.596328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.755 [2024-11-26 21:01:53.596352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.755 [2024-11-26 21:01:53.596381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:02.755 [2024-11-26 21:01:53.596405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.755 [2024-11-26 21:01:53.596434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.755 [2024-11-26 21:01:53.596459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.755 [2024-11-26 21:01:53.596488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.755 [2024-11-26 21:01:53.596512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.755 [2024-11-26 21:01:53.596539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.755 [2024-11-26 21:01:53.596569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.755 [2024-11-26 21:01:53.596598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.755 [2024-11-26 21:01:53.596622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.755 [2024-11-26 21:01:53.596650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.755 [2024-11-26 21:01:53.596682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.755 [2024-11-26 21:01:53.596721] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.755 [2024-11-26 21:01:53.596745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.755 [2024-11-26 21:01:53.596774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.755 [2024-11-26 21:01:53.596799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.755 [2024-11-26 21:01:53.596826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.755 [2024-11-26 21:01:53.596850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.755 [2024-11-26 21:01:53.596877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.755 [2024-11-26 21:01:53.596902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.755 [2024-11-26 21:01:53.596929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.755 [2024-11-26 21:01:53.596953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.755 [2024-11-26 21:01:53.596980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.755 [2024-11-26 21:01:53.597006] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.755 [2024-11-26 21:01:53.597033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.755 [2024-11-26 21:01:53.597059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.755 [2024-11-26 21:01:53.597086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.755 [2024-11-26 21:01:53.597110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.755 [2024-11-26 21:01:53.597136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.755 [2024-11-26 21:01:53.597160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.755 [2024-11-26 21:01:53.597187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11966a0 is same with the state(6) to be set 00:21:02.755 [2024-11-26 21:01:53.599340] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:21:02.755 [2024-11-26 21:01:53.599404] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f03e0 (9): Bad file descriptor 00:21:02.755 [2024-11-26 21:01:53.599472] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13bd9e0 (9): Bad file descriptor 00:21:02.755 [2024-11-26 21:01:53.599554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.755 [2024-11-26 21:01:53.599585] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.756 [2024-11-26 21:01:53.599611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.756 [2024-11-26 21:01:53.599635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.756 [2024-11-26 21:01:53.599659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.756 [2024-11-26 21:01:53.599683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.756 [2024-11-26 21:01:53.599720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.756 [2024-11-26 21:01:53.599746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.756 [2024-11-26 21:01:53.599767] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13ff270 is same with the state(6) to be set 00:21:02.756 [2024-11-26 21:01:53.599816] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13bdfc0 (9): Bad file descriptor 00:21:02.756 [2024-11-26 21:01:53.599891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.756 [2024-11-26 21:01:53.599922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.756 [2024-11-26 21:01:53.600002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:21:02.756 [2024-11-26 21:01:53.600029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.756 [2024-11-26 21:01:53.600052] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.756 [2024-11-26 21:01:53.600078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.756 [2024-11-26 21:01:53.600101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.756 [2024-11-26 21:01:53.600125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.756 [2024-11-26 21:01:53.600147] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xefa110 is same with the state(6) to be set 00:21:02.756 [2024-11-26 21:01:53.600217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.756 [2024-11-26 21:01:53.600247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.756 [2024-11-26 21:01:53.600272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.756 [2024-11-26 21:01:53.600297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.756 [2024-11-26 21:01:53.600320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.756 [2024-11-26 21:01:53.600350] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.756 [2024-11-26 21:01:53.600374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.756 [2024-11-26 21:01:53.600398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.756 [2024-11-26 21:01:53.600421] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f0200 is same with the state(6) to be set 00:21:02.756 [2024-11-26 21:01:53.600497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.756 [2024-11-26 21:01:53.600527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.756 [2024-11-26 21:01:53.600551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.756 [2024-11-26 21:01:53.600576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.756 [2024-11-26 21:01:53.600601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.756 [2024-11-26 21:01:53.600625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.756 [2024-11-26 21:01:53.600649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.756 [2024-11-26 21:01:53.600672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:02.756 [2024-11-26 21:01:53.600704] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13ff450 is same with the state(6) to be set 00:21:02.756 [2024-11-26 21:01:53.600753] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf866f0 (9): Bad file descriptor 00:21:02.756 [2024-11-26 21:01:53.600801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf92700 (9): Bad file descriptor 00:21:02.756 [2024-11-26 21:01:53.602312] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:02.756 [2024-11-26 21:01:53.603306] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:21:02.756 [2024-11-26 21:01:53.603535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.756 [2024-11-26 21:01:53.603575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf92270 with addr=10.0.0.2, port=4420 00:21:02.756 [2024-11-26 21:01:53.603603] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf92270 is same with the state(6) to be set 00:21:02.756 [2024-11-26 21:01:53.603785] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:02.756 [2024-11-26 21:01:53.603881] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:02.756 [2024-11-26 21:01:53.604714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.756 [2024-11-26 21:01:53.604752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f03e0 with addr=10.0.0.2, port=4420 00:21:02.756 [2024-11-26 21:01:53.604779] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f03e0 is same with the state(6) to be set 00:21:02.756 [2024-11-26 21:01:53.604913] posix.c:1054:posix_sock_create: *ERROR*: connect() 
failed, errno = 111 00:21:02.756 [2024-11-26 21:01:53.604948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf92700 with addr=10.0.0.2, port=4420 00:21:02.756 [2024-11-26 21:01:53.604976] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf92700 is same with the state(6) to be set 00:21:02.756 [2024-11-26 21:01:53.605034] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf92270 (9): Bad file descriptor 00:21:02.756 [2024-11-26 21:01:53.605554] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:02.756 [2024-11-26 21:01:53.605666] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:02.756 [2024-11-26 21:01:53.605899] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f03e0 (9): Bad file descriptor 00:21:02.756 [2024-11-26 21:01:53.605939] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf92700 (9): Bad file descriptor 00:21:02.756 [2024-11-26 21:01:53.605970] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:21:02.756 [2024-11-26 21:01:53.605995] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:21:02.756 [2024-11-26 21:01:53.606021] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:21:02.757 [2024-11-26 21:01:53.606048] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 
00:21:02.757 [2024-11-26 21:01:53.606212] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:02.757 [2024-11-26 21:01:53.606314] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:02.757 [2024-11-26 21:01:53.606368] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:21:02.757 [2024-11-26 21:01:53.606396] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:21:02.757 [2024-11-26 21:01:53.606418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:21:02.757 [2024-11-26 21:01:53.606443] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:21:02.757 [2024-11-26 21:01:53.606467] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:21:02.757 [2024-11-26 21:01:53.606489] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:21:02.757 [2024-11-26 21:01:53.606512] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:21:02.757 [2024-11-26 21:01:53.606535] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:21:02.757 [2024-11-26 21:01:53.609383] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13ff270 (9): Bad file descriptor 00:21:02.757 [2024-11-26 21:01:53.609458] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xefa110 (9): Bad file descriptor 00:21:02.757 [2024-11-26 21:01:53.609511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f0200 (9): Bad file descriptor 00:21:02.757 [2024-11-26 21:01:53.609562] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13ff450 (9): Bad file descriptor 00:21:02.757 [2024-11-26 21:01:53.609795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.757 [2024-11-26 21:01:53.609830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.757 [2024-11-26 21:01:53.609876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.757 [2024-11-26 21:01:53.609904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.757 [2024-11-26 21:01:53.609936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.757 [2024-11-26 21:01:53.609961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.757 [2024-11-26 21:01:53.610000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.757 [2024-11-26 21:01:53.610027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.757 [2024-11-26 21:01:53.610055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.757 [2024-11-26 21:01:53.610079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.757 [2024-11-26 21:01:53.610107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.757 [2024-11-26 21:01:53.610132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.757 [2024-11-26 21:01:53.610161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.757 [2024-11-26 21:01:53.610186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.757 [2024-11-26 21:01:53.610215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.757 [2024-11-26 21:01:53.610239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.757 [2024-11-26 21:01:53.610268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.757 [2024-11-26 21:01:53.610292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.757 [2024-11-26 21:01:53.610320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:02.757 [2024-11-26 21:01:53.610344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.757 [2024-11-26 21:01:53.610372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.757 [2024-11-26 21:01:53.610396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.757 [2024-11-26 21:01:53.610424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.757 [2024-11-26 21:01:53.610448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.757 [2024-11-26 21:01:53.610477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.757 [2024-11-26 21:01:53.610501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.757 [2024-11-26 21:01:53.610529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.757 [2024-11-26 21:01:53.610553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.757 [2024-11-26 21:01:53.610580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.757 [2024-11-26 21:01:53.610605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.757 [2024-11-26 21:01:53.610632] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.757 [2024-11-26 21:01:53.610662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.757 [2024-11-26 21:01:53.610699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.757 [2024-11-26 21:01:53.610725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.757 [2024-11-26 21:01:53.610753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.757 [2024-11-26 21:01:53.610779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.757 [2024-11-26 21:01:53.610807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.757 [2024-11-26 21:01:53.610834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.757 [2024-11-26 21:01:53.610860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.757 [2024-11-26 21:01:53.610887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.757 [2024-11-26 21:01:53.610913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.757 [2024-11-26 21:01:53.610939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.757 [2024-11-26 21:01:53.610966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.757 [2024-11-26 21:01:53.610992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.757 [2024-11-26 21:01:53.611020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.757 [2024-11-26 21:01:53.611045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.758 [2024-11-26 21:01:53.611072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.758 [2024-11-26 21:01:53.611098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.758 [2024-11-26 21:01:53.611124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.758 [2024-11-26 21:01:53.611151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.758 [2024-11-26 21:01:53.611177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.758 [2024-11-26 21:01:53.611202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.758 [2024-11-26 21:01:53.611229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:02.758 [2024-11-26 21:01:53.611254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.758 [2024-11-26 21:01:53.611282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.758 [2024-11-26 21:01:53.611307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.758 [2024-11-26 21:01:53.611340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.758 [2024-11-26 21:01:53.611365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.758 [2024-11-26 21:01:53.611393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.758 [2024-11-26 21:01:53.611418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.758 [2024-11-26 21:01:53.611446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.758 [2024-11-26 21:01:53.611470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.758 [2024-11-26 21:01:53.611498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.758 [2024-11-26 21:01:53.611523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.758 [2024-11-26 21:01:53.611551] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.758 [2024-11-26 21:01:53.611576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.758 [2024-11-26 21:01:53.611605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.758 [2024-11-26 21:01:53.611628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.758 [2024-11-26 21:01:53.611657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.758 [2024-11-26 21:01:53.611681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.758 [2024-11-26 21:01:53.611721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.758 [2024-11-26 21:01:53.611746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.758 [2024-11-26 21:01:53.611775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.758 [2024-11-26 21:01:53.611800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.758 [2024-11-26 21:01:53.611829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.758 [2024-11-26 21:01:53.611854] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.758 [2024-11-26 21:01:53.611883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.758 [2024-11-26 21:01:53.611906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.758 [2024-11-26 21:01:53.611935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.758 [2024-11-26 21:01:53.611959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.758 [2024-11-26 21:01:53.611989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.758 [2024-11-26 21:01:53.612018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.758 [2024-11-26 21:01:53.612049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.758 [2024-11-26 21:01:53.612074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.758 [2024-11-26 21:01:53.612103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.758 [2024-11-26 21:01:53.612127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.758 [2024-11-26 21:01:53.612158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.758 [2024-11-26 21:01:53.612182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.758 [2024-11-26 21:01:53.612211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.758 [2024-11-26 21:01:53.612235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.758 [2024-11-26 21:01:53.612264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.758 [2024-11-26 21:01:53.612288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.758 [2024-11-26 21:01:53.612317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.758 [2024-11-26 21:01:53.612342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.758 [2024-11-26 21:01:53.612372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.758 [2024-11-26 21:01:53.612396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.758 [2024-11-26 21:01:53.612426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.758 [2024-11-26 21:01:53.612450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.758 [2024-11-26 
21:01:53.612479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.758 [2024-11-26 21:01:53.612503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.758 [2024-11-26 21:01:53.612532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.758 [2024-11-26 21:01:53.612556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.758 [2024-11-26 21:01:53.612585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.758 [2024-11-26 21:01:53.612609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.758 [2024-11-26 21:01:53.612638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.758 [2024-11-26 21:01:53.612662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.759 [2024-11-26 21:01:53.612714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.759 [2024-11-26 21:01:53.612742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.759 [2024-11-26 21:01:53.612772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.759 [2024-11-26 21:01:53.612796] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.759 [2024-11-26 21:01:53.612825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.759 [2024-11-26 21:01:53.612849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.759 [2024-11-26 21:01:53.612878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.759 [2024-11-26 21:01:53.612903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.759 [2024-11-26 21:01:53.612931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.759 [2024-11-26 21:01:53.612956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.759 [2024-11-26 21:01:53.612994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.759 [2024-11-26 21:01:53.613018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.759 [2024-11-26 21:01:53.613047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.759 [2024-11-26 21:01:53.613071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.759 [2024-11-26 21:01:53.613100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 
nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.759 [2024-11-26 21:01:53.613124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.759 [2024-11-26 21:01:53.613153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.759 [2024-11-26 21:01:53.613177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.759 [2024-11-26 21:01:53.613206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.759 [2024-11-26 21:01:53.613230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.759 [2024-11-26 21:01:53.613259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.759 [2024-11-26 21:01:53.613282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.759 [2024-11-26 21:01:53.613309] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1197950 is same with the state(6) to be set 00:21:02.759 [2024-11-26 21:01:53.614882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.759 [2024-11-26 21:01:53.614914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.759 [2024-11-26 21:01:53.614956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:02.759 [2024-11-26 21:01:53.614983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.759 [2024-11-26 21:01:53.615013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.759 [2024-11-26 21:01:53.615039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.759 [2024-11-26 21:01:53.615070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.759 [2024-11-26 21:01:53.615095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.759 [2024-11-26 21:01:53.615124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.759 [2024-11-26 21:01:53.615150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.759 [2024-11-26 21:01:53.615179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.759 [2024-11-26 21:01:53.615203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.759 [2024-11-26 21:01:53.615232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.759 [2024-11-26 21:01:53.615256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.759 [2024-11-26 21:01:53.615285] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.759 [2024-11-26 21:01:53.615310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.759 [2024-11-26 21:01:53.615339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.759 [2024-11-26 21:01:53.615365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.759 [2024-11-26 21:01:53.615394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.759 [2024-11-26 21:01:53.615418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.759 [2024-11-26 21:01:53.615446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.759 [2024-11-26 21:01:53.615472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.759 [2024-11-26 21:01:53.615501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.759 [2024-11-26 21:01:53.615525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.759 [2024-11-26 21:01:53.615554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.759 [2024-11-26 21:01:53.615578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.759 [2024-11-26 21:01:53.615606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.759 [2024-11-26 21:01:53.615636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.759 [2024-11-26 21:01:53.615666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.759 [2024-11-26 21:01:53.615702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.759 [2024-11-26 21:01:53.615734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.759 [2024-11-26 21:01:53.615759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.759 [2024-11-26 21:01:53.615789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.759 [2024-11-26 21:01:53.615813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.759 [2024-11-26 21:01:53.615845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.760 [2024-11-26 21:01:53.615869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.760 [2024-11-26 21:01:53.615898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:02.760 [2024-11-26 21:01:53.615923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.760 [2024-11-26 21:01:53.615951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.760 [2024-11-26 21:01:53.615976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.760 [2024-11-26 21:01:53.616003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.760 [2024-11-26 21:01:53.616027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.760 [2024-11-26 21:01:53.616055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.760 [2024-11-26 21:01:53.616080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.760 [2024-11-26 21:01:53.616107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.760 [2024-11-26 21:01:53.616133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.760 [2024-11-26 21:01:53.616159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.760 [2024-11-26 21:01:53.616185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.760 [2024-11-26 21:01:53.616213] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.760 [2024-11-26 21:01:53.616241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.760 [2024-11-26 21:01:53.616267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.760 [2024-11-26 21:01:53.616293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.760 [2024-11-26 21:01:53.616326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.760 [2024-11-26 21:01:53.616354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.760 [2024-11-26 21:01:53.616382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.760 [2024-11-26 21:01:53.616409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.760 [2024-11-26 21:01:53.616435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.760 [2024-11-26 21:01:53.616461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.760 [2024-11-26 21:01:53.616489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.760 [2024-11-26 21:01:53.616515] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.760 [2024-11-26 21:01:53.616542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.760 [2024-11-26 21:01:53.616569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.760 [2024-11-26 21:01:53.616596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.760 [2024-11-26 21:01:53.616621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.760 [2024-11-26 21:01:53.616649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.760 [2024-11-26 21:01:53.616682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.760 [2024-11-26 21:01:53.616720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.760 [2024-11-26 21:01:53.616745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.760 [2024-11-26 21:01:53.616775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.760 [2024-11-26 21:01:53.616800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.760 [2024-11-26 21:01:53.616829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.760 [2024-11-26 21:01:53.616853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.760 [2024-11-26 21:01:53.616882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.760 [2024-11-26 21:01:53.616906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.760 [2024-11-26 21:01:53.616936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.760 [2024-11-26 21:01:53.616960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.760 [2024-11-26 21:01:53.616999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.760 [2024-11-26 21:01:53.617030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.760 [2024-11-26 21:01:53.617058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.760 [2024-11-26 21:01:53.617083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.760 [2024-11-26 21:01:53.617113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.760 [2024-11-26 21:01:53.617137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.760 [2024-11-26 
21:01:53.617167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.760 [2024-11-26 21:01:53.617190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.760 [2024-11-26 21:01:53.617220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.760 [2024-11-26 21:01:53.617245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.760 [2024-11-26 21:01:53.617273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.760 [2024-11-26 21:01:53.617298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.760 [2024-11-26 21:01:53.617326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.760 [2024-11-26 21:01:53.617351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.760 [2024-11-26 21:01:53.617380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.760 [2024-11-26 21:01:53.617405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.760 [2024-11-26 21:01:53.617434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.760 [2024-11-26 21:01:53.617457] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.760 [2024-11-26 21:01:53.617485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.760 [2024-11-26 21:01:53.617510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.760 [2024-11-26 21:01:53.617540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.760 [2024-11-26 21:01:53.617563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.761 [2024-11-26 21:01:53.617592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.761 [2024-11-26 21:01:53.617616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.761 [2024-11-26 21:01:53.617644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.761 [2024-11-26 21:01:53.617668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.761 [2024-11-26 21:01:53.617717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.761 [2024-11-26 21:01:53.617743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.761 [2024-11-26 21:01:53.617772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 
nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.761 [2024-11-26 21:01:53.617796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.761 [2024-11-26 21:01:53.617824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.761 [2024-11-26 21:01:53.617849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.761 [2024-11-26 21:01:53.617878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.761 [2024-11-26 21:01:53.617903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.761 [2024-11-26 21:01:53.617930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.761 [2024-11-26 21:01:53.617954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.761 [2024-11-26 21:01:53.617981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.761 [2024-11-26 21:01:53.618007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.761 [2024-11-26 21:01:53.618033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.761 [2024-11-26 21:01:53.618058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:02.761 [2024-11-26 21:01:53.618084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.761 [2024-11-26 21:01:53.618109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.761 [2024-11-26 21:01:53.618135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.761 [2024-11-26 21:01:53.618162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.761 [2024-11-26 21:01:53.618190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.761 [2024-11-26 21:01:53.618216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.761 [2024-11-26 21:01:53.618241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.761 [2024-11-26 21:01:53.618268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.761 [2024-11-26 21:01:53.618295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.761 [2024-11-26 21:01:53.618321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.761 [2024-11-26 21:01:53.618348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.761 [2024-11-26 21:01:53.618379] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.761 [2024-11-26 21:01:53.618405] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1392e20 is same with the state(6) to be set 00:21:02.761 [2024-11-26 21:01:53.619940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.761 [2024-11-26 21:01:53.619973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.761 [2024-11-26 21:01:53.620008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.761 [2024-11-26 21:01:53.620038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.761 [2024-11-26 21:01:53.620066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.761 [2024-11-26 21:01:53.620093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.761 [2024-11-26 21:01:53.620120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.761 [2024-11-26 21:01:53.620147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.761 [2024-11-26 21:01:53.620174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.761 [2024-11-26 21:01:53.620199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.761 [2024-11-26 21:01:53.620227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.761 [2024-11-26 21:01:53.620252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.761 [2024-11-26 21:01:53.620279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.761 [2024-11-26 21:01:53.620305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.761 [2024-11-26 21:01:53.620340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.761 [2024-11-26 21:01:53.620365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.761 [2024-11-26 21:01:53.620392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.761 [2024-11-26 21:01:53.620418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.761 [2024-11-26 21:01:53.620445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.761 [2024-11-26 21:01:53.620471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.761 [2024-11-26 21:01:53.620499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:21:02.761 [2024-11-26 21:01:53.620524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.761 [2024-11-26 21:01:53.620562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.761 [2024-11-26 21:01:53.620593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.761 [2024-11-26 21:01:53.620623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.761 [2024-11-26 21:01:53.620647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.761 [2024-11-26 21:01:53.620693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.762 [2024-11-26 21:01:53.620721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.762 [2024-11-26 21:01:53.620750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.762 [2024-11-26 21:01:53.620775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.762 [2024-11-26 21:01:53.620803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.762 [2024-11-26 21:01:53.620827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.762 [2024-11-26 21:01:53.620855] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.762 [2024-11-26 21:01:53.620879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.762 [2024-11-26 21:01:53.620908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.762 [2024-11-26 21:01:53.620933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.762 [2024-11-26 21:01:53.620961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.762 [2024-11-26 21:01:53.620997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.762 [2024-11-26 21:01:53.621026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.762 [2024-11-26 21:01:53.621050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.762 [2024-11-26 21:01:53.621080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.762 [2024-11-26 21:01:53.621104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.762 [2024-11-26 21:01:53.621133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.762 [2024-11-26 21:01:53.621158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.762 [2024-11-26 21:01:53.621187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.762 [2024-11-26 21:01:53.621212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.762 [2024-11-26 21:01:53.621240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.762 [2024-11-26 21:01:53.621264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.762 [2024-11-26 21:01:53.621292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.762 [2024-11-26 21:01:53.621322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.762 [2024-11-26 21:01:53.621351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.762 [2024-11-26 21:01:53.621377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.762 [2024-11-26 21:01:53.621404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.762 [2024-11-26 21:01:53.621429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.762 [2024-11-26 21:01:53.621455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:02.762 [2024-11-26 21:01:53.621481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.762 [2024-11-26 21:01:53.621507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.762 [2024-11-26 21:01:53.621533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.762 [2024-11-26 21:01:53.621559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.762 [2024-11-26 21:01:53.621585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.762 [2024-11-26 21:01:53.621613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.762 [2024-11-26 21:01:53.621639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.762 [2024-11-26 21:01:53.621666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.762 [2024-11-26 21:01:53.621703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.762 [2024-11-26 21:01:53.621733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.762 [2024-11-26 21:01:53.621761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.762 [2024-11-26 21:01:53.621787] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.762 [2024-11-26 21:01:53.621814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.762 [2024-11-26 21:01:53.621843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.762 [2024-11-26 21:01:53.621869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.762 [2024-11-26 21:01:53.621895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.762 [2024-11-26 21:01:53.621922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.762 [2024-11-26 21:01:53.621949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.762 [2024-11-26 21:01:53.621974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.762 [2024-11-26 21:01:53.622007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.762 [2024-11-26 21:01:53.622034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.762 [2024-11-26 21:01:53.622062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.762 [2024-11-26 21:01:53.622089] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.762 [2024-11-26 21:01:53.622115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.762 [2024-11-26 21:01:53.622141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.763 [2024-11-26 21:01:53.622168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.763 [2024-11-26 21:01:53.622194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.763 [2024-11-26 21:01:53.622220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.763 [2024-11-26 21:01:53.622245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.763 [2024-11-26 21:01:53.622271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.763 [2024-11-26 21:01:53.622296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.763 [2024-11-26 21:01:53.622323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.763 [2024-11-26 21:01:53.622348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.763 [2024-11-26 21:01:53.622375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.763 [2024-11-26 21:01:53.622399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.763 [2024-11-26 21:01:53.622427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.763 [2024-11-26 21:01:53.622451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.763 [2024-11-26 21:01:53.622481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.763 [2024-11-26 21:01:53.622505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.763 [2024-11-26 21:01:53.622534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.763 [2024-11-26 21:01:53.622558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.763 [2024-11-26 21:01:53.622587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.763 [2024-11-26 21:01:53.622612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.763 [2024-11-26 21:01:53.622640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.763 [2024-11-26 21:01:53.622675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.763 [2024-11-26 
21:01:53.622715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.763 [2024-11-26 21:01:53.622739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.763 [2024-11-26 21:01:53.622768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.763 [2024-11-26 21:01:53.622793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.763 [2024-11-26 21:01:53.622823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.763 [2024-11-26 21:01:53.622847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.763 [2024-11-26 21:01:53.622875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.763 [2024-11-26 21:01:53.622899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.763 [2024-11-26 21:01:53.622928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.763 [2024-11-26 21:01:53.622951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.763 [2024-11-26 21:01:53.622980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.763 [2024-11-26 21:01:53.623004] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.763 [2024-11-26 21:01:53.623033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.763 [2024-11-26 21:01:53.623057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.763 [2024-11-26 21:01:53.623085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.763 [2024-11-26 21:01:53.623110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.763 [2024-11-26 21:01:53.623138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.763 [2024-11-26 21:01:53.623163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.763 [2024-11-26 21:01:53.623191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.763 [2024-11-26 21:01:53.623217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.763 [2024-11-26 21:01:53.623243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.763 [2024-11-26 21:01:53.623269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.763 [2024-11-26 21:01:53.623295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 
nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.763 [2024-11-26 21:01:53.623324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.763 [2024-11-26 21:01:53.623357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.763 [2024-11-26 21:01:53.623382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.763 [2024-11-26 21:01:53.623409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.763 [2024-11-26 21:01:53.623435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.763 [2024-11-26 21:01:53.623460] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1394240 is same with the state(6) to be set 00:21:02.763 [2024-11-26 21:01:53.625047] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:21:02.763 [2024-11-26 21:01:53.625092] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:21:02.763 [2024-11-26 21:01:53.625125] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:21:02.763 [2024-11-26 21:01:53.625310] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 
00:21:02.763 [2024-11-26 21:01:53.625433] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:21:02.763 [2024-11-26 21:01:53.625754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.763 [2024-11-26 21:01:53.625797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf866f0 with addr=10.0.0.2, port=4420 00:21:02.763 [2024-11-26 21:01:53.625825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf866f0 is same with the state(6) to be set 00:21:02.763 [2024-11-26 21:01:53.625959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.763 [2024-11-26 21:01:53.625993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13bdfc0 with addr=10.0.0.2, port=4420 00:21:02.763 [2024-11-26 21:01:53.626020] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13bdfc0 is same with the state(6) to be set 00:21:02.764 [2024-11-26 21:01:53.626161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.764 [2024-11-26 21:01:53.626195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13bd9e0 with addr=10.0.0.2, port=4420 00:21:02.764 [2024-11-26 21:01:53.626222] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13bd9e0 is same with the state(6) to be set 00:21:02.764 [2024-11-26 21:01:53.627295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.764 [2024-11-26 21:01:53.627328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.764 [2024-11-26 21:01:53.627364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.764 [2024-11-26 21:01:53.627389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.764 [2024-11-26 21:01:53.627419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.764 [2024-11-26 21:01:53.627445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.764 [2024-11-26 21:01:53.627476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.764 [2024-11-26 21:01:53.627500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.764 [2024-11-26 21:01:53.627537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.764 [2024-11-26 21:01:53.627565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.764 [2024-11-26 21:01:53.627593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.764 [2024-11-26 21:01:53.627618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.764 [2024-11-26 21:01:53.627646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.764 [2024-11-26 21:01:53.627670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.764 [2024-11-26 
21:01:53.627707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.764 [2024-11-26 21:01:53.627734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.764 [2024-11-26 21:01:53.627763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.764 [2024-11-26 21:01:53.627786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.764 [2024-11-26 21:01:53.627815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.764 [2024-11-26 21:01:53.627839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.764 [2024-11-26 21:01:53.627868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.764 [2024-11-26 21:01:53.627892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.764 [2024-11-26 21:01:53.627921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.764 [2024-11-26 21:01:53.627945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.764 [2024-11-26 21:01:53.627973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.764 [2024-11-26 21:01:53.627997] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.764 [2024-11-26 21:01:53.628026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.764 [2024-11-26 21:01:53.628050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.764 [2024-11-26 21:01:53.628077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.764 [2024-11-26 21:01:53.628103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.764 [2024-11-26 21:01:53.628129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.764 [2024-11-26 21:01:53.628154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.764 [2024-11-26 21:01:53.628180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.764 [2024-11-26 21:01:53.628212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.764 [2024-11-26 21:01:53.628240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.764 [2024-11-26 21:01:53.628264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.764 [2024-11-26 21:01:53.628291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 
nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.764 [2024-11-26 21:01:53.628319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.764 [2024-11-26 21:01:53.628345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.764 [2024-11-26 21:01:53.628371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.764 [2024-11-26 21:01:53.628397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.764 [2024-11-26 21:01:53.628423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.764 [2024-11-26 21:01:53.628451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.764 [2024-11-26 21:01:53.628477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.764 [2024-11-26 21:01:53.628504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.764 [2024-11-26 21:01:53.628531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.764 [2024-11-26 21:01:53.628558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.764 [2024-11-26 21:01:53.628584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:02.764 [2024-11-26 21:01:53.628612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.764 [2024-11-26 21:01:53.628637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.764 [2024-11-26 21:01:53.628664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.764 [2024-11-26 21:01:53.628707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.764 [2024-11-26 21:01:53.628735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.764 [2024-11-26 21:01:53.628760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.764 [2024-11-26 21:01:53.628788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.764 [2024-11-26 21:01:53.628811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.764 [2024-11-26 21:01:53.628840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.764 [2024-11-26 21:01:53.628865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.764 [2024-11-26 21:01:53.628899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.765 [2024-11-26 21:01:53.628925] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.765 [2024-11-26 21:01:53.628952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.765 [2024-11-26 21:01:53.628976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.765 [2024-11-26 21:01:53.629004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.765 [2024-11-26 21:01:53.629028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.765 [2024-11-26 21:01:53.629057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.765 [2024-11-26 21:01:53.629081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.765 [2024-11-26 21:01:53.629110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.765 [2024-11-26 21:01:53.629135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.765 [2024-11-26 21:01:53.629163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.765 [2024-11-26 21:01:53.629188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.765 [2024-11-26 21:01:53.629217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.765 [2024-11-26 21:01:53.629242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.765 [2024-11-26 21:01:53.629270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.765 [2024-11-26 21:01:53.629294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.765 [2024-11-26 21:01:53.629324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.765 [2024-11-26 21:01:53.629349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.765 [2024-11-26 21:01:53.629378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.765 [2024-11-26 21:01:53.629402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.765 [2024-11-26 21:01:53.629431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.765 [2024-11-26 21:01:53.629455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.765 [2024-11-26 21:01:53.629483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.765 [2024-11-26 21:01:53.629507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:02.765 [2024-11-26 21:01:53.629535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.765 [2024-11-26 21:01:53.629566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.765 [2024-11-26 21:01:53.629596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.765 [2024-11-26 21:01:53.629619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.765 [2024-11-26 21:01:53.629648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.765 [2024-11-26 21:01:53.629672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.765 [2024-11-26 21:01:53.629715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.765 [2024-11-26 21:01:53.629741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.765 [2024-11-26 21:01:53.629771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.765 [2024-11-26 21:01:53.629795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.765 [2024-11-26 21:01:53.629823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.765 [2024-11-26 
21:01:53.629847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.765 [2024-11-26 21:01:53.629876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.765 [2024-11-26 21:01:53.629901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.765 [2024-11-26 21:01:53.629930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.765 [2024-11-26 21:01:53.629954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.765 [2024-11-26 21:01:53.629982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.765 [2024-11-26 21:01:53.630006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.765 [2024-11-26 21:01:53.630033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.765 [2024-11-26 21:01:53.630059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.765 [2024-11-26 21:01:53.630086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.765 [2024-11-26 21:01:53.630112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.765 [2024-11-26 21:01:53.630139] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.765 [2024-11-26 21:01:53.630165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.765 [2024-11-26 21:01:53.630191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.765 [2024-11-26 21:01:53.630217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.765 [2024-11-26 21:01:53.630251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.765 [2024-11-26 21:01:53.630278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.765 [2024-11-26 21:01:53.630307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.765 [2024-11-26 21:01:53.630331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.765 [2024-11-26 21:01:53.630359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.765 [2024-11-26 21:01:53.630383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.765 [2024-11-26 21:01:53.630412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.765 [2024-11-26 21:01:53.630436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.765 [2024-11-26 21:01:53.630465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.765 [2024-11-26 21:01:53.630489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.765 [2024-11-26 21:01:53.630518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.765 [2024-11-26 21:01:53.630543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.765 [2024-11-26 21:01:53.630572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.765 [2024-11-26 21:01:53.630596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.765 [2024-11-26 21:01:53.630625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.766 [2024-11-26 21:01:53.630649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.766 [2024-11-26 21:01:53.630677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.766 [2024-11-26 21:01:53.630714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.766 [2024-11-26 21:01:53.630745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.766 
[2024-11-26 21:01:53.630769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.766 [2024-11-26 21:01:53.630794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1395780 is same with the state(6) to be set 00:21:02.766 [2024-11-26 21:01:53.632366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.766 [2024-11-26 21:01:53.632400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.766 [2024-11-26 21:01:53.632435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.766 [2024-11-26 21:01:53.632462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.766 [2024-11-26 21:01:53.632498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.766 [2024-11-26 21:01:53.632524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.766 [2024-11-26 21:01:53.632553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.766 [2024-11-26 21:01:53.632577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.766 [2024-11-26 21:01:53.632606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.766 [2024-11-26 21:01:53.632630] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.766 [2024-11-26 21:01:53.632660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.766 [2024-11-26 21:01:53.632699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.766 [2024-11-26 21:01:53.632729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.766 [2024-11-26 21:01:53.632754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.766 [2024-11-26 21:01:53.632781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.766 [2024-11-26 21:01:53.632806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.766 [2024-11-26 21:01:53.632833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.766 [2024-11-26 21:01:53.632860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.766 [2024-11-26 21:01:53.632886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.766 [2024-11-26 21:01:53.632912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.766 [2024-11-26 21:01:53.632939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 
lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.766 [2024-11-26 21:01:53.632965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.766 [2024-11-26 21:01:53.633002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.766 [2024-11-26 21:01:53.633028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.766 [2024-11-26 21:01:53.633054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.766 [2024-11-26 21:01:53.633080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.766 [2024-11-26 21:01:53.633106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.766 [2024-11-26 21:01:53.633132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.766 [2024-11-26 21:01:53.633158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.766 [2024-11-26 21:01:53.633190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.766 [2024-11-26 21:01:53.633217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.766 [2024-11-26 21:01:53.633244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:02.766 [2024-11-26 21:01:53.633270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.766 [2024-11-26 21:01:53.633295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.766 [2024-11-26 21:01:53.633321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.766 [2024-11-26 21:01:53.633346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.766 [2024-11-26 21:01:53.633373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.766 [2024-11-26 21:01:53.633398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.766 [2024-11-26 21:01:53.633426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.766 [2024-11-26 21:01:53.633451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.766 [2024-11-26 21:01:53.633480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.766 [2024-11-26 21:01:53.633504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.766 [2024-11-26 21:01:53.633534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.766 [2024-11-26 21:01:53.633558] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.766 [2024-11-26 21:01:53.633587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.766 [2024-11-26 21:01:53.633611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.766 [2024-11-26 21:01:53.633640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.766 [2024-11-26 21:01:53.633664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.766 [2024-11-26 21:01:53.633706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.766 [2024-11-26 21:01:53.633733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.766 [2024-11-26 21:01:53.633762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.766 [2024-11-26 21:01:53.633786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.766 [2024-11-26 21:01:53.633815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.766 [2024-11-26 21:01:53.633839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.766 [2024-11-26 21:01:53.633868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.766 [2024-11-26 21:01:53.633898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.767 [2024-11-26 21:01:53.633928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.767 [2024-11-26 21:01:53.633952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.767 [2024-11-26 21:01:53.633981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.767 [2024-11-26 21:01:53.634006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.767 [2024-11-26 21:01:53.634034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.767 [2024-11-26 21:01:53.634058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.767 [2024-11-26 21:01:53.634086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.767 [2024-11-26 21:01:53.634110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.767 [2024-11-26 21:01:53.634139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.767 [2024-11-26 21:01:53.634163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:02.767 [2024-11-26 21:01:53.634192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.767 [2024-11-26 21:01:53.634216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.767 [2024-11-26 21:01:53.634244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.767 [2024-11-26 21:01:53.634268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.767 [2024-11-26 21:01:53.634296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.767 [2024-11-26 21:01:53.634322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.767 [2024-11-26 21:01:53.634349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.767 [2024-11-26 21:01:53.634374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.767 [2024-11-26 21:01:53.634401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.767 [2024-11-26 21:01:53.634426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.767 [2024-11-26 21:01:53.634452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.767 [2024-11-26 
21:01:53.634478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.767 [2024-11-26 21:01:53.634504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.767 [2024-11-26 21:01:53.634531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.767 [2024-11-26 21:01:53.634564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.767 [2024-11-26 21:01:53.634589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.767 [2024-11-26 21:01:53.634616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.767 [2024-11-26 21:01:53.634642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.767 [2024-11-26 21:01:53.634668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.767 [2024-11-26 21:01:53.634704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.767 [2024-11-26 21:01:53.634733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.767 [2024-11-26 21:01:53.634759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.767 [2024-11-26 21:01:53.634786] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.767 [2024-11-26 21:01:53.634812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.767 [2024-11-26 21:01:53.634839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.767 [2024-11-26 21:01:53.634865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.767 [2024-11-26 21:01:53.634892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.767 [2024-11-26 21:01:53.634919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.767 [2024-11-26 21:01:53.634946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.767 [2024-11-26 21:01:53.634973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.767 [2024-11-26 21:01:53.635000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.767 [2024-11-26 21:01:53.635026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.767 [2024-11-26 21:01:53.635052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.767 [2024-11-26 21:01:53.635078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.767 [2024-11-26 21:01:53.635105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.767 [2024-11-26 21:01:53.635131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.767 [2024-11-26 21:01:53.635158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.767 [2024-11-26 21:01:53.635183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.767 [2024-11-26 21:01:53.635211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.767 [2024-11-26 21:01:53.635242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.767 [2024-11-26 21:01:53.635270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.767 [2024-11-26 21:01:53.635296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.767 [2024-11-26 21:01:53.635323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.767 [2024-11-26 21:01:53.635347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.767 [2024-11-26 21:01:53.635375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.767 
[2024-11-26 21:01:53.635400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.767 [2024-11-26 21:01:53.635428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.768 [2024-11-26 21:01:53.635451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.768 [2024-11-26 21:01:53.635480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.768 [2024-11-26 21:01:53.635504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.768 [2024-11-26 21:01:53.635533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.768 [2024-11-26 21:01:53.635556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.768 [2024-11-26 21:01:53.635584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.768 [2024-11-26 21:01:53.635607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.768 [2024-11-26 21:01:53.635637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.768 [2024-11-26 21:01:53.635661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.768 [2024-11-26 21:01:53.635697] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.768 [2024-11-26 21:01:53.635723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.768 [2024-11-26 21:01:53.635753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.768 [2024-11-26 21:01:53.635777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.768 [2024-11-26 21:01:53.635807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.768 [2024-11-26 21:01:53.635832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.768 [2024-11-26 21:01:53.635858] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1396cc0 is same with the state(6) to be set 00:21:02.768 [2024-11-26 21:01:53.637394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.768 [2024-11-26 21:01:53.637432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.768 [2024-11-26 21:01:53.637472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.768 [2024-11-26 21:01:53.637499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.768 [2024-11-26 21:01:53.637527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 
lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.768 [2024-11-26 21:01:53.637553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.768 [2024-11-26 21:01:53.637580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.768 [2024-11-26 21:01:53.637606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.768 [2024-11-26 21:01:53.637635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.768 [2024-11-26 21:01:53.637661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.768 [2024-11-26 21:01:53.637701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.768 [2024-11-26 21:01:53.637728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.768 [2024-11-26 21:01:53.637756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.768 [2024-11-26 21:01:53.637780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.768 [2024-11-26 21:01:53.637809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.768 [2024-11-26 21:01:53.637834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:02.768 [2024-11-26 21:01:53.637862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.768 [2024-11-26 21:01:53.637886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.768 [2024-11-26 21:01:53.637915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.768 [2024-11-26 21:01:53.637938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.768 [2024-11-26 21:01:53.637966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.768 [2024-11-26 21:01:53.637990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.768 [2024-11-26 21:01:53.638019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.768 [2024-11-26 21:01:53.638043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.768 [2024-11-26 21:01:53.638070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.768 [2024-11-26 21:01:53.638094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.768 [2024-11-26 21:01:53.638127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.768 [2024-11-26 21:01:53.638152] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.768 [2024-11-26 21:01:53.638180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.768 [2024-11-26 21:01:53.638204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.768 [2024-11-26 21:01:53.638231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.768 [2024-11-26 21:01:53.638255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.768 [2024-11-26 21:01:53.638282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.768 [2024-11-26 21:01:53.638307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.768 [2024-11-26 21:01:53.638334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.768 [2024-11-26 21:01:53.638359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.768 [2024-11-26 21:01:53.638385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.768 [2024-11-26 21:01:53.638411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.768 [2024-11-26 21:01:53.638437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.769 [2024-11-26 21:01:53.638462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.769 [2024-11-26 21:01:53.638489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.769 [2024-11-26 21:01:53.638515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.769 [2024-11-26 21:01:53.638542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.769 [2024-11-26 21:01:53.638566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.769 [2024-11-26 21:01:53.638596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.769 [2024-11-26 21:01:53.638621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.769 [2024-11-26 21:01:53.638649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.769 [2024-11-26 21:01:53.638672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.769 [2024-11-26 21:01:53.638710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.769 [2024-11-26 21:01:53.638736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:02.769 [2024-11-26 21:01:53.638764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.769 [2024-11-26 21:01:53.638794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.769 [2024-11-26 21:01:53.638822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.769 [2024-11-26 21:01:53.638846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.769 [2024-11-26 21:01:53.638875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.769 [2024-11-26 21:01:53.638899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.769 [2024-11-26 21:01:53.638928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.769 [2024-11-26 21:01:53.638951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.769 [2024-11-26 21:01:53.638980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.769 [2024-11-26 21:01:53.639004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.769 [2024-11-26 21:01:53.639033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.769 [2024-11-26 
21:01:53.639056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.769 [2024-11-26 21:01:53.639085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.769 [2024-11-26 21:01:53.639109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.769 [2024-11-26 21:01:53.639139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.769 [2024-11-26 21:01:53.639162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.769 [2024-11-26 21:01:53.639191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.769 [2024-11-26 21:01:53.639215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.769 [2024-11-26 21:01:53.639242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.769 [2024-11-26 21:01:53.639266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.769 [2024-11-26 21:01:53.639292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.769 [2024-11-26 21:01:53.639317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.769 [2024-11-26 21:01:53.639343] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.769 [2024-11-26 21:01:53.639378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.769 [2024-11-26 21:01:53.639405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.769 [2024-11-26 21:01:53.639432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.769 [2024-11-26 21:01:53.639469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.769 [2024-11-26 21:01:53.639494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.769 [2024-11-26 21:01:53.639521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.769 [2024-11-26 21:01:53.639545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.769 [2024-11-26 21:01:53.639571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.769 [2024-11-26 21:01:53.639597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.769 [2024-11-26 21:01:53.639624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.769 [2024-11-26 21:01:53.639650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.769 [2024-11-26 21:01:53.639677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.769 [2024-11-26 21:01:53.639722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.769 [2024-11-26 21:01:53.639750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.769 [2024-11-26 21:01:53.639776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.769 [2024-11-26 21:01:53.639802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.769 [2024-11-26 21:01:53.639827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.769 [2024-11-26 21:01:53.639854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.769 [2024-11-26 21:01:53.639880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.769 [2024-11-26 21:01:53.639908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.769 [2024-11-26 21:01:53.639933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.769 [2024-11-26 21:01:53.639960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.769 
[2024-11-26 21:01:53.639985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.769 [2024-11-26 21:01:53.640014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.769 [2024-11-26 21:01:53.640038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.769 [2024-11-26 21:01:53.640066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.769 [2024-11-26 21:01:53.640090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.770 [2024-11-26 21:01:53.640118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.770 [2024-11-26 21:01:53.640148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.770 [2024-11-26 21:01:53.640175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.770 [2024-11-26 21:01:53.640199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.770 [2024-11-26 21:01:53.640228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.770 [2024-11-26 21:01:53.640253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.770 [2024-11-26 21:01:53.640282] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.770 [2024-11-26 21:01:53.640305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.770 [2024-11-26 21:01:53.640334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.770 [2024-11-26 21:01:53.640358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.770 [2024-11-26 21:01:53.640388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.770 [2024-11-26 21:01:53.640412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.770 [2024-11-26 21:01:53.640442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.770 [2024-11-26 21:01:53.640466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.770 [2024-11-26 21:01:53.640495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.770 [2024-11-26 21:01:53.640519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.770 [2024-11-26 21:01:53.640548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.770 [2024-11-26 21:01:53.640572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.770 [2024-11-26 21:01:53.640601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.770 [2024-11-26 21:01:53.640625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.770 [2024-11-26 21:01:53.640654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.770 [2024-11-26 21:01:53.640691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.770 [2024-11-26 21:01:53.640723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.770 [2024-11-26 21:01:53.640747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.770 [2024-11-26 21:01:53.640776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.770 [2024-11-26 21:01:53.640800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.770 [2024-11-26 21:01:53.640833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.770 [2024-11-26 21:01:53.640858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.770 [2024-11-26 21:01:53.640885] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1398200 is same with the state(6) to be set 
00:21:02.770 [2024-11-26 21:01:53.642444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.770 [2024-11-26 21:01:53.642478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.770 [2024-11-26 21:01:53.642515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.770 [2024-11-26 21:01:53.642543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.770 [2024-11-26 21:01:53.642571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.770 [2024-11-26 21:01:53.642596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.770 [2024-11-26 21:01:53.642623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.770 [2024-11-26 21:01:53.642649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.770 [2024-11-26 21:01:53.642700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.770 [2024-11-26 21:01:53.642729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.770 [2024-11-26 21:01:53.642758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.770 [2024-11-26 21:01:53.642782] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.770 [2024-11-26 21:01:53.642809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.770 [2024-11-26 21:01:53.642835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.770 [2024-11-26 21:01:53.642862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.770 [2024-11-26 21:01:53.642888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.770 [2024-11-26 21:01:53.642915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.770 [2024-11-26 21:01:53.642942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.770 [2024-11-26 21:01:53.642978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.770 [2024-11-26 21:01:53.643005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.770 [2024-11-26 21:01:53.643032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.770 [2024-11-26 21:01:53.643057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.770 [2024-11-26 21:01:53.643091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.770 [2024-11-26 21:01:53.643118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.770 [2024-11-26 21:01:53.643145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.770 [2024-11-26 21:01:53.643170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.770 [2024-11-26 21:01:53.643197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.770 [2024-11-26 21:01:53.643223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.770 [2024-11-26 21:01:53.643249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.770 [2024-11-26 21:01:53.643275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.770 [2024-11-26 21:01:53.643303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.770 [2024-11-26 21:01:53.643328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.770 [2024-11-26 21:01:53.643356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.770 [2024-11-26 21:01:53.643380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:02.770 [2024-11-26 21:01:53.643409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.770 [2024-11-26 21:01:53.643435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.770 [2024-11-26 21:01:53.643464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.770 [2024-11-26 21:01:53.643488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.770 [2024-11-26 21:01:53.643518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.770 [2024-11-26 21:01:53.643543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.770 [2024-11-26 21:01:53.643572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.770 [2024-11-26 21:01:53.643597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.770 [2024-11-26 21:01:53.643627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.770 [2024-11-26 21:01:53.643651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.770 [2024-11-26 21:01:53.643695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.770 [2024-11-26 21:01:53.643722] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.770 [2024-11-26 21:01:53.643752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.770 [2024-11-26 21:01:53.643782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.770 [2024-11-26 21:01:53.643811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.771 [2024-11-26 21:01:53.643836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.771 [2024-11-26 21:01:53.643865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.771 [2024-11-26 21:01:53.643891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.771 [2024-11-26 21:01:53.643918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.771 [2024-11-26 21:01:53.643943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.771 [2024-11-26 21:01:53.643981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.771 [2024-11-26 21:01:53.644006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.771 [2024-11-26 21:01:53.644033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.771 [2024-11-26 21:01:53.644056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.771 [2024-11-26 21:01:53.644085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.771 [2024-11-26 21:01:53.644108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.771 [2024-11-26 21:01:53.644137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.771 [2024-11-26 21:01:53.644161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.771 [2024-11-26 21:01:53.644191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.771 [2024-11-26 21:01:53.644216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.771 [2024-11-26 21:01:53.644244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.771 [2024-11-26 21:01:53.644267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.771 [2024-11-26 21:01:53.644296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.771 [2024-11-26 21:01:53.644320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.771 [2024-11-26 21:01:53.644347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.771 [2024-11-26 21:01:53.644371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.771 [2024-11-26 21:01:53.644398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.771 [2024-11-26 21:01:53.644423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.771 [2024-11-26 21:01:53.644458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.771 [2024-11-26 21:01:53.644484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.771 [2024-11-26 21:01:53.644513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.771 [2024-11-26 21:01:53.644538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.771 [2024-11-26 21:01:53.644567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.771 [2024-11-26 21:01:53.644593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.771 [2024-11-26 21:01:53.644622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.771 [2024-11-26 21:01:53.644647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.771 [2024-11-26 21:01:53.644695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.771 [2024-11-26 21:01:53.644724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.771 [2024-11-26 21:01:53.644751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.771 [2024-11-26 21:01:53.644777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.771 [2024-11-26 21:01:53.644805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.771 [2024-11-26 21:01:53.644830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.771 [2024-11-26 21:01:53.644857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.771 [2024-11-26 21:01:53.644883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.771 [2024-11-26 21:01:53.644911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.771 [2024-11-26 21:01:53.644939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.771 [2024-11-26 21:01:53.644966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.771 [2024-11-26 21:01:53.644998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.771 [2024-11-26 21:01:53.645026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.771 [2024-11-26 21:01:53.645058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.771 [2024-11-26 21:01:53.645086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.771 [2024-11-26 21:01:53.645110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.771 [2024-11-26 21:01:53.645139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.771 [2024-11-26 21:01:53.645168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.771 [2024-11-26 21:01:53.645196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.771 [2024-11-26 21:01:53.645221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.771 [2024-11-26 21:01:53.645249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.771 [2024-11-26 21:01:53.645273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.771 [2024-11-26 21:01:53.645302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.771 [2024-11-26 21:01:53.645326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.771 [2024-11-26 21:01:53.645355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.771 [2024-11-26 21:01:53.645379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.771 [2024-11-26 21:01:53.645408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.771 [2024-11-26 21:01:53.645432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.771 [2024-11-26 21:01:53.645461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.771 [2024-11-26 21:01:53.645484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.771 [2024-11-26 21:01:53.645512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.771 [2024-11-26 21:01:53.645536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.771 [2024-11-26 21:01:53.645565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.771 [2024-11-26 21:01:53.645590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.771 [2024-11-26 21:01:53.645617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.771 [2024-11-26 21:01:53.645642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.771 [2024-11-26 21:01:53.645678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.771 [2024-11-26 21:01:53.645716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.771 [2024-11-26 21:01:53.645744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.771 [2024-11-26 21:01:53.645769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.771 [2024-11-26 21:01:53.645796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.771 [2024-11-26 21:01:53.645822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.771 [2024-11-26 21:01:53.645854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.771 [2024-11-26 21:01:53.645880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.771 [2024-11-26 21:01:53.645906] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.771 [2024-11-26 21:01:53.645932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.771 [2024-11-26 21:01:53.645958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.771 [2024-11-26 21:01:53.645984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:02.771 [2024-11-26 21:01:53.646009] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1399740 is same with the state(6) to be set
00:21:02.771 [2024-11-26 21:01:53.648069] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:21:02.771 [2024-11-26 21:01:53.648121] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:21:02.772 [2024-11-26 21:01:53.648157] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:21:02.772 [2024-11-26 21:01:53.648189] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:21:02.772 [2024-11-26 21:01:53.648221] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:21:02.772 [2024-11-26 21:01:53.648589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:02.772 [2024-11-26 21:01:53.648634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf92270 with addr=10.0.0.2, port=4420
00:21:02.772 [2024-11-26 21:01:53.648663] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf92270 is same with the state(6) to be set
00:21:02.772 [2024-11-26 21:01:53.648729] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf866f0 (9): Bad file descriptor
00:21:02.772 [2024-11-26 21:01:53.648768] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13bdfc0 (9): Bad file descriptor
00:21:02.772 [2024-11-26 21:01:53.648802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13bd9e0 (9): Bad file descriptor
00:21:02.772 [2024-11-26 21:01:53.648878] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress.
00:21:02.772 [2024-11-26 21:01:53.648923] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress.
00:21:02.772 [2024-11-26 21:01:53.648959] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress.
00:21:02.772 [2024-11-26 21:01:53.648993] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress.
00:21:02.772 [2024-11-26 21:01:53.649028] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf92270 (9): Bad file descriptor
00:21:03.031 task offset: 27648 on job bdev=Nvme3n1 fails
00:21:03.031
00:21:03.031 Latency(us)
00:21:03.031 [2024-11-26T20:01:53.969Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:03.031 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:03.031 Job: Nvme1n1 ended in about 0.93 seconds with error
00:21:03.031 Verification LBA range: start 0x0 length 0x400
00:21:03.031 Nvme1n1 : 0.93 209.97 13.12 68.56 0.00 227200.24 9514.86 256318.58
00:21:03.031 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:03.031 Job: Nvme2n1 ended in about 0.95 seconds with error
00:21:03.031 Verification LBA range: start 0x0 length 0x400
00:21:03.031 Nvme2n1 : 0.95 135.32 8.46 67.66 0.00 305910.01 21748.24 262532.36
00:21:03.031 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:03.031 Job: Nvme3n1 ended in about 0.91 seconds with error
00:21:03.031 Verification LBA range: start 0x0 length 0x400
00:21:03.031 Nvme3n1 : 0.91 211.71 13.23 70.57 0.00 215118.51 7621.59 264085.81
00:21:03.031 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:03.031 Job: Nvme4n1 ended in about 0.95 seconds with error
00:21:03.031 Verification LBA range: start 0x0 length 0x400
00:21:03.032 Nvme4n1 : 0.95 206.10 12.88 67.30 0.00 218137.42 18544.26 248551.35
00:21:03.032 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:03.032 Job: Nvme5n1 ended in about 0.96 seconds with error
00:21:03.032 Verification LBA range: start 0x0 length 0x400
00:21:03.032 Nvme5n1 : 0.96 133.89 8.37 66.94 0.00 291179.84 21748.24 315349.52
00:21:03.032 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:03.032 Job: Nvme6n1 ended in about 0.96 seconds with error
Verification LBA range: start 0x0 length 0x400
00:21:03.032 Nvme6n1 : 0.96 132.87 8.30 66.44 0.00 287532.12 22039.51 267192.70
00:21:03.032 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:03.032 Job: Nvme7n1 ended in about 0.97 seconds with error
00:21:03.032 Verification LBA range: start 0x0 length 0x400
00:21:03.032 Nvme7n1 : 0.97 132.18 8.26 66.09 0.00 283205.53 21262.79 302921.96
00:21:03.032 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:03.032 Job: Nvme8n1 ended in about 0.97 seconds with error
00:21:03.032 Verification LBA range: start 0x0 length 0x400
00:21:03.032 Nvme8n1 : 0.97 131.50 8.22 65.75 0.00 278988.42 22330.79 259425.47
00:21:03.032 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:03.032 Job: Nvme9n1 ended in about 0.98 seconds with error
00:21:03.032 Verification LBA range: start 0x0 length 0x400
00:21:03.032 Nvme9n1 : 0.98 130.81 8.18 65.41 0.00 274913.41 20388.98 262532.36
00:21:03.032 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:03.032 Job: Nvme10n1 ended in about 0.93 seconds with error
00:21:03.032 Verification LBA range: start 0x0 length 0x400
00:21:03.032 Nvme10n1 : 0.93 141.85 8.87 68.78 0.00 248031.96 23884.23 298261.62
00:21:03.032 [2024-11-26T20:01:53.970Z] ===================================================================================================================
00:21:03.032 [2024-11-26T20:01:53.970Z] Total : 1566.20 97.89 673.49 0.00 258966.30 7621.59 315349.52
00:21:03.032 [2024-11-26 21:01:53.676136] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:21:03.032 [2024-11-26 21:01:53.676229] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:21:03.032 [2024-11-26 21:01:53.676564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.032 [2024-11-26 21:01:53.676605]
nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf92700 with addr=10.0.0.2, port=4420
00:21:03.032 [2024-11-26 21:01:53.676638] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf92700 is same with the state(6) to be set
00:21:03.032 [2024-11-26 21:01:53.676800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.032 [2024-11-26 21:01:53.676832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f03e0 with addr=10.0.0.2, port=4420
00:21:03.032 [2024-11-26 21:01:53.676859] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f03e0 is same with the state(6) to be set
00:21:03.032 [2024-11-26 21:01:53.677000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.032 [2024-11-26 21:01:53.677032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa110 with addr=10.0.0.2, port=4420
00:21:03.032 [2024-11-26 21:01:53.677065] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xefa110 is same with the state(6) to be set
00:21:03.032 [2024-11-26 21:01:53.677206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.032 [2024-11-26 21:01:53.677236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13ff270 with addr=10.0.0.2, port=4420
00:21:03.032 [2024-11-26 21:01:53.677263] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13ff270 is same with the state(6) to be set
00:21:03.032 [2024-11-26 21:01:53.677393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.032 [2024-11-26 21:01:53.677425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13ff450 with addr=10.0.0.2, port=4420
00:21:03.032 [2024-11-26 21:01:53.677453] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13ff450 is same with the state(6) to be set
00:21:03.032 [2024-11-26 21:01:53.677484] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:21:03.032 [2024-11-26 21:01:53.677510] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:21:03.032 [2024-11-26 21:01:53.677535] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:21:03.032 [2024-11-26 21:01:53.677563] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:21:03.032 [2024-11-26 21:01:53.677592] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:21:03.032 [2024-11-26 21:01:53.677614] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:21:03.032 [2024-11-26 21:01:53.677637] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:21:03.032 [2024-11-26 21:01:53.677660] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:21:03.032 [2024-11-26 21:01:53.677697] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:21:03.032 [2024-11-26 21:01:53.677722] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:21:03.032 [2024-11-26 21:01:53.677744] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:21:03.032 [2024-11-26 21:01:53.677768] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:21:03.032 [2024-11-26 21:01:53.679232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.032 [2024-11-26 21:01:53.679265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f0200 with addr=10.0.0.2, port=4420
00:21:03.032 [2024-11-26 21:01:53.679292] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f0200 is same with the state(6) to be set
00:21:03.032 [2024-11-26 21:01:53.679329] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf92700 (9): Bad file descriptor
00:21:03.032 [2024-11-26 21:01:53.679367] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f03e0 (9): Bad file descriptor
00:21:03.032 [2024-11-26 21:01:53.679398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xefa110 (9): Bad file descriptor
00:21:03.032 [2024-11-26 21:01:53.679432] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13ff270 (9): Bad file descriptor
00:21:03.032 [2024-11-26 21:01:53.679463] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13ff450 (9): Bad file descriptor
00:21:03.032 [2024-11-26 21:01:53.679491] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:21:03.032 [2024-11-26 21:01:53.679513] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:21:03.032 [2024-11-26 21:01:53.679536] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:21:03.032 [2024-11-26 21:01:53.679564] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:21:03.032 [2024-11-26 21:01:53.679675] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress.
00:21:03.032 [2024-11-26 21:01:53.679727] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress.
00:21:03.032 [2024-11-26 21:01:53.679761] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress.
00:21:03.032 [2024-11-26 21:01:53.679792] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:21:03.032 [2024-11-26 21:01:53.679825] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress.
00:21:03.032 [2024-11-26 21:01:53.680325] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f0200 (9): Bad file descriptor
00:21:03.032 [2024-11-26 21:01:53.680356] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:21:03.032 [2024-11-26 21:01:53.680381] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:21:03.032 [2024-11-26 21:01:53.680404] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:21:03.032 [2024-11-26 21:01:53.680426] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:21:03.032 [2024-11-26 21:01:53.680451] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:21:03.033 [2024-11-26 21:01:53.680473] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:21:03.033 [2024-11-26 21:01:53.680494] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:21:03.033 [2024-11-26 21:01:53.680517] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
00:21:03.033 [2024-11-26 21:01:53.680539] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state
00:21:03.033 [2024-11-26 21:01:53.680560] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed
00:21:03.033 [2024-11-26 21:01:53.680582] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:21:03.033 [2024-11-26 21:01:53.680604] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed.
00:21:03.033 [2024-11-26 21:01:53.680629] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:21:03.033 [2024-11-26 21:01:53.680649] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:21:03.033 [2024-11-26 21:01:53.680670] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:21:03.033 [2024-11-26 21:01:53.680712] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
00:21:03.033 [2024-11-26 21:01:53.680736] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state
00:21:03.033 [2024-11-26 21:01:53.680759] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed
00:21:03.033 [2024-11-26 21:01:53.680779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:21:03.033 [2024-11-26 21:01:53.680800] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed.
00:21:03.033 [2024-11-26 21:01:53.680917] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:21:03.033 [2024-11-26 21:01:53.680952] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:21:03.033 [2024-11-26 21:01:53.680990] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:21:03.033 [2024-11-26 21:01:53.681018] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:21:03.033 [2024-11-26 21:01:53.681087] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state
00:21:03.033 [2024-11-26 21:01:53.681115] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed
00:21:03.033 [2024-11-26 21:01:53.681136] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
00:21:03.033 [2024-11-26 21:01:53.681159] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed.
00:21:03.033 [2024-11-26 21:01:53.681326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.033 [2024-11-26 21:01:53.681362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13bd9e0 with addr=10.0.0.2, port=4420
00:21:03.033 [2024-11-26 21:01:53.681388] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13bd9e0 is same with the state(6) to be set
00:21:03.033 [2024-11-26 21:01:53.681527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.033 [2024-11-26 21:01:53.681561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13bdfc0 with addr=10.0.0.2, port=4420
00:21:03.033 [2024-11-26 21:01:53.681586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13bdfc0 is same with the state(6) to be set
00:21:03.033 [2024-11-26 21:01:53.681721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.033 [2024-11-26 21:01:53.681756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf866f0 with addr=10.0.0.2, port=4420
00:21:03.033 [2024-11-26 21:01:53.681781] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf866f0 is same with the state(6) to be set
00:21:03.033 [2024-11-26 21:01:53.681906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:03.033 [2024-11-26 21:01:53.681939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf92270 with addr=10.0.0.2, port=4420
00:21:03.033 [2024-11-26 21:01:53.681966] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf92270 is same with the state(6) to be set
00:21:03.033 [2024-11-26 21:01:53.682042] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13bd9e0 (9): Bad file descriptor
00:21:03.033 [2024-11-26 21:01:53.682080] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13bdfc0 (9): Bad file descriptor
00:21:03.033 [2024-11-26 21:01:53.682111] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf866f0 (9): Bad file descriptor
00:21:03.033 [2024-11-26 21:01:53.682145] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf92270 (9): Bad file descriptor
00:21:03.033 [2024-11-26 21:01:53.682204] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:21:03.033 [2024-11-26 21:01:53.682231] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:21:03.033 [2024-11-26 21:01:53.682255] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:21:03.033 [2024-11-26 21:01:53.682276] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:21:03.033 [2024-11-26 21:01:53.682301] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:21:03.033 [2024-11-26 21:01:53.682322] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:21:03.033 [2024-11-26 21:01:53.682350] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:21:03.033 [2024-11-26 21:01:53.682373] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:21:03.033 [2024-11-26 21:01:53.682396] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:21:03.033 [2024-11-26 21:01:53.682419] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:21:03.033 [2024-11-26 21:01:53.682439] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:21:03.033 [2024-11-26 21:01:53.682460] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:21:03.033 [2024-11-26 21:01:53.682484] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:21:03.033 [2024-11-26 21:01:53.682506] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:21:03.033 [2024-11-26 21:01:53.682529] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:21:03.033 [2024-11-26 21:01:53.682549] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:21:03.294 21:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:21:04.233 21:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 4020374 00:21:04.233 21:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:21:04.233 21:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 4020374 00:21:04.233 21:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:21:04.233 21:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:04.233 21:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:21:04.233 21:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:04.233 21:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 4020374 00:21:04.233 21:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:21:04.233 21:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:04.233 21:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:21:04.233 21:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:21:04.233 21:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:21:04.233 21:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 
00:21:04.233 21:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:21:04.233 21:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:04.233 21:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:04.233 21:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:04.233 21:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:04.233 21:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:04.233 21:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:21:04.233 21:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:04.233 21:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:21:04.233 21:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:04.233 21:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:04.233 rmmod nvme_tcp 00:21:04.233 rmmod nvme_fabrics 00:21:04.233 rmmod nvme_keyring 00:21:04.233 21:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:04.233 21:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:21:04.233 21:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:21:04.233 21:01:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 4020194 ']' 00:21:04.233 21:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 4020194 00:21:04.233 21:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 4020194 ']' 00:21:04.233 21:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 4020194 00:21:04.233 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (4020194) - No such process 00:21:04.233 21:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 4020194 is not found' 00:21:04.233 Process with pid 4020194 is not found 00:21:04.233 21:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:04.233 21:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:04.233 21:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:04.233 21:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:21:04.233 21:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:21:04.233 21:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:04.233 21:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:21:04.492 21:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:04.492 21:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:21:04.492 21:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:04.492 21:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:04.492 21:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:06.399 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:06.399 00:21:06.399 real 0m7.628s 00:21:06.399 user 0m18.756s 00:21:06.399 sys 0m1.581s 00:21:06.399 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:06.399 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:06.399 ************************************ 00:21:06.399 END TEST nvmf_shutdown_tc3 00:21:06.399 ************************************ 00:21:06.399 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:21:06.399 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:21:06.399 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:21:06.399 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:06.399 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:06.399 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:06.399 ************************************ 00:21:06.399 START TEST nvmf_shutdown_tc4 00:21:06.399 ************************************ 00:21:06.399 21:01:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:21:06.399 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:21:06.399 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:06.399 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:06.399 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:06.399 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:06.399 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:06.399 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:06.399 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:06.399 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:06.399 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:06.399 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:06.399 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:06.399 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:06.399 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:06.399 21:01:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:06.399 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:06.399 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:06.399 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:06.399 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:06.399 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:06.399 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:06.399 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:21:06.399 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:06.399 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:21:06.399 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:21:06.399 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:21:06.399 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:21:06.399 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:21:06.399 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:06.399 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:21:06.399 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:06.399 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:06.399 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:06.399 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:06.399 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:06.399 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:06.399 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:06.399 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:06.399 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:06.399 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:06.399 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:06.399 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:06.399 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:06.399 21:01:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:06.399 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:06.399 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:06.399 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:06.399 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:06.399 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:06.399 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:06.399 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:06.399 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:06.399 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:06.399 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:06.399 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:06.399 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:06.399 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:06.399 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:06.399 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:06.399 21:01:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:06.399 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:06.399 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:06.399 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:06.399 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:06.399 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:06.399 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:06.399 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:06.400 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:06.400 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:06.400 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:06.400 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:06.400 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:06.400 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:06.400 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 
00:21:06.400 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:06.400 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:06.400 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:06.400 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:06.400 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:06.400 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:06.400 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:06.400 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:06.400 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:06.400 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:06.400 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:06.400 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:06.400 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:06.400 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:06.400 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:06.400 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == 
tcp ]] 00:21:06.400 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:06.400 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:06.400 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:06.400 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:06.400 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:06.400 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:06.400 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:06.400 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:06.400 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:06.400 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:06.400 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:06.400 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:06.400 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:06.400 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:06.400 21:01:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:06.400 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:06.659 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:06.659 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:06.659 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:06.659 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:06.659 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:06.659 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:06.659 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:06.659 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:06.659 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:06.659 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:21:06.659 00:21:06.659 --- 10.0.0.2 ping statistics --- 00:21:06.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:06.660 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:21:06.660 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:06.660 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:06.660 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:21:06.660 00:21:06.660 --- 10.0.0.1 ping statistics --- 00:21:06.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:06.660 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:21:06.660 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:06.660 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:21:06.660 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:06.660 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:06.660 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:06.660 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:06.660 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:06.660 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:06.660 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:06.660 21:01:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:06.660 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:06.660 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:06.660 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:06.660 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=4021279 00:21:06.660 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:06.660 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 4021279 00:21:06.660 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 4021279 ']' 00:21:06.660 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:06.660 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:06.660 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:06.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:06.660 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:06.660 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:06.660 [2024-11-26 21:01:57.504572] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:21:06.660 [2024-11-26 21:01:57.504665] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:06.660 [2024-11-26 21:01:57.577395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:06.919 [2024-11-26 21:01:57.639859] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:06.919 [2024-11-26 21:01:57.639924] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:06.919 [2024-11-26 21:01:57.639942] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:06.919 [2024-11-26 21:01:57.639955] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:06.919 [2024-11-26 21:01:57.639967] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:06.919 [2024-11-26 21:01:57.641638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:06.919 [2024-11-26 21:01:57.641757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:06.919 [2024-11-26 21:01:57.641822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:06.919 [2024-11-26 21:01:57.641826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:06.919 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:06.919 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:21:06.919 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:06.919 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:06.919 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:06.919 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:06.919 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:06.919 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.919 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:06.919 [2024-11-26 21:01:57.790869] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:06.919 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.919 21:01:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10})
00:21:06.919 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems
00:21:06.919 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable
00:21:06.919 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:21:06.919 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:21:06.919 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:21:06.919 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:21:06.919 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:21:06.919 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:21:06.919 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:21:06.919 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:21:06.919 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:21:06.919 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:21:06.919 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:21:06.919 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:21:06.919 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:21:06.919 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:21:06.919 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:21:06.919 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:21:06.919 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:21:06.919 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:21:06.919 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:21:06.919 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:21:06.919 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:21:06.919 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:21:06.919 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd
00:21:06.919 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:06.919 21:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:21:07.178 Malloc1
[2024-11-26 21:01:57.889334] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:21:07.178 Malloc2
00:21:07.178 Malloc3
00:21:07.178 Malloc4
00:21:07.178 Malloc5
00:21:07.178 Malloc6
00:21:07.436 Malloc7
00:21:07.436 Malloc8
00:21:07.436 Malloc9
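The `create_subsystems` trace above shows shutdown.sh looping `for i in "${num_subsystems[@]}"` and `cat`-ing one block of RPC commands per subsystem, then feeding the whole batch to `rpc_cmd` in a single call. A minimal sketch of that batching pattern follows; the specific RPC method names, bdev sizes, and NQNs below are illustrative assumptions, not the exact contents of shutdown.sh:

```shell
#!/usr/bin/env bash
# Sketch: emit one block of RPC commands per subsystem, run the RPC client once.
num_subsystems=({1..10})

gen_rpc_batch() {
  for i in "${num_subsystems[@]}"; do
    # Each iteration cats one block; the commands here are illustrative.
    cat <<EOF
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
EOF
  done
}

# The real script would pipe this into the RPC client in one invocation, e.g.:
#   gen_rpc_batch | ./scripts/rpc.py
gen_rpc_batch | wc -l   # 3 RPC lines per subsystem, 10 subsystems
```

Batching avoids paying the RPC client's startup cost once per subsystem, which is why the loop only `cat`s and the single `rpc_cmd` runs afterwards.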
00:21:07.436 Malloc10
00:21:07.436 21:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:07.436 21:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems
00:21:07.436 21:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable
00:21:07.436 21:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:21:07.436 21:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=4021457
00:21:07.436 21:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4
00:21:07.436 21:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5
[2024-11-26 21:01:58.425597] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
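The trace launches `spdk_nvme_perf` in the background, records its PID in `perfpid`, and (a few lines later) installs a `trap` so the perf job is killed even if the test aborts. The skeleton of that launch-and-trap pattern can be sketched as below; `cleanup` is a placeholder standing in for the script's `process_shm`/`nvmftestfini` helpers, and `sleep` stands in for the long-running perf binary (the real script also traps EXIT):

```shell
#!/usr/bin/env bash
# Sketch of the background-worker-plus-trap cleanup pattern from the trace.
cleanup() {
  # kill -9 can race with a normal exit, so tolerate failure like shutdown.sh does
  kill -9 "$perfpid" 2>/dev/null || true
}

sleep 60 &             # stand-in for the long-running spdk_nvme_perf command
perfpid=$!
trap 'cleanup; exit 1' SIGINT SIGTERM

cleanup                # normal path: stop the worker ourselves
trap - SIGINT SIGTERM
wait "$perfpid" 2>/dev/null
echo "worker stopped"
```

Capturing `$!` immediately after the `&` is the load-bearing detail: any intervening command would overwrite it, leaving the trap with the wrong PID.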
00:21:13.030 21:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:21:13.030 21:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 4021279
00:21:13.030 21:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 4021279 ']'
00:21:13.030 21:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 4021279
00:21:13.030 21:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname
00:21:13.030 21:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:13.030 21:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4021279
00:21:13.030 21:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:21:13.030 21:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:21:13.030 21:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4021279'
killing process with pid 4021279
00:21:13.030 21:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 4021279
00:21:13.030 21:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 4021279
00:21:13.030 [2024-11-26 21:02:03.413452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc588f0 is same with the state(6) to be set
00:21:13.030 [2024-11-26 21:02:03.413561] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc588f0 is same with the state(6) to be set
00:21:13.030 [2024-11-26 21:02:03.413579] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc588f0 is same with the state(6) to be set
00:21:13.030 [2024-11-26 21:02:03.413593] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc588f0 is same with the state(6) to be set
00:21:13.030 [2024-11-26 21:02:03.413605] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc588f0 is same with the state(6) to be set
00:21:13.030 [2024-11-26 21:02:03.413617] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc588f0 is same with the state(6) to be set
00:21:13.030 [2024-11-26 21:02:03.413628] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc588f0 is same with the state(6) to be set
00:21:13.030 [2024-11-26 21:02:03.413640] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc588f0 is same with the state(6) to be set
00:21:13.030 [2024-11-26 21:02:03.413652] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc588f0 is same with the state(6) to be set
00:21:13.030 [2024-11-26 21:02:03.415179] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc592d0 is same with the state(6) to be set
00:21:13.030 [2024-11-26 21:02:03.415217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc592d0 is same with the state(6) to be set
00:21:13.030 [2024-11-26 21:02:03.415233] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc592d0 is same with the state(6) to be set
00:21:13.030 [2024-11-26 21:02:03.415245] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc592d0 is same with the state(6) to be set
00:21:13.030 [2024-11-26 21:02:03.415257]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc592d0 is same with the state(6) to be set 00:21:13.030 [2024-11-26 21:02:03.415269] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc592d0 is same with the state(6) to be set 00:21:13.030 [2024-11-26 21:02:03.415281] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc592d0 is same with the state(6) to be set 00:21:13.030 Write completed with error (sct=0, sc=8) 00:21:13.030 Write completed with error (sct=0, sc=8) 00:21:13.030 Write completed with error (sct=0, sc=8) 00:21:13.030 starting I/O failed: -6 00:21:13.030 Write completed with error (sct=0, sc=8) 00:21:13.030 Write completed with error (sct=0, sc=8) 00:21:13.030 Write completed with error (sct=0, sc=8) 00:21:13.030 Write completed with error (sct=0, sc=8) 00:21:13.030 starting I/O failed: -6 00:21:13.030 Write completed with error (sct=0, sc=8) 00:21:13.030 Write completed with error (sct=0, sc=8) 00:21:13.030 Write completed with error (sct=0, sc=8) 00:21:13.030 Write completed with error (sct=0, sc=8) 00:21:13.030 starting I/O failed: -6 00:21:13.030 Write completed with error (sct=0, sc=8) 00:21:13.030 Write completed with error (sct=0, sc=8) 00:21:13.030 Write completed with error (sct=0, sc=8) 00:21:13.031 Write completed with error (sct=0, sc=8) 00:21:13.031 starting I/O failed: -6 00:21:13.031 Write completed with error (sct=0, sc=8) 00:21:13.031 Write completed with error (sct=0, sc=8) 00:21:13.031 Write completed with error (sct=0, sc=8) 00:21:13.031 Write completed with error (sct=0, sc=8) 00:21:13.031 starting I/O failed: -6 00:21:13.031 Write completed with error (sct=0, sc=8) 00:21:13.031 Write completed with error (sct=0, sc=8) 00:21:13.031 Write completed with error (sct=0, sc=8) 00:21:13.031 Write completed with error (sct=0, sc=8) 00:21:13.031 starting I/O failed: -6 00:21:13.031 Write completed with error (sct=0, sc=8) 00:21:13.031 Write completed with 
error (sct=0, sc=8) 00:21:13.031 Write completed with error (sct=0, sc=8) 00:21:13.031 Write completed with error (sct=0, sc=8) 00:21:13.031 starting I/O failed: -6 00:21:13.031 Write completed with error (sct=0, sc=8) 00:21:13.031 Write completed with error (sct=0, sc=8) 00:21:13.031 Write completed with error (sct=0, sc=8) 00:21:13.031 Write completed with error (sct=0, sc=8) 00:21:13.031 starting I/O failed: -6 00:21:13.031 Write completed with error (sct=0, sc=8) 00:21:13.031 Write completed with error (sct=0, sc=8) 00:21:13.031 Write completed with error (sct=0, sc=8) 00:21:13.031 Write completed with error (sct=0, sc=8) 00:21:13.031 starting I/O failed: -6 00:21:13.031 [2024-11-26 21:02:03.427613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:13.031 Write completed with error (sct=0, sc=8) 00:21:13.031 Write completed with error (sct=0, sc=8) 00:21:13.031 starting I/O failed: -6 00:21:13.031 Write completed with error (sct=0, sc=8) 00:21:13.031 Write completed with error (sct=0, sc=8) 00:21:13.031 starting I/O failed: -6 00:21:13.031 Write completed with error (sct=0, sc=8) 00:21:13.031 Write completed with error (sct=0, sc=8) 00:21:13.031 starting I/O failed: -6 00:21:13.031 Write completed with error (sct=0, sc=8) 00:21:13.031 Write completed with error (sct=0, sc=8) 00:21:13.031 starting I/O failed: -6 00:21:13.031 Write completed with error (sct=0, sc=8) 00:21:13.031 Write completed with error (sct=0, sc=8) 00:21:13.031 starting I/O failed: -6 00:21:13.031 Write completed with error (sct=0, sc=8) 00:21:13.031 Write completed with error (sct=0, sc=8) 00:21:13.031 starting I/O failed: -6 00:21:13.031 Write completed with error (sct=0, sc=8) 00:21:13.031 Write completed with error (sct=0, sc=8) 00:21:13.031 starting I/O failed: -6 00:21:13.031 Write completed with error (sct=0, sc=8) 00:21:13.031 Write completed with error (sct=0, sc=8) 
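The flood of `Write completed with error (sct=0, sc=8)` lines is expected while the target is being torn down: `sct=0` is the NVMe generic command status set, and, to the best of my reading of the NVMe base specification (worth verifying against your spec revision), status value 8 there is "Command Aborted due to SQ Deletion", which matches writes in flight while qpairs are deleted. A small sketch that decodes such a log line, with the status-name table being the assumption to double-check:

```shell
#!/usr/bin/env bash
# Sketch: decode the sc value from a perf error line when sct=0 (generic set).
# Names are my reading of the NVMe base spec; verify before relying on them.
decode_sc() {
  case "$1" in
    0) echo "Successful Completion" ;;
    4) echo "Data Transfer Error" ;;
    6) echo "Internal Error" ;;
    7) echo "Command Abort Requested" ;;
    8) echo "Command Aborted due to SQ Deletion" ;;
    *) echo "generic status $1" ;;   # unmapped codes fall through
  esac
}

line='00:21:13.031 Write completed with error (sct=0, sc=8)'
sc=$(echo "$line" | sed -n 's/.*sct=0, sc=\([0-9]*\)).*/\1/p')
decode_sc "$sc"   # -> Command Aborted due to SQ Deletion
```

Seeing this status rather than a transport error means the command reached the controller and was aborted cleanly, which is the behavior this shutdown test is exercising.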
00:21:13.031 starting I/O failed: -6
00:21:13.031 Write completed with error (sct=0, sc=8)
00:21:13.031 Write completed with error (sct=0, sc=8)
00:21:13.031 starting I/O failed: -6
00:21:13.031 Write completed with error (sct=0, sc=8)
00:21:13.031 Write completed with error (sct=0, sc=8)
00:21:13.031 starting I/O failed: -6
00:21:13.031 Write completed with error (sct=0, sc=8)
00:21:13.031 Write completed with error (sct=0, sc=8)
00:21:13.031 starting I/O failed: -6
00:21:13.031 Write completed with error (sct=0, sc=8)
00:21:13.031 starting I/O failed: -6
00:21:13.031 Write completed with error (sct=0, sc=8)
00:21:13.031 starting I/O failed: -6
00:21:13.031 Write completed with error (sct=0, sc=8)
00:21:13.031 [2024-11-26 21:02:03.428646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc597c0 is same with the state(6) to be set
00:21:13.031 Write completed with error (sct=0, sc=8)
00:21:13.031 [2024-11-26 21:02:03.428693] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc597c0 is same with the state(6) to be set
00:21:13.031 starting I/O failed: -6
00:21:13.031 [2024-11-26 21:02:03.428714] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc597c0 is same with the state(6) to be set
00:21:13.031 Write completed with error (sct=0, sc=8)
00:21:13.031 [2024-11-26 21:02:03.428727] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc597c0 is same with the state(6) to be set
00:21:13.031 starting I/O failed: -6
00:21:13.031 [2024-11-26 21:02:03.428739] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc597c0 is same with the state(6) to be set
00:21:13.031 Write completed with error (sct=0, sc=8)
00:21:13.031 [2024-11-26 21:02:03.428761] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc597c0 is same with the state(6) to be set
00:21:13.031 starting I/O failed: -6
00:21:13.031 [2024-11-26
21:02:03.428773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc597c0 is same with the state(6) to be set 00:21:13.031 Write completed with error (sct=0, sc=8) 00:21:13.031 [2024-11-26 21:02:03.428785] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc597c0 is same with the state(6) to be set 00:21:13.031 Write completed with error (sct=0, sc=8) 00:21:13.031 starting I/O failed: -6 00:21:13.031 Write completed with error (sct=0, sc=8) 00:21:13.031 starting I/O failed: -6 00:21:13.031 Write completed with error (sct=0, sc=8) 00:21:13.031 starting I/O failed: -6 00:21:13.031 Write completed with error (sct=0, sc=8) 00:21:13.031 Write completed with error (sct=0, sc=8) 00:21:13.031 starting I/O failed: -6 00:21:13.031 Write completed with error (sct=0, sc=8) 00:21:13.031 starting I/O failed: -6 00:21:13.031 Write completed with error (sct=0, sc=8) 00:21:13.031 starting I/O failed: -6 00:21:13.031 Write completed with error (sct=0, sc=8) 00:21:13.031 Write completed with error (sct=0, sc=8) 00:21:13.031 starting I/O failed: -6 00:21:13.031 Write completed with error (sct=0, sc=8) 00:21:13.031 starting I/O failed: -6 00:21:13.031 Write completed with error (sct=0, sc=8) 00:21:13.031 starting I/O failed: -6 00:21:13.031 Write completed with error (sct=0, sc=8) 00:21:13.031 Write completed with error (sct=0, sc=8) 00:21:13.031 starting I/O failed: -6 00:21:13.031 Write completed with error (sct=0, sc=8) 00:21:13.031 starting I/O failed: -6 00:21:13.031 Write completed with error (sct=0, sc=8) 00:21:13.031 starting I/O failed: -6 00:21:13.031 Write completed with error (sct=0, sc=8) 00:21:13.031 Write completed with error (sct=0, sc=8) 00:21:13.031 starting I/O failed: -6 00:21:13.031 Write completed with error (sct=0, sc=8) 00:21:13.031 starting I/O failed: -6 00:21:13.031 Write completed with error (sct=0, sc=8) 00:21:13.031 starting I/O failed: -6 00:21:13.031 Write completed with error (sct=0, sc=8) 
00:21:13.031 Write completed with error (sct=0, sc=8) 00:21:13.031 starting I/O failed: -6 00:21:13.031 Write completed with error (sct=0, sc=8) 00:21:13.031 starting I/O failed: -6 00:21:13.031 Write completed with error (sct=0, sc=8) 00:21:13.031 starting I/O failed: -6 00:21:13.031 Write completed with error (sct=0, sc=8) 00:21:13.031 Write completed with error (sct=0, sc=8) 00:21:13.031 starting I/O failed: -6 00:21:13.031 Write completed with error (sct=0, sc=8) 00:21:13.031 starting I/O failed: -6 00:21:13.031 Write completed with error (sct=0, sc=8) 00:21:13.031 starting I/O failed: -6 00:21:13.031 Write completed with error (sct=0, sc=8) 00:21:13.031 [2024-11-26 21:02:03.429498] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5c370 is same with the state(6) to be set 00:21:13.031 [2024-11-26 21:02:03.429531] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5c370 is same with the state(6) to be set 00:21:13.031 Write completed with error (sct=0, sc=8) 00:21:13.031 [2024-11-26 21:02:03.429545] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5c370 is same with the state(6) to be set 00:21:13.031 starting I/O failed: -6 00:21:13.031 [2024-11-26 21:02:03.429558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5c370 is same with the state(6) to be set 00:21:13.031 Write completed with error (sct=0, sc=8) 00:21:13.031 [2024-11-26 21:02:03.429570] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5c370 is same with the state(6) to be set 00:21:13.031 starting I/O failed: -6 00:21:13.031 Write completed with error (sct=0, sc=8) 00:21:13.031 starting I/O failed: -6 00:21:13.031 Write completed with error (sct=0, sc=8) 00:21:13.031 Write completed with error (sct=0, sc=8) 00:21:13.031 starting I/O failed: -6 00:21:13.031 Write completed with error (sct=0, sc=8) 00:21:13.031 starting I/O failed: -6 00:21:13.031 Write 
completed with error (sct=0, sc=8)
00:21:13.031 starting I/O failed: -6
00:21:13.031 Write completed with error (sct=0, sc=8)
00:21:13.031 Write completed with error (sct=0, sc=8)
00:21:13.031 starting I/O failed: -6
00:21:13.031 Write completed with error (sct=0, sc=8)
00:21:13.031 starting I/O failed: -6
00:21:13.031 Write completed with error (sct=0, sc=8)
00:21:13.031 starting I/O failed: -6
00:21:13.031 Write completed with error (sct=0, sc=8)
00:21:13.031 Write completed with error (sct=0, sc=8)
00:21:13.031 starting I/O failed: -6
00:21:13.031 Write completed with error (sct=0, sc=8)
00:21:13.031 starting I/O failed: -6
00:21:13.031 Write completed with error (sct=0, sc=8)
00:21:13.031 starting I/O failed: -6
00:21:13.031 [2024-11-26 21:02:03.429943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5c860 is same with the state(6) to be set
00:21:13.031 Write completed with error (sct=0, sc=8)
00:21:13.031 Write completed with error (sct=0, sc=8)
00:21:13.031 [2024-11-26 21:02:03.429977] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5c860 is same with the state(6) to be set
00:21:13.031 starting I/O failed: -6
00:21:13.031 [2024-11-26 21:02:03.430002] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5c860 is same with the state(6) to be set
00:21:13.031 [2024-11-26 21:02:03.430016] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5c860 is same with the state(6) to be set
00:21:13.031 Write completed with error (sct=0, sc=8)
00:21:13.031 starting I/O failed: -6
00:21:13.031 [2024-11-26 21:02:03.430030] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5c860 is same with the state(6) to be set
00:21:13.031 Write completed with error (sct=0, sc=8)
00:21:13.031 [2024-11-26 21:02:03.430043] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5c860 is same with the state(6) to be set
00:21:13.031 starting I/O failed: -6
00:21:13.031 Write completed with error (sct=0, sc=8)
00:21:13.031 Write completed with error (sct=0, sc=8)
00:21:13.031 starting I/O failed: -6
00:21:13.031 Write completed with error (sct=0, sc=8)
00:21:13.031 starting I/O failed: -6
00:21:13.031 Write completed with error (sct=0, sc=8)
00:21:13.031 starting I/O failed: -6
00:21:13.031 Write completed with error (sct=0, sc=8)
00:21:13.032 [2024-11-26 21:02:03.430225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:13.032 Write completed with error (sct=0, sc=8)
00:21:13.032 starting I/O failed: -6
00:21:13.032 Write completed with error (sct=0, sc=8)
00:21:13.032 starting I/O failed: -6
00:21:13.032 Write completed with error (sct=0, sc=8)
00:21:13.032 starting I/O failed: -6
00:21:13.032 [2024-11-26 21:02:03.430446] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5cd30 is same with the state(6) to be set
00:21:13.032 Write completed with error (sct=0, sc=8)
00:21:13.032 starting I/O failed: -6
00:21:13.032 [2024-11-26 21:02:03.430482] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5cd30 is same with the state(6) to be set
00:21:13.032 Write completed with error (sct=0, sc=8)
00:21:13.032 starting I/O failed: -6
00:21:13.032 [2024-11-26 21:02:03.430500] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5cd30 is same with the state(6) to be set
00:21:13.032 [2024-11-26 21:02:03.430513] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5cd30 is same with the state(6) to be set
00:21:13.032 Write completed with error (sct=0, sc=8)
00:21:13.032 [2024-11-26 21:02:03.430525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5cd30 is same with the state(6) to be set
00:21:13.032 starting I/O failed: -6
00:21:13.032 [2024-11-26 21:02:03.430536] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5cd30 is same with the state(6) to be set
00:21:13.032 Write completed with error (sct=0, sc=8)
00:21:13.032 [2024-11-26 21:02:03.430549] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5cd30 is same with the state(6) to be set
00:21:13.032 starting I/O failed: -6
00:21:13.032 [2024-11-26 21:02:03.430560] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5cd30 is same with the state(6) to be set
00:21:13.032 Write completed with error (sct=0, sc=8)
00:21:13.032 starting I/O failed: -6
00:21:13.032 [2024-11-26 21:02:03.430589] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5cd30 is same with the state(6) to be set
00:21:13.032 [2024-11-26 21:02:03.430602] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5cd30 is same with the state(6) to be set
00:21:13.032 Write completed with error (sct=0, sc=8)
00:21:13.032 starting I/O failed: -6
00:21:13.032 Write completed with error (sct=0, sc=8)
00:21:13.032 starting I/O failed: -6
00:21:13.032 Write completed with error (sct=0, sc=8)
00:21:13.032 starting I/O failed: -6
00:21:13.032 Write completed with error (sct=0, sc=8)
00:21:13.032 starting I/O failed: -6
00:21:13.032 Write completed with error (sct=0, sc=8)
00:21:13.032 starting I/O failed: -6
00:21:13.032 Write completed with error (sct=0, sc=8)
00:21:13.032 starting I/O failed: -6
00:21:13.032 Write completed with error (sct=0, sc=8)
00:21:13.032 starting I/O failed: -6
00:21:13.032 Write completed with error (sct=0, sc=8)
00:21:13.032 starting I/O failed: -6
00:21:13.032 Write completed with error (sct=0, sc=8)
00:21:13.032 starting I/O failed: -6
00:21:13.032 Write completed with error (sct=0, sc=8)
00:21:13.032 starting I/O failed: -6
00:21:13.032 Write completed with error (sct=0, sc=8)
00:21:13.032 starting I/O failed: -6
00:21:13.032
Write completed with error (sct=0, sc=8) 00:21:13.032 starting I/O failed: -6 00:21:13.032 Write completed with error (sct=0, sc=8) 00:21:13.032 starting I/O failed: -6 00:21:13.032 Write completed with error (sct=0, sc=8) 00:21:13.032 starting I/O failed: -6 00:21:13.032 Write completed with error (sct=0, sc=8) 00:21:13.032 starting I/O failed: -6 00:21:13.032 Write completed with error (sct=0, sc=8) 00:21:13.032 starting I/O failed: -6 00:21:13.032 Write completed with error (sct=0, sc=8) 00:21:13.032 starting I/O failed: -6 00:21:13.032 Write completed with error (sct=0, sc=8) 00:21:13.032 starting I/O failed: -6 00:21:13.032 Write completed with error (sct=0, sc=8) 00:21:13.032 starting I/O failed: -6 00:21:13.032 Write completed with error (sct=0, sc=8) 00:21:13.032 starting I/O failed: -6 00:21:13.032 Write completed with error (sct=0, sc=8) 00:21:13.032 starting I/O failed: -6 00:21:13.032 Write completed with error (sct=0, sc=8) 00:21:13.032 starting I/O failed: -6 00:21:13.032 Write completed with error (sct=0, sc=8) 00:21:13.032 starting I/O failed: -6 00:21:13.032 Write completed with error (sct=0, sc=8) 00:21:13.032 starting I/O failed: -6 00:21:13.032 [2024-11-26 21:02:03.431228] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5bea0 is same with the state(6) to be set 00:21:13.032 Write completed with error (sct=0, sc=8) 00:21:13.032 starting I/O failed: -6 00:21:13.032 [2024-11-26 21:02:03.431255] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5bea0 is same with the state(6) to be set 00:21:13.032 Write completed with error (sct=0, sc=8) 00:21:13.032 [2024-11-26 21:02:03.431269] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5bea0 is same with the state(6) to be set 00:21:13.032 starting I/O failed: -6 00:21:13.032 [2024-11-26 21:02:03.431282] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5bea0 is same with the state(6) to be 
set
00:21:13.032 [2024-11-26 21:02:03.431294] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5bea0 is same with the state(6) to be set
00:21:13.032 Write completed with error (sct=0, sc=8)
00:21:13.032 starting I/O failed: -6
00:21:13.032 [2024-11-26 21:02:03.431307] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5bea0 is same with the state(6) to be set
00:21:13.032 [2024-11-26 21:02:03.431319] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5bea0 is same with the state(6) to be set
00:21:13.032 Write completed with error (sct=0, sc=8)
00:21:13.032 [2024-11-26 21:02:03.431331] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5bea0 is same with the state(6) to be set
00:21:13.032 starting I/O failed: -6
00:21:13.032 [2024-11-26 21:02:03.431343] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5bea0 is same with the state(6) to be set
00:21:13.032 Write completed with error (sct=0, sc=8)
00:21:13.032 starting I/O failed: -6
00:21:13.032 Write completed with error (sct=0, sc=8)
00:21:13.032 starting I/O failed: -6
00:21:13.032 Write completed with error (sct=0, sc=8)
00:21:13.032 starting I/O failed: -6
00:21:13.032 Write completed with error (sct=0, sc=8)
00:21:13.032 starting I/O failed: -6
00:21:13.032 Write completed with error (sct=0, sc=8)
00:21:13.032 starting I/O failed: -6
00:21:13.032 Write completed with error (sct=0, sc=8)
00:21:13.032 starting I/O failed: -6
00:21:13.032 Write completed with error (sct=0, sc=8)
00:21:13.032 starting I/O failed: -6
00:21:13.032 Write completed with error (sct=0, sc=8)
00:21:13.032 starting I/O failed: -6
00:21:13.032 Write completed with error (sct=0, sc=8)
00:21:13.032 starting I/O failed: -6
00:21:13.032 Write completed with error (sct=0, sc=8)
00:21:13.032 starting I/O failed: -6
00:21:13.032 Write completed with error (sct=0, sc=8)
00:21:13.032 starting I/O failed:
-6 00:21:13.032 Write completed with error (sct=0, sc=8) 00:21:13.032 starting I/O failed: -6 00:21:13.032 Write completed with error (sct=0, sc=8) 00:21:13.032 starting I/O failed: -6 00:21:13.032 Write completed with error (sct=0, sc=8) 00:21:13.032 starting I/O failed: -6 00:21:13.032 Write completed with error (sct=0, sc=8) 00:21:13.032 starting I/O failed: -6 00:21:13.032 Write completed with error (sct=0, sc=8) 00:21:13.032 starting I/O failed: -6 00:21:13.032 Write completed with error (sct=0, sc=8) 00:21:13.032 starting I/O failed: -6 00:21:13.032 Write completed with error (sct=0, sc=8) 00:21:13.032 starting I/O failed: -6 00:21:13.032 Write completed with error (sct=0, sc=8) 00:21:13.032 starting I/O failed: -6 00:21:13.032 Write completed with error (sct=0, sc=8) 00:21:13.032 starting I/O failed: -6 00:21:13.032 Write completed with error (sct=0, sc=8) 00:21:13.032 starting I/O failed: -6 00:21:13.032 Write completed with error (sct=0, sc=8) 00:21:13.032 starting I/O failed: -6 00:21:13.032 Write completed with error (sct=0, sc=8) 00:21:13.032 starting I/O failed: -6 00:21:13.032 Write completed with error (sct=0, sc=8) 00:21:13.032 starting I/O failed: -6 00:21:13.032 Write completed with error (sct=0, sc=8) 00:21:13.032 starting I/O failed: -6 00:21:13.032 Write completed with error (sct=0, sc=8) 00:21:13.032 starting I/O failed: -6 00:21:13.032 Write completed with error (sct=0, sc=8) 00:21:13.032 starting I/O failed: -6 00:21:13.032 Write completed with error (sct=0, sc=8) 00:21:13.032 starting I/O failed: -6 00:21:13.032 [2024-11-26 21:02:03.432252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:13.032 NVMe io qpair process completion error 00:21:13.032 Write completed with error (sct=0, sc=8) 00:21:13.032 Write completed with error (sct=0, sc=8) 00:21:13.032 Write completed with error (sct=0, sc=8) 00:21:13.032 starting I/O failed: 
-6 00:21:13.032 Write completed with error (sct=0, sc=8) 00:21:13.032 starting I/O failed: -6
[... repeated "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" messages elided between the distinct error lines below ...]
00:21:13.033 [2024-11-26 21:02:03.433734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:13.033 [2024-11-26 21:02:03.434788] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:13.033 [2024-11-26 21:02:03.435995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:13.034 [2024-11-26 21:02:03.438078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:13.034 NVMe io qpair process completion error
00:21:13.034 [2024-11-26 21:02:03.439245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:13.034 [2024-11-26 21:02:03.440390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:13.035 [2024-11-26 21:02:03.441588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:13.035 [2024-11-26 21:02:03.443696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:13.035 NVMe io qpair process completion error
00:21:13.036 [2024-11-26 21:02:03.445115] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:13.036 [2024-11-26 21:02:03.446203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" messages continue ...]
Write completed with error (sct=0, sc=8) 00:21:13.036 starting I/O failed: -6 00:21:13.036 Write completed with error (sct=0, sc=8) 00:21:13.036 starting I/O failed: -6 00:21:13.036 Write completed with error (sct=0, sc=8) 00:21:13.036 Write completed with error (sct=0, sc=8) 00:21:13.036 starting I/O failed: -6 00:21:13.036 Write completed with error (sct=0, sc=8) 00:21:13.036 starting I/O failed: -6 00:21:13.036 Write completed with error (sct=0, sc=8) 00:21:13.036 starting I/O failed: -6 00:21:13.036 Write completed with error (sct=0, sc=8) 00:21:13.036 Write completed with error (sct=0, sc=8) 00:21:13.036 starting I/O failed: -6 00:21:13.036 Write completed with error (sct=0, sc=8) 00:21:13.036 starting I/O failed: -6 00:21:13.036 Write completed with error (sct=0, sc=8) 00:21:13.036 starting I/O failed: -6 00:21:13.036 Write completed with error (sct=0, sc=8) 00:21:13.036 Write completed with error (sct=0, sc=8) 00:21:13.036 starting I/O failed: -6 00:21:13.036 Write completed with error (sct=0, sc=8) 00:21:13.036 starting I/O failed: -6 00:21:13.036 Write completed with error (sct=0, sc=8) 00:21:13.036 starting I/O failed: -6 00:21:13.036 Write completed with error (sct=0, sc=8) 00:21:13.036 [2024-11-26 21:02:03.447390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:13.036 Write completed with error (sct=0, sc=8) 00:21:13.036 starting I/O failed: -6 00:21:13.036 Write completed with error (sct=0, sc=8) 00:21:13.036 starting I/O failed: -6 00:21:13.036 Write completed with error (sct=0, sc=8) 00:21:13.036 starting I/O failed: -6 00:21:13.036 Write completed with error (sct=0, sc=8) 00:21:13.036 starting I/O failed: -6 00:21:13.036 Write completed with error (sct=0, sc=8) 00:21:13.036 starting I/O failed: -6 00:21:13.036 Write completed with error (sct=0, sc=8) 00:21:13.036 starting I/O failed: -6 00:21:13.036 Write completed with error (sct=0, 
sc=8) 00:21:13.036 starting I/O failed: -6 00:21:13.036 Write completed with error (sct=0, sc=8) 00:21:13.036 starting I/O failed: -6 00:21:13.036 Write completed with error (sct=0, sc=8) 00:21:13.036 starting I/O failed: -6 00:21:13.036 Write completed with error (sct=0, sc=8) 00:21:13.036 starting I/O failed: -6 00:21:13.036 Write completed with error (sct=0, sc=8) 00:21:13.036 starting I/O failed: -6 00:21:13.036 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with error 
(sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with 
error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 [2024-11-26 21:02:03.449936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:13.037 NVMe io qpair process completion error 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with 
error (sct=0, sc=8) 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 [2024-11-26 21:02:03.451030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with error (sct=0, sc=8) 
00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write 
completed with error (sct=0, sc=8) 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 starting I/O failed: -6 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.037 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 [2024-11-26 21:02:03.452143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 
00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with 
error (sct=0, sc=8) 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 [2024-11-26 21:02:03.453343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 
starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 
00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, 
sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 [2024-11-26 21:02:03.455614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:13.038 NVMe io qpair process completion error 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 starting I/O failed: -6 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.038 Write completed with error (sct=0, sc=8) 00:21:13.039 Write completed with error (sct=0, sc=8) 00:21:13.039 Write completed with error (sct=0, sc=8) 00:21:13.039 starting I/O failed: -6 00:21:13.039 Write completed with error (sct=0, sc=8) 00:21:13.039 Write completed with error (sct=0, sc=8) 00:21:13.039 Write completed with error (sct=0, sc=8) 00:21:13.039 Write completed with error (sct=0, sc=8) 00:21:13.039 starting I/O failed: -6 00:21:13.039 Write completed with 
error (sct=0, sc=8) 00:21:13.039 Write completed with error (sct=0, sc=8) 00:21:13.039 Write completed with error (sct=0, sc=8) 00:21:13.039 Write completed with error (sct=0, sc=8) 00:21:13.039 starting I/O failed: -6 00:21:13.039 Write completed with error (sct=0, sc=8) 00:21:13.039 Write completed with error (sct=0, sc=8) 00:21:13.039 Write completed with error (sct=0, sc=8) 00:21:13.039 Write completed with error (sct=0, sc=8) 00:21:13.039 starting I/O failed: -6 00:21:13.039 Write completed with error (sct=0, sc=8) 00:21:13.039 Write completed with error (sct=0, sc=8) 00:21:13.039 Write completed with error (sct=0, sc=8) 00:21:13.039 Write completed with error (sct=0, sc=8) 00:21:13.039 starting I/O failed: -6 00:21:13.039 Write completed with error (sct=0, sc=8) 00:21:13.039 Write completed with error (sct=0, sc=8) 00:21:13.039 Write completed with error (sct=0, sc=8) 00:21:13.039 Write completed with error (sct=0, sc=8) 00:21:13.039 starting I/O failed: -6 00:21:13.039 Write completed with error (sct=0, sc=8) 00:21:13.039 Write completed with error (sct=0, sc=8) 00:21:13.039 Write completed with error (sct=0, sc=8) 00:21:13.039 Write completed with error (sct=0, sc=8) 00:21:13.039 starting I/O failed: -6 00:21:13.039 Write completed with error (sct=0, sc=8) 00:21:13.039 Write completed with error (sct=0, sc=8) 00:21:13.039 Write completed with error (sct=0, sc=8) 00:21:13.039 Write completed with error (sct=0, sc=8) 00:21:13.039 starting I/O failed: -6 00:21:13.039 Write completed with error (sct=0, sc=8) 00:21:13.039 Write completed with error (sct=0, sc=8) 00:21:13.039 [2024-11-26 21:02:03.456962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:13.039 starting I/O failed: -6 00:21:13.039 Write completed with error (sct=0, sc=8) 00:21:13.039 starting I/O failed: -6 00:21:13.039 Write completed with error (sct=0, sc=8) 00:21:13.039 Write 
completed with error (sct=0, sc=8) 00:21:13.039 Write completed with error (sct=0, sc=8) 00:21:13.039 starting I/O failed: -6 00:21:13.039 Write completed with error (sct=0, sc=8) 00:21:13.039 starting I/O failed: -6 00:21:13.039 Write completed with error (sct=0, sc=8) 00:21:13.039 Write completed with error (sct=0, sc=8) 00:21:13.039 Write completed with error (sct=0, sc=8) 00:21:13.039 starting I/O failed: -6 00:21:13.039 Write completed with error (sct=0, sc=8) 00:21:13.039 starting I/O failed: -6 00:21:13.039 Write completed with error (sct=0, sc=8) 00:21:13.039 Write completed with error (sct=0, sc=8) 00:21:13.039 Write completed with error (sct=0, sc=8) 00:21:13.039 starting I/O failed: -6 00:21:13.039 Write completed with error (sct=0, sc=8) 00:21:13.039 starting I/O failed: -6 00:21:13.039 Write completed with error (sct=0, sc=8) 00:21:13.039 Write completed with error (sct=0, sc=8) 00:21:13.039 Write completed with error (sct=0, sc=8) 00:21:13.039 starting I/O failed: -6 00:21:13.039 Write completed with error (sct=0, sc=8) 00:21:13.039 starting I/O failed: -6 00:21:13.039 Write completed with error (sct=0, sc=8) 00:21:13.039 Write completed with error (sct=0, sc=8) 00:21:13.039 Write completed with error (sct=0, sc=8) 00:21:13.039 starting I/O failed: -6 00:21:13.039 Write completed with error (sct=0, sc=8) 00:21:13.039 starting I/O failed: -6 00:21:13.039 Write completed with error (sct=0, sc=8) 00:21:13.039 Write completed with error (sct=0, sc=8) 00:21:13.039 Write completed with error (sct=0, sc=8) 00:21:13.039 starting I/O failed: -6 00:21:13.039 Write completed with error (sct=0, sc=8) 00:21:13.039 starting I/O failed: -6 00:21:13.039 Write completed with error (sct=0, sc=8) 00:21:13.039 Write completed with error (sct=0, sc=8) 00:21:13.039 Write completed with error (sct=0, sc=8) 00:21:13.039 starting I/O failed: -6 00:21:13.039 Write completed with error (sct=0, sc=8) 00:21:13.039 starting I/O failed: -6 00:21:13.039 Write completed with error 
(sct=0, sc=8)
00:21:13.039 Write completed with error (sct=0, sc=8)
00:21:13.039 starting I/O failed: -6
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted ...]
00:21:13.039 [2024-11-26 21:02:03.458125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated entries omitted ...]
00:21:13.039 [2024-11-26 21:02:03.459287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated entries omitted ...]
00:21:13.040 [2024-11-26 21:02:03.461765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:13.040 NVMe io qpair process completion error
[... repeated entries omitted ...]
00:21:13.042 [2024-11-26 21:02:03.467765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:13.042 NVMe io qpair process completion error
[... repeated entries omitted ...]
00:21:13.042 [2024-11-26 21:02:03.469441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated entries omitted ...]
00:21:13.042 [2024-11-26 21:02:03.470551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated entries omitted ...]
00:21:13.043 [2024-11-26 21:02:03.473558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:13.043 NVMe io qpair process completion error
00:21:13.043 Write completed with error (sct=0, sc=8)
00:21:13.043 starting I/O failed: -6
00:21:13.043 Write
completed with error (sct=0, sc=8) 00:21:13.043 Write completed with error (sct=0, sc=8) 00:21:13.043 Write completed with error (sct=0, sc=8) 00:21:13.043 starting I/O failed: -6 00:21:13.043 Write completed with error (sct=0, sc=8) 00:21:13.043 Write completed with error (sct=0, sc=8) 00:21:13.043 Write completed with error (sct=0, sc=8) 00:21:13.043 Write completed with error (sct=0, sc=8) 00:21:13.043 starting I/O failed: -6 00:21:13.043 Write completed with error (sct=0, sc=8) 00:21:13.043 Write completed with error (sct=0, sc=8) 00:21:13.043 Write completed with error (sct=0, sc=8) 00:21:13.043 Write completed with error (sct=0, sc=8) 00:21:13.043 starting I/O failed: -6 00:21:13.043 Write completed with error (sct=0, sc=8) 00:21:13.043 Write completed with error (sct=0, sc=8) 00:21:13.043 Write completed with error (sct=0, sc=8) 00:21:13.043 Write completed with error (sct=0, sc=8) 00:21:13.043 starting I/O failed: -6 00:21:13.043 Write completed with error (sct=0, sc=8) 00:21:13.043 Write completed with error (sct=0, sc=8) 00:21:13.043 Write completed with error (sct=0, sc=8) 00:21:13.043 Write completed with error (sct=0, sc=8) 00:21:13.043 starting I/O failed: -6 00:21:13.043 Write completed with error (sct=0, sc=8) 00:21:13.043 [2024-11-26 21:02:03.474861] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:13.043 Write completed with error (sct=0, sc=8) 00:21:13.043 starting I/O failed: -6 00:21:13.043 Write completed with error (sct=0, sc=8) 00:21:13.043 starting I/O failed: -6 00:21:13.043 Write completed with error (sct=0, sc=8) 00:21:13.043 Write completed with error (sct=0, sc=8) 00:21:13.043 Write completed with error (sct=0, sc=8) 00:21:13.043 starting I/O failed: -6 00:21:13.043 Write completed with error (sct=0, sc=8) 00:21:13.043 starting I/O failed: -6 00:21:13.043 Write completed with error (sct=0, sc=8) 00:21:13.043 Write 
completed with error (sct=0, sc=8) 00:21:13.043 Write completed with error (sct=0, sc=8) 00:21:13.043 starting I/O failed: -6 00:21:13.043 Write completed with error (sct=0, sc=8) 00:21:13.043 starting I/O failed: -6 00:21:13.043 Write completed with error (sct=0, sc=8) 00:21:13.043 Write completed with error (sct=0, sc=8) 00:21:13.043 Write completed with error (sct=0, sc=8) 00:21:13.043 starting I/O failed: -6 00:21:13.043 Write completed with error (sct=0, sc=8) 00:21:13.043 starting I/O failed: -6 00:21:13.043 Write completed with error (sct=0, sc=8) 00:21:13.043 Write completed with error (sct=0, sc=8) 00:21:13.043 Write completed with error (sct=0, sc=8) 00:21:13.043 starting I/O failed: -6 00:21:13.043 Write completed with error (sct=0, sc=8) 00:21:13.043 starting I/O failed: -6 00:21:13.043 Write completed with error (sct=0, sc=8) 00:21:13.043 Write completed with error (sct=0, sc=8) 00:21:13.043 Write completed with error (sct=0, sc=8) 00:21:13.043 starting I/O failed: -6 00:21:13.043 Write completed with error (sct=0, sc=8) 00:21:13.043 starting I/O failed: -6 00:21:13.043 Write completed with error (sct=0, sc=8) 00:21:13.043 Write completed with error (sct=0, sc=8) 00:21:13.043 Write completed with error (sct=0, sc=8) 00:21:13.043 starting I/O failed: -6 00:21:13.043 Write completed with error (sct=0, sc=8) 00:21:13.043 starting I/O failed: -6 00:21:13.043 Write completed with error (sct=0, sc=8) 00:21:13.043 Write completed with error (sct=0, sc=8) 00:21:13.043 Write completed with error (sct=0, sc=8) 00:21:13.043 starting I/O failed: -6 00:21:13.043 Write completed with error (sct=0, sc=8) 00:21:13.043 starting I/O failed: -6 00:21:13.043 Write completed with error (sct=0, sc=8) 00:21:13.043 Write completed with error (sct=0, sc=8) 00:21:13.043 Write completed with error (sct=0, sc=8) 00:21:13.043 starting I/O failed: -6 00:21:13.043 Write completed with error (sct=0, sc=8) 00:21:13.043 starting I/O failed: -6 00:21:13.043 Write completed with error 
(sct=0, sc=8) 00:21:13.043 Write completed with error (sct=0, sc=8) 00:21:13.043 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 [2024-11-26 21:02:03.475992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting 
I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write 
completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 [2024-11-26 21:02:03.477172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 
Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 
00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.044 starting I/O failed: -6 00:21:13.044 Write completed with error (sct=0, sc=8) 00:21:13.045 starting I/O failed: -6 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 starting I/O failed: 
-6 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 starting I/O failed: -6 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 starting I/O failed: -6 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 starting I/O failed: -6 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 starting I/O failed: -6 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 starting I/O failed: -6 00:21:13.045 [2024-11-26 21:02:03.479297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:13.045 NVMe io qpair process completion error 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 starting I/O failed: -6 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 starting I/O failed: -6 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 starting I/O failed: -6 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 starting I/O failed: -6 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 starting I/O failed: -6 00:21:13.045 Write 
completed with error (sct=0, sc=8) 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 starting I/O failed: -6 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 starting I/O failed: -6 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 starting I/O failed: -6 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 starting I/O failed: -6 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 starting I/O failed: -6 00:21:13.045 [2024-11-26 21:02:03.480644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 starting I/O failed: -6 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 starting I/O failed: -6 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 starting I/O failed: -6 00:21:13.045 Write completed with error (sct=0, sc=8) 
00:21:13.045 starting I/O failed: -6 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 starting I/O failed: -6 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 starting I/O failed: -6 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 starting I/O failed: -6 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 starting I/O failed: -6 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 starting I/O failed: -6 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 starting I/O failed: -6 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 starting I/O failed: -6 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 starting I/O failed: -6 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 starting I/O failed: -6 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 starting I/O failed: -6 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 starting I/O failed: -6 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 starting I/O failed: -6 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 starting I/O failed: -6 00:21:13.045 Write 
completed with error (sct=0, sc=8) 00:21:13.045 starting I/O failed: -6 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 starting I/O failed: -6 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 starting I/O failed: -6 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 starting I/O failed: -6 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 starting I/O failed: -6 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 [2024-11-26 21:02:03.481760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 starting I/O failed: -6 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 starting I/O failed: -6 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 starting I/O failed: -6 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 starting I/O failed: -6 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 starting I/O failed: -6 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 starting I/O failed: -6 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 starting I/O failed: -6 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 starting I/O failed: -6 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 starting I/O failed: -6 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 Write completed with error (sct=0, sc=8) 
00:21:13.045 starting I/O failed: -6 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 starting I/O failed: -6 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 starting I/O failed: -6 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 starting I/O failed: -6 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 starting I/O failed: -6 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 starting I/O failed: -6 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 starting I/O failed: -6 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 starting I/O failed: -6 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 starting I/O failed: -6 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 starting I/O failed: -6 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 starting I/O failed: -6 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 starting I/O failed: -6 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.045 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 
00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 [2024-11-26 21:02:03.482933] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, 
sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 Write completed with error (sct=0, sc=8) 00:21:13.046 starting I/O failed: -6 00:21:13.046 [2024-11-26 21:02:03.486268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:13.046 NVMe io qpair process completion error 00:21:13.046 Initializing NVMe Controllers 00:21:13.046 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:21:13.046 Controller IO queue size 128, less than required. 00:21:13.046 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:13.046 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:21:13.046 Controller IO queue size 128, less than required. 00:21:13.046 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:13.046 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:13.046 Controller IO queue size 128, less than required. 00:21:13.046 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:13.046 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:21:13.046 Controller IO queue size 128, less than required. 00:21:13.046 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:21:13.046 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:21:13.046 Controller IO queue size 128, less than required. 00:21:13.046 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:13.046 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:21:13.046 Controller IO queue size 128, less than required. 00:21:13.046 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:13.046 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:21:13.046 Controller IO queue size 128, less than required. 00:21:13.046 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:13.046 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:21:13.046 Controller IO queue size 128, less than required. 00:21:13.046 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:13.046 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:21:13.046 Controller IO queue size 128, less than required. 00:21:13.046 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:13.046 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:21:13.046 Controller IO queue size 128, less than required. 00:21:13.046 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:21:13.046 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0 00:21:13.046 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0 00:21:13.046 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:13.046 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0 00:21:13.046 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0 00:21:13.046 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0 00:21:13.047 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0 00:21:13.047 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0 00:21:13.047 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0 00:21:13.047 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0 00:21:13.047 Initialization complete. Launching workers. 
00:21:13.047 ======================================================== 00:21:13.047 Latency(us) 00:21:13.047 Device Information : IOPS MiB/s Average min max 00:21:13.047 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1592.44 68.43 80403.21 1106.66 131808.25 00:21:13.047 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1589.86 68.31 80573.60 1174.81 132011.68 00:21:13.047 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1639.53 70.45 78163.76 997.27 129767.86 00:21:13.047 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1731.77 74.41 74053.57 818.79 129487.39 00:21:13.047 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1638.67 70.41 78293.89 1130.83 151823.79 00:21:13.047 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1661.89 71.41 77239.17 898.30 154449.69 00:21:13.047 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1675.87 72.01 75670.84 871.83 127811.05 00:21:13.047 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1658.02 71.24 76521.26 1033.46 132973.10 00:21:13.047 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1587.92 68.23 79932.85 1143.94 135804.38 00:21:13.047 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1580.40 67.91 80355.91 842.28 128816.02 00:21:13.047 ======================================================== 00:21:13.047 Total : 16356.37 702.81 78061.59 818.79 154449.69 00:21:13.047 00:21:13.047 [2024-11-26 21:02:03.492052] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe4a2c0 is same with the state(6) to be set 00:21:13.047 [2024-11-26 21:02:03.492176] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe4a5f0 is same with the state(6) to be set 00:21:13.047 [2024-11-26 21:02:03.492315] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe4b720 is same with the state(6) to be set 00:21:13.047 [2024-11-26 21:02:03.492396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe499e0 is same with the state(6) to be set 00:21:13.047 [2024-11-26 21:02:03.492477] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe4ac50 is same with the state(6) to be set 00:21:13.047 [2024-11-26 21:02:03.492562] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe49d10 is same with the state(6) to be set 00:21:13.047 [2024-11-26 21:02:03.492645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe4bae0 is same with the state(6) to be set 00:21:13.047 [2024-11-26 21:02:03.492733] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe496b0 is same with the state(6) to be set 00:21:13.047 [2024-11-26 21:02:03.492813] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe4b900 is same with the state(6) to be set 00:21:13.047 [2024-11-26 21:02:03.492890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe4a920 is same with the state(6) to be set 00:21:13.047 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:21:13.047 21:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:21:13.998 21:02:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 4021457 00:21:13.998 21:02:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:21:13.998 21:02:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 4021457 00:21:13.998 21:02:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait 00:21:13.998 21:02:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:13.998 21:02:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:21:13.998 21:02:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:13.998 21:02:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 4021457 00:21:13.998 21:02:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:21:13.998 21:02:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:13.998 21:02:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:13.998 21:02:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:14.257 21:02:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:21:14.257 21:02:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:14.257 21:02:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:14.257 21:02:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:14.257 21:02:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:14.257 21:02:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:14.257 21:02:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:21:14.257 21:02:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:14.257 21:02:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:21:14.257 21:02:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:14.257 21:02:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:14.257 rmmod nvme_tcp 00:21:14.257 rmmod nvme_fabrics 00:21:14.257 rmmod nvme_keyring 00:21:14.257 21:02:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:14.257 21:02:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:21:14.257 21:02:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:21:14.257 21:02:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 4021279 ']' 00:21:14.257 21:02:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 4021279 00:21:14.257 21:02:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 4021279 ']' 00:21:14.257 21:02:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 4021279 00:21:14.257 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (4021279) - No such process 00:21:14.257 21:02:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 4021279 is not 
found' 00:21:14.257 Process with pid 4021279 is not found 00:21:14.257 21:02:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:14.257 21:02:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:14.257 21:02:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:14.257 21:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:21:14.257 21:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:21:14.257 21:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:14.257 21:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:21:14.258 21:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:14.258 21:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:14.258 21:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:14.258 21:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:14.258 21:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:16.158 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:16.158 00:21:16.158 real 0m9.785s 00:21:16.158 user 0m22.078s 00:21:16.158 sys 0m6.107s 00:21:16.158 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:21:16.158 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:16.158 ************************************ 00:21:16.158 END TEST nvmf_shutdown_tc4 00:21:16.158 ************************************ 00:21:16.158 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:21:16.158 00:21:16.158 real 0m37.142s 00:21:16.158 user 1m37.603s 00:21:16.158 sys 0m12.768s 00:21:16.158 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:16.158 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:16.158 ************************************ 00:21:16.158 END TEST nvmf_shutdown 00:21:16.158 ************************************ 00:21:16.416 21:02:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:21:16.416 21:02:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:16.416 21:02:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:16.416 21:02:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:16.416 ************************************ 00:21:16.416 START TEST nvmf_nsid 00:21:16.416 ************************************ 00:21:16.416 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:21:16.416 * Looking for test storage... 
00:21:16.416 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:16.417 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:16.417 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:21:16.417 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:16.417 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:16.417 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:16.417 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:16.417 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:16.417 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:21:16.417 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:21:16.417 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:21:16.417 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:21:16.417 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:21:16.417 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:21:16.417 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:21:16.417 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:16.417 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:21:16.417 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:21:16.417 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:16.417 
21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:16.417 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:21:16.417 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:21:16.417 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:16.417 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:21:16.417 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:21:16.417 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:21:16.417 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:21:16.417 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:16.417 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:21:16.417 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:21:16.417 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:16.417 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:16.417 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:21:16.417 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:16.417 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:16.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.417 --rc genhtml_branch_coverage=1 00:21:16.417 --rc genhtml_function_coverage=1 00:21:16.417 --rc genhtml_legend=1 00:21:16.417 --rc geninfo_all_blocks=1 00:21:16.417 --rc 
geninfo_unexecuted_blocks=1 00:21:16.417 00:21:16.417 ' 00:21:16.417 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:16.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.417 --rc genhtml_branch_coverage=1 00:21:16.417 --rc genhtml_function_coverage=1 00:21:16.417 --rc genhtml_legend=1 00:21:16.417 --rc geninfo_all_blocks=1 00:21:16.417 --rc geninfo_unexecuted_blocks=1 00:21:16.417 00:21:16.417 ' 00:21:16.417 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:16.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.417 --rc genhtml_branch_coverage=1 00:21:16.417 --rc genhtml_function_coverage=1 00:21:16.417 --rc genhtml_legend=1 00:21:16.417 --rc geninfo_all_blocks=1 00:21:16.417 --rc geninfo_unexecuted_blocks=1 00:21:16.417 00:21:16.417 ' 00:21:16.417 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:16.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.417 --rc genhtml_branch_coverage=1 00:21:16.417 --rc genhtml_function_coverage=1 00:21:16.417 --rc genhtml_legend=1 00:21:16.417 --rc geninfo_all_blocks=1 00:21:16.417 --rc geninfo_unexecuted_blocks=1 00:21:16.417 00:21:16.417 ' 00:21:16.417 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:16.417 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:21:16.417 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:16.417 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:16.417 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:16.417 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:21:16.417 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:16.417 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:16.417 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:16.417 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:16.417 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:16.417 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:16.417 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:16.417 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:16.417 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:16.417 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:16.417 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:16.417 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:16.417 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:16.417 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:21:16.417 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:16.417 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:16.417 21:02:07 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:16.417 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.417 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.418 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.418 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:21:16.418 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.418 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:21:16.418 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:16.418 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:16.418 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:16.418 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:16.418 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:16.418 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:16.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:16.418 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:16.418 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:16.418 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:16.418 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:21:16.418 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:21:16.418 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:21:16.418 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:21:16.418 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:21:16.418 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:21:16.418 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:16.418 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:16.418 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:16.418 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:16.418 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:16.418 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:16.418 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:21:16.418 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:16.418 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:16.418 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:16.418 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:21:16.418 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:18.947 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:18.947 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:21:18.947 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:18.947 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:18.947 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:18.947 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:18.947 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:18.947 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:21:18.947 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:18.947 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:21:18.947 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:21:18.947 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:21:18.947 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:21:18.947 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:21:18.947 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:21:18.947 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:18.947 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:18.947 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:18.947 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:18.947 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:18.947 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:18.947 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:18.947 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:18.947 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:18.947 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:18.947 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:18.947 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:18.947 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:18.947 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:18.947 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:21:18.947 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:18.947 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:18.947 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:18.947 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:18.948 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:18.948 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:18.948 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:18.948 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:18.948 21:02:09 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:18.948 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:21:18.948 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.149 ms 00:21:18.948 00:21:18.948 --- 10.0.0.2 ping statistics --- 00:21:18.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:18.948 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:18.948 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:18.948 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:21:18.948 00:21:18.948 --- 10.0.0.1 ping statistics --- 00:21:18.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:18.948 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:18.948 21:02:09 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=4024180 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 4024180 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 4024180 ']' 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:18.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:18.948 [2024-11-26 21:02:09.493095] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:21:18.948 [2024-11-26 21:02:09.493193] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:18.948 [2024-11-26 21:02:09.564761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:18.948 [2024-11-26 21:02:09.620770] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:18.948 [2024-11-26 21:02:09.620819] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:18.948 [2024-11-26 21:02:09.620841] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:18.948 [2024-11-26 21:02:09.620859] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:18.948 [2024-11-26 21:02:09.620875] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:18.948 [2024-11-26 21:02:09.621465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:21:18.948 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:18.949 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:18.949 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:18.949 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:18.949 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:18.949 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=4024226 00:21:18.949 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:21:18.949 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:21:18.949 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:21:18.949 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:21:18.949 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:18.949 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:18.949 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:18.949 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:18.949 
21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:18.949 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:18.949 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:18.949 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:18.949 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:18.949 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:21:18.949 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:21:18.949 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=ec8168d5-7814-4928-81bf-f346f823acab 00:21:18.949 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:21:18.949 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=4ba44deb-d2c3-40e1-bac8-043510c636f5 00:21:18.949 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:21:18.949 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=868b1f5f-426c-4c28-89ef-83e277c7c792 00:21:18.949 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:21:18.949 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.949 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:18.949 null0 00:21:18.949 null1 00:21:18.949 null2 00:21:18.949 [2024-11-26 21:02:09.795209] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:18.949 [2024-11-26 21:02:09.807180] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:21:18.949 [2024-11-26 21:02:09.807239] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4024226 ] 00:21:18.949 [2024-11-26 21:02:09.819468] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:18.949 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.949 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 4024226 /var/tmp/tgt2.sock 00:21:18.949 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 4024226 ']' 00:21:18.949 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:21:18.949 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:18.949 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:21:18.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:21:18.949 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:18.949 21:02:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:18.949 [2024-11-26 21:02:09.879434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:19.206 [2024-11-26 21:02:09.941457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:19.464 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:19.464 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:21:19.464 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:21:19.721 [2024-11-26 21:02:10.613417] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:19.721 [2024-11-26 21:02:10.629622] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:21:19.721 nvme0n1 nvme0n2 00:21:19.721 nvme1n1 00:21:19.980 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:21:19.980 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:21:19.980 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:20.547 21:02:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:21:20.547 21:02:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:21:20.547 21:02:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 
]] 00:21:20.547 21:02:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:21:20.547 21:02:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:21:20.547 21:02:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:21:20.547 21:02:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:21:20.547 21:02:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:20.547 21:02:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:20.547 21:02:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:20.547 21:02:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:21:20.547 21:02:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:21:20.547 21:02:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:21:21.480 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:21.480 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:21.480 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:21.480 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:21:21.480 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:21.480 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid ec8168d5-7814-4928-81bf-f346f823acab 00:21:21.480 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:21.480 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:21:21.480 21:02:12 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:21:21.480 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:21:21.480 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:21.480 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=ec8168d57814492881bff346f823acab 00:21:21.480 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo EC8168D57814492881BFF346F823ACAB 00:21:21.480 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ EC8168D57814492881BFF346F823ACAB == \E\C\8\1\6\8\D\5\7\8\1\4\4\9\2\8\8\1\B\F\F\3\4\6\F\8\2\3\A\C\A\B ]] 00:21:21.480 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:21:21.480 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:21.480 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:21.480 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:21:21.480 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:21.480 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:21:21.480 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:21.480 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 4ba44deb-d2c3-40e1-bac8-043510c636f5 00:21:21.480 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:21.481 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:21:21.481 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:21:21.481 
21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:21:21.481 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:21.481 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=4ba44debd2c340e1bac8043510c636f5 00:21:21.481 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 4BA44DEBD2C340E1BAC8043510C636F5 00:21:21.481 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 4BA44DEBD2C340E1BAC8043510C636F5 == \4\B\A\4\4\D\E\B\D\2\C\3\4\0\E\1\B\A\C\8\0\4\3\5\1\0\C\6\3\6\F\5 ]] 00:21:21.481 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:21:21.481 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:21.481 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:21.481 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:21:21.481 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:21.481 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:21:21.481 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:21.481 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 868b1f5f-426c-4c28-89ef-83e277c7c792 00:21:21.481 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:21.481 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:21:21.481 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:21:21.481 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 
00:21:21.481 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:21.740 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=868b1f5f426c4c2889ef83e277c7c792 00:21:21.740 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 868B1F5F426C4C2889EF83E277C7C792 00:21:21.740 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 868B1F5F426C4C2889EF83E277C7C792 == \8\6\8\B\1\F\5\F\4\2\6\C\4\C\2\8\8\9\E\F\8\3\E\2\7\7\C\7\C\7\9\2 ]] 00:21:21.740 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:21:21.740 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:21:21.740 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:21:21.740 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 4024226 00:21:21.740 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 4024226 ']' 00:21:21.740 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 4024226 00:21:21.740 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:21:21.740 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:21.740 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4024226 00:21:21.999 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:21.999 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:21.999 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4024226' 00:21:21.999 killing process with pid 4024226 00:21:21.999 21:02:12 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 4024226 00:21:21.999 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 4024226 00:21:22.257 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:21:22.257 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:22.257 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:21:22.257 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:22.257 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:21:22.257 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:22.257 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:22.257 rmmod nvme_tcp 00:21:22.257 rmmod nvme_fabrics 00:21:22.257 rmmod nvme_keyring 00:21:22.257 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:22.257 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:21:22.257 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:21:22.257 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 4024180 ']' 00:21:22.257 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 4024180 00:21:22.257 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 4024180 ']' 00:21:22.257 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 4024180 00:21:22.257 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:21:22.257 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:22.257 21:02:13 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4024180 00:21:22.515 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:22.515 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:22.515 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4024180' 00:21:22.515 killing process with pid 4024180 00:21:22.515 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 4024180 00:21:22.515 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 4024180 00:21:22.775 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:22.775 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:22.775 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:22.775 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:21:22.775 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:21:22.775 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:22.775 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:21:22.775 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:22.775 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:22.775 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:22.775 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:22.775 21:02:13 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:24.682 21:02:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:24.682 00:21:24.682 real 0m8.390s 00:21:24.682 user 0m8.469s 00:21:24.682 sys 0m2.567s 00:21:24.682 21:02:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:24.682 21:02:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:24.682 ************************************ 00:21:24.682 END TEST nvmf_nsid 00:21:24.682 ************************************ 00:21:24.682 21:02:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:21:24.682 00:21:24.682 real 11m57.322s 00:21:24.682 user 28m9.282s 00:21:24.682 sys 2m48.640s 00:21:24.682 21:02:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:24.682 21:02:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:24.682 ************************************ 00:21:24.682 END TEST nvmf_target_extra 00:21:24.682 ************************************ 00:21:24.682 21:02:15 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:24.682 21:02:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:24.682 21:02:15 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:24.682 21:02:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:24.682 ************************************ 00:21:24.682 START TEST nvmf_host 00:21:24.682 ************************************ 00:21:24.682 21:02:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:24.942 * Looking for test storage... 
00:21:24.942 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:21:24.942 21:02:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:24.942 21:02:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:21:24.942 21:02:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:24.942 21:02:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:24.942 21:02:15 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:24.942 21:02:15 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:24.942 21:02:15 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:24.942 21:02:15 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:21:24.942 21:02:15 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:21:24.942 21:02:15 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:21:24.942 21:02:15 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:21:24.942 21:02:15 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:21:24.942 21:02:15 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:21:24.942 21:02:15 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:21:24.942 21:02:15 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:24.942 21:02:15 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:21:24.942 21:02:15 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:21:24.942 21:02:15 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:24.942 21:02:15 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:24.942 21:02:15 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:21:24.942 21:02:15 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:21:24.942 21:02:15 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:24.942 21:02:15 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:21:24.942 21:02:15 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:21:24.942 21:02:15 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:21:24.942 21:02:15 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:21:24.942 21:02:15 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:24.942 21:02:15 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:21:24.942 21:02:15 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:21:24.942 21:02:15 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:24.942 21:02:15 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:24.942 21:02:15 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:21:24.942 21:02:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:24.942 21:02:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:24.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:24.942 --rc genhtml_branch_coverage=1 00:21:24.942 --rc genhtml_function_coverage=1 00:21:24.942 --rc genhtml_legend=1 00:21:24.942 --rc geninfo_all_blocks=1 00:21:24.942 --rc geninfo_unexecuted_blocks=1 00:21:24.942 00:21:24.942 ' 00:21:24.942 21:02:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:24.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:24.942 --rc genhtml_branch_coverage=1 00:21:24.942 --rc genhtml_function_coverage=1 00:21:24.942 --rc genhtml_legend=1 00:21:24.942 --rc 
geninfo_all_blocks=1 00:21:24.942 --rc geninfo_unexecuted_blocks=1 00:21:24.942 00:21:24.942 ' 00:21:24.942 21:02:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:24.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:24.942 --rc genhtml_branch_coverage=1 00:21:24.942 --rc genhtml_function_coverage=1 00:21:24.942 --rc genhtml_legend=1 00:21:24.942 --rc geninfo_all_blocks=1 00:21:24.942 --rc geninfo_unexecuted_blocks=1 00:21:24.942 00:21:24.942 ' 00:21:24.942 21:02:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:24.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:24.942 --rc genhtml_branch_coverage=1 00:21:24.943 --rc genhtml_function_coverage=1 00:21:24.943 --rc genhtml_legend=1 00:21:24.943 --rc geninfo_all_blocks=1 00:21:24.943 --rc geninfo_unexecuted_blocks=1 00:21:24.943 00:21:24.943 ' 00:21:24.943 21:02:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:24.943 21:02:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:21:24.943 21:02:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:24.943 21:02:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:24.943 21:02:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:24.943 21:02:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:24.943 21:02:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:24.943 21:02:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:24.943 21:02:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:24.943 21:02:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:24.943 21:02:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:24.943 21:02:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:21:24.943 21:02:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:24.943 21:02:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:24.943 21:02:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:24.943 21:02:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:24.943 21:02:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:24.943 21:02:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:24.943 21:02:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:24.943 21:02:15 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:24.943 21:02:15 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:24.943 21:02:15 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:24.943 21:02:15 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:24.943 21:02:15 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.943 21:02:15 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.943 21:02:15 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.943 21:02:15 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:21:24.943 21:02:15 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.943 21:02:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:21:24.943 21:02:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:24.943 21:02:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:24.943 21:02:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:24.943 21:02:15 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:24.943 21:02:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:24.943 21:02:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:24.943 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:24.943 21:02:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:24.943 21:02:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:24.943 21:02:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:24.943 21:02:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:21:24.943 21:02:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:21:24.943 21:02:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:21:24.943 21:02:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:24.943 21:02:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:24.943 21:02:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:24.943 21:02:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.943 ************************************ 00:21:24.943 START TEST nvmf_multicontroller 00:21:24.943 ************************************ 00:21:24.943 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:24.943 * Looking for test storage... 
00:21:24.943 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:24.943 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:24.943 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:21:24.943 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:25.203 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:25.203 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:25.203 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:25.203 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:25.203 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:21:25.203 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:21:25.203 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:21:25.203 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:21:25.203 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:21:25.203 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:21:25.203 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:21:25.203 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:25.203 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:21:25.203 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:21:25.203 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:21:25.203 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:25.203 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:21:25.203 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:21:25.203 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:25.203 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:21:25.203 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:21:25.203 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:21:25.203 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:21:25.203 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:25.203 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:21:25.203 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:21:25.203 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:25.203 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:25.203 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:21:25.203 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:25.203 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:25.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:25.203 --rc genhtml_branch_coverage=1 00:21:25.203 --rc genhtml_function_coverage=1 
00:21:25.203 --rc genhtml_legend=1 00:21:25.203 --rc geninfo_all_blocks=1 00:21:25.203 --rc geninfo_unexecuted_blocks=1 00:21:25.203 00:21:25.203 ' 00:21:25.203 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:25.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:25.203 --rc genhtml_branch_coverage=1 00:21:25.203 --rc genhtml_function_coverage=1 00:21:25.203 --rc genhtml_legend=1 00:21:25.203 --rc geninfo_all_blocks=1 00:21:25.203 --rc geninfo_unexecuted_blocks=1 00:21:25.203 00:21:25.203 ' 00:21:25.203 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:25.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:25.203 --rc genhtml_branch_coverage=1 00:21:25.203 --rc genhtml_function_coverage=1 00:21:25.203 --rc genhtml_legend=1 00:21:25.203 --rc geninfo_all_blocks=1 00:21:25.203 --rc geninfo_unexecuted_blocks=1 00:21:25.203 00:21:25.203 ' 00:21:25.203 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:25.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:25.203 --rc genhtml_branch_coverage=1 00:21:25.203 --rc genhtml_function_coverage=1 00:21:25.203 --rc genhtml_legend=1 00:21:25.203 --rc geninfo_all_blocks=1 00:21:25.203 --rc geninfo_unexecuted_blocks=1 00:21:25.203 00:21:25.203 ' 00:21:25.203 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:25.203 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:21:25.203 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:25.203 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:25.203 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:21:25.203 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:25.203 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:25.203 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:25.203 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:25.203 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:25.203 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:25.203 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:25.203 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:25.203 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:25.203 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:25.203 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:25.203 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:25.203 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:25.203 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:25.203 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:21:25.203 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:21:25.203 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:25.203 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:25.203 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.203 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.203 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.203 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:21:25.203 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.203 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:21:25.203 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:25.203 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:25.203 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:25.203 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:25.203 21:02:15 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:25.203 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:25.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:25.203 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:25.203 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:25.203 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:25.203 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:25.203 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:25.204 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:21:25.204 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:21:25.204 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:25.204 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:21:25.204 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:21:25.204 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:25.204 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:25.204 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:25.204 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:25.204 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:21:25.204 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:25.204 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:25.204 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:25.204 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:25.204 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:25.204 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:21:25.204 21:02:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:27.106 21:02:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:27.106 21:02:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:21:27.106 21:02:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:27.106 21:02:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:27.106 21:02:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:27.106 21:02:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:27.106 21:02:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:27.106 21:02:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:21:27.106 21:02:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:27.106 21:02:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:21:27.106 21:02:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:21:27.106 21:02:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:21:27.106 21:02:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:21:27.106 21:02:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:21:27.106 21:02:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:21:27.106 21:02:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:27.106 21:02:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:27.106 21:02:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:27.106 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:27.106 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:27.106 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:27.106 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:27.106 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:27.106 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:27.106 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:27.106 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:27.106 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:27.106 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:27.106 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:27.106 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:27.106 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:27.106 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:27.106 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:27.106 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:27.106 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:27.106 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:27.106 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:27.106 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:27.106 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:27.106 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:27.106 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:27.106 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:27.106 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:27.106 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:27.106 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:27.106 21:02:18 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:27.106 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:27.106 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:27.106 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:27.106 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:27.106 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:27.106 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:27.106 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:27.106 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:27.106 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:27.106 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:27.106 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:27.106 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:27.106 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:27.106 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:27.106 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:27.106 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:27.106 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:21:27.106 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:27.106 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:27.106 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:27.106 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:27.106 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:27.106 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:27.106 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:27.106 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:27.106 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:27.106 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:27.106 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:21:27.106 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:27.106 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:27.106 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:27.106 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:27.106 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:27.106 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:27.106 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:27.106 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:27.106 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:27.106 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:27.106 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:27.106 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:27.106 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:27.106 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:27.106 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:27.106 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:27.106 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:27.106 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:27.365 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:27.365 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:27.365 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:27.365 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:27.365 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:27.365 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:27.365 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:27.365 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:27.365 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:27.365 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.305 ms 00:21:27.365 00:21:27.365 --- 10.0.0.2 ping statistics --- 00:21:27.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:27.365 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:21:27.365 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:27.365 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:27.365 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:21:27.365 00:21:27.365 --- 10.0.0.1 ping statistics --- 00:21:27.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:27.365 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:21:27.365 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:27.365 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:21:27.365 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:27.365 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:27.365 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:27.365 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:27.365 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:27.365 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:27.365 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:27.366 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:21:27.366 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:27.366 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:27.366 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:27.366 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=4026666 00:21:27.366 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:27.366 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 4026666 00:21:27.366 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 4026666 ']' 00:21:27.366 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:27.366 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:27.366 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:27.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:27.366 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:27.366 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:27.366 [2024-11-26 21:02:18.212504] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:21:27.366 [2024-11-26 21:02:18.212587] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:27.366 [2024-11-26 21:02:18.289923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:27.624 [2024-11-26 21:02:18.353211] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:27.624 [2024-11-26 21:02:18.353281] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:27.624 [2024-11-26 21:02:18.353297] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:27.624 [2024-11-26 21:02:18.353311] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:27.624 [2024-11-26 21:02:18.353322] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:27.624 [2024-11-26 21:02:18.354856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:27.624 [2024-11-26 21:02:18.354922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:27.624 [2024-11-26 21:02:18.354927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:27.624 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:27.624 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:21:27.624 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:27.624 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:27.624 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:27.624 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:27.624 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:27.624 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.624 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:27.624 [2024-11-26 21:02:18.492527] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:27.624 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.624 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:27.624 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.624 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:27.624 Malloc0 00:21:27.624 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.624 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:27.624 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.624 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:27.624 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.624 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:27.624 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.624 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:27.624 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.624 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:27.624 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.624 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:27.624 [2024-11-26 
21:02:18.550832] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:27.624 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.624 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:27.624 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.624 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:27.625 [2024-11-26 21:02:18.558726] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:27.883 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.883 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:27.883 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.883 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:27.883 Malloc1 00:21:27.883 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.883 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:21:27.883 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.883 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:27.883 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.883 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:21:27.883 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.883 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:27.883 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.883 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:27.883 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.883 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:27.883 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.883 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:21:27.883 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.883 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:27.883 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.883 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=4026772 00:21:27.883 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:21:27.883 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' 
SIGINT SIGTERM EXIT 00:21:27.883 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 4026772 /var/tmp/bdevperf.sock 00:21:27.883 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 4026772 ']' 00:21:27.883 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:27.883 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:27.883 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:27.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:27.883 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:27.883 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:28.142 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:28.142 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:21:28.142 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:21:28.142 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.142 21:02:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:28.401 NVMe0n1 00:21:28.401 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.401 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:28.401 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:21:28.401 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.401 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:28.401 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.401 1 00:21:28.401 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:28.401 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:28.401 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:28.401 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:28.401 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:28.401 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:28.401 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:28.401 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:28.401 21:02:19 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.401 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:28.401 request: 00:21:28.401 { 00:21:28.401 "name": "NVMe0", 00:21:28.401 "trtype": "tcp", 00:21:28.401 "traddr": "10.0.0.2", 00:21:28.401 "adrfam": "ipv4", 00:21:28.401 "trsvcid": "4420", 00:21:28.401 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:28.401 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:21:28.401 "hostaddr": "10.0.0.1", 00:21:28.401 "prchk_reftag": false, 00:21:28.401 "prchk_guard": false, 00:21:28.401 "hdgst": false, 00:21:28.401 "ddgst": false, 00:21:28.401 "allow_unrecognized_csi": false, 00:21:28.401 "method": "bdev_nvme_attach_controller", 00:21:28.401 "req_id": 1 00:21:28.401 } 00:21:28.401 Got JSON-RPC error response 00:21:28.401 response: 00:21:28.401 { 00:21:28.401 "code": -114, 00:21:28.401 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:28.401 } 00:21:28.401 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:28.401 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:28.401 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:28.401 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:28.401 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:28.401 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:28.401 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:28.401 21:02:19 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:28.401 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:28.401 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:28.401 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:28.401 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:28.401 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:28.401 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.402 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:28.402 request: 00:21:28.402 { 00:21:28.402 "name": "NVMe0", 00:21:28.402 "trtype": "tcp", 00:21:28.402 "traddr": "10.0.0.2", 00:21:28.402 "adrfam": "ipv4", 00:21:28.402 "trsvcid": "4420", 00:21:28.402 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:28.402 "hostaddr": "10.0.0.1", 00:21:28.402 "prchk_reftag": false, 00:21:28.402 "prchk_guard": false, 00:21:28.402 "hdgst": false, 00:21:28.402 "ddgst": false, 00:21:28.402 "allow_unrecognized_csi": false, 00:21:28.402 "method": "bdev_nvme_attach_controller", 00:21:28.402 "req_id": 1 00:21:28.402 } 00:21:28.402 Got JSON-RPC error response 00:21:28.402 response: 00:21:28.402 { 00:21:28.402 "code": -114, 00:21:28.402 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:28.402 } 00:21:28.402 21:02:19 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:28.402 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:28.402 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:28.402 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:28.402 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:28.402 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:28.402 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:28.402 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:28.402 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:28.402 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:28.402 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:28.402 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:28.402 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:28.402 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.402 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:28.402 request: 00:21:28.402 { 00:21:28.402 "name": "NVMe0", 00:21:28.402 "trtype": "tcp", 00:21:28.402 "traddr": "10.0.0.2", 00:21:28.402 "adrfam": "ipv4", 00:21:28.402 "trsvcid": "4420", 00:21:28.402 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:28.402 "hostaddr": "10.0.0.1", 00:21:28.402 "prchk_reftag": false, 00:21:28.402 "prchk_guard": false, 00:21:28.402 "hdgst": false, 00:21:28.402 "ddgst": false, 00:21:28.402 "multipath": "disable", 00:21:28.402 "allow_unrecognized_csi": false, 00:21:28.402 "method": "bdev_nvme_attach_controller", 00:21:28.402 "req_id": 1 00:21:28.402 } 00:21:28.402 Got JSON-RPC error response 00:21:28.402 response: 00:21:28.402 { 00:21:28.402 "code": -114, 00:21:28.402 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:21:28.402 } 00:21:28.402 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:28.402 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:28.402 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:28.402 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:28.402 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:28.402 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:28.402 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:28.402 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:28.402 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:28.402 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:28.402 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:28.402 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:28.402 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:28.402 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.402 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:28.402 request: 00:21:28.402 { 00:21:28.402 "name": "NVMe0", 00:21:28.402 "trtype": "tcp", 00:21:28.402 "traddr": "10.0.0.2", 00:21:28.402 "adrfam": "ipv4", 00:21:28.402 "trsvcid": "4420", 00:21:28.402 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:28.402 "hostaddr": "10.0.0.1", 00:21:28.402 "prchk_reftag": false, 00:21:28.402 "prchk_guard": false, 00:21:28.402 "hdgst": false, 00:21:28.402 "ddgst": false, 00:21:28.402 "multipath": "failover", 00:21:28.402 "allow_unrecognized_csi": false, 00:21:28.402 "method": "bdev_nvme_attach_controller", 00:21:28.402 "req_id": 1 00:21:28.402 } 00:21:28.402 Got JSON-RPC error response 00:21:28.402 response: 00:21:28.402 { 00:21:28.402 "code": -114, 00:21:28.402 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:28.402 } 00:21:28.402 21:02:19 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:28.402 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:28.402 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:28.402 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:28.402 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:28.402 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:28.402 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.402 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:28.658 NVMe0n1 00:21:28.658 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.658 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:28.658 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.658 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:28.658 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.658 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:21:28.658 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.658 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:28.916 00:21:28.916 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.916 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:28.916 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:21:28.916 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.916 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:28.916 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.916 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:21:28.916 21:02:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:29.850 { 00:21:29.850 "results": [ 00:21:29.850 { 00:21:29.850 "job": "NVMe0n1", 00:21:29.850 "core_mask": "0x1", 00:21:29.850 "workload": "write", 00:21:29.850 "status": "finished", 00:21:29.850 "queue_depth": 128, 00:21:29.850 "io_size": 4096, 00:21:29.850 "runtime": 1.009416, 00:21:29.850 "iops": 18371.01848989911, 00:21:29.850 "mibps": 71.7617909761684, 00:21:29.850 "io_failed": 0, 00:21:29.850 "io_timeout": 0, 00:21:29.850 "avg_latency_us": 6952.910588310485, 00:21:29.850 "min_latency_us": 4223.431111111111, 00:21:29.850 "max_latency_us": 12281.931851851852 00:21:29.850 } 00:21:29.850 ], 00:21:29.850 "core_count": 1 00:21:29.850 } 00:21:29.850 21:02:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_nvme_detach_controller NVMe1 00:21:29.850 21:02:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.850 21:02:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:30.107 21:02:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.108 21:02:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:21:30.108 21:02:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 4026772 00:21:30.108 21:02:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 4026772 ']' 00:21:30.108 21:02:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 4026772 00:21:30.108 21:02:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:21:30.108 21:02:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:30.108 21:02:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4026772 00:21:30.108 21:02:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:30.108 21:02:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:30.108 21:02:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4026772' 00:21:30.108 killing process with pid 4026772 00:21:30.108 21:02:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 4026772 00:21:30.108 21:02:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 4026772 00:21:30.365 21:02:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:21:30.365 21:02:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.365 21:02:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:30.365 21:02:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.365 21:02:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:30.365 21:02:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.365 21:02:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:30.365 21:02:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.365 21:02:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:21:30.365 21:02:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:30.365 21:02:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:21:30.365 21:02:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:21:30.365 21:02:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:21:30.365 21:02:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:21:30.365 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:30.365 [2024-11-26 21:02:18.663952] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:21:30.365 [2024-11-26 21:02:18.664089] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4026772 ] 00:21:30.365 [2024-11-26 21:02:18.733863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:30.365 [2024-11-26 21:02:18.792704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:30.365 [2024-11-26 21:02:19.600462] bdev.c:4762:bdev_name_add: *ERROR*: Bdev name c234801e-b467-47a3-b96c-82aa14a24a3d already exists 00:21:30.365 [2024-11-26 21:02:19.600504] bdev.c:7962:bdev_register: *ERROR*: Unable to add uuid:c234801e-b467-47a3-b96c-82aa14a24a3d alias for bdev NVMe1n1 00:21:30.365 [2024-11-26 21:02:19.600526] bdev_nvme.c:4659:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:21:30.365 Running I/O for 1 seconds... 00:21:30.365 18306.00 IOPS, 71.51 MiB/s 00:21:30.365 Latency(us) 00:21:30.365 [2024-11-26T20:02:21.303Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:30.365 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:21:30.365 NVMe0n1 : 1.01 18371.02 71.76 0.00 0.00 6952.91 4223.43 12281.93 00:21:30.365 [2024-11-26T20:02:21.303Z] =================================================================================================================== 00:21:30.365 [2024-11-26T20:02:21.303Z] Total : 18371.02 71.76 0.00 0.00 6952.91 4223.43 12281.93 00:21:30.365 Received shutdown signal, test time was about 1.000000 seconds 00:21:30.365 00:21:30.365 Latency(us) 00:21:30.365 [2024-11-26T20:02:21.304Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:30.366 [2024-11-26T20:02:21.304Z] =================================================================================================================== 00:21:30.366 [2024-11-26T20:02:21.304Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:21:30.366 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:30.366 21:02:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:30.366 21:02:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:21:30.366 21:02:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:21:30.366 21:02:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:30.366 21:02:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:21:30.366 21:02:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:30.366 21:02:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:21:30.366 21:02:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:30.366 21:02:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:30.366 rmmod nvme_tcp 00:21:30.366 rmmod nvme_fabrics 00:21:30.366 rmmod nvme_keyring 00:21:30.366 21:02:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:30.366 21:02:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:21:30.366 21:02:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:21:30.366 21:02:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 4026666 ']' 00:21:30.366 21:02:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 4026666 00:21:30.366 21:02:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 4026666 ']' 00:21:30.366 21:02:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 4026666 
00:21:30.366 21:02:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:21:30.366 21:02:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:30.366 21:02:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4026666 00:21:30.366 21:02:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:30.366 21:02:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:30.366 21:02:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4026666' 00:21:30.366 killing process with pid 4026666 00:21:30.366 21:02:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 4026666 00:21:30.366 21:02:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 4026666 00:21:30.624 21:02:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:30.624 21:02:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:30.624 21:02:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:30.624 21:02:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:21:30.624 21:02:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:21:30.624 21:02:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:30.624 21:02:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:21:30.624 21:02:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:30.624 21:02:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:21:30.624 21:02:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:30.624 21:02:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:30.624 21:02:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:33.154 21:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:33.154 00:21:33.154 real 0m7.742s 00:21:33.154 user 0m12.476s 00:21:33.154 sys 0m2.380s 00:21:33.154 21:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:33.154 21:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:33.154 ************************************ 00:21:33.154 END TEST nvmf_multicontroller 00:21:33.154 ************************************ 00:21:33.154 21:02:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:33.154 21:02:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:33.154 21:02:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:33.154 21:02:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:33.154 ************************************ 00:21:33.154 START TEST nvmf_aer 00:21:33.154 ************************************ 00:21:33.154 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:33.154 * Looking for test storage... 
00:21:33.154 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:33.154 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:33.154 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:21:33.155 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:33.155 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:33.155 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:33.155 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:33.155 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:33.155 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:21:33.155 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:21:33.155 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:21:33.155 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:21:33.155 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:21:33.155 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:21:33.155 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:21:33.155 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:33.155 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:21:33.155 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:21:33.155 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:33.155 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:33.155 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:21:33.155 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:21:33.155 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:33.155 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:21:33.155 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:21:33.155 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:21:33.155 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:21:33.155 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:33.155 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:21:33.155 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:21:33.155 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:33.155 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:33.155 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:21:33.155 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:33.155 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:33.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:33.155 --rc genhtml_branch_coverage=1 00:21:33.155 --rc genhtml_function_coverage=1 00:21:33.155 --rc genhtml_legend=1 00:21:33.155 --rc geninfo_all_blocks=1 00:21:33.155 --rc geninfo_unexecuted_blocks=1 00:21:33.155 00:21:33.155 ' 00:21:33.155 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:33.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:33.155 --rc 
genhtml_branch_coverage=1 00:21:33.155 --rc genhtml_function_coverage=1 00:21:33.155 --rc genhtml_legend=1 00:21:33.155 --rc geninfo_all_blocks=1 00:21:33.155 --rc geninfo_unexecuted_blocks=1 00:21:33.155 00:21:33.155 ' 00:21:33.155 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:33.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:33.155 --rc genhtml_branch_coverage=1 00:21:33.155 --rc genhtml_function_coverage=1 00:21:33.155 --rc genhtml_legend=1 00:21:33.155 --rc geninfo_all_blocks=1 00:21:33.155 --rc geninfo_unexecuted_blocks=1 00:21:33.155 00:21:33.155 ' 00:21:33.155 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:33.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:33.155 --rc genhtml_branch_coverage=1 00:21:33.155 --rc genhtml_function_coverage=1 00:21:33.155 --rc genhtml_legend=1 00:21:33.155 --rc geninfo_all_blocks=1 00:21:33.155 --rc geninfo_unexecuted_blocks=1 00:21:33.155 00:21:33.155 ' 00:21:33.155 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:33.155 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:21:33.155 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:33.155 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:33.155 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:33.155 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:33.155 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:33.155 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:33.155 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:33.155 21:02:23 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:33.155 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:33.155 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:33.155 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:33.155 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:33.155 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:33.155 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:33.155 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:33.155 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:33.155 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:33.155 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:21:33.155 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:33.155 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:33.155 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:33.155 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.155 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.156 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.156 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:21:33.156 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.156 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:21:33.156 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:33.156 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:33.156 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:33.156 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:33.156 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:33.156 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:33.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:33.156 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:33.156 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:33.156 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:33.156 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:21:33.156 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:33.156 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:33.156 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:33.156 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:33.156 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:33.156 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:33.156 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:33.156 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:33.156 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:33.156 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:33.156 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:21:33.156 21:02:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:35.058 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:35.058 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:35.058 21:02:25 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:35.058 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:35.058 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:35.058 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:35.059 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:35.059 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:35.059 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:35.059 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:35.059 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:35.059 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:35.059 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:21:35.059 00:21:35.059 --- 10.0.0.2 ping statistics --- 00:21:35.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:35.059 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:21:35.059 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:35.059 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:35.059 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:21:35.059 00:21:35.059 --- 10.0.0.1 ping statistics --- 00:21:35.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:35.059 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:21:35.059 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:35.059 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:21:35.059 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:35.059 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:35.059 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:35.059 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:35.059 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:35.059 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:35.059 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:35.059 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:21:35.059 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:35.059 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:35.059 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:21:35.059 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=4029027 00:21:35.059 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:35.059 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 4029027 00:21:35.059 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 4029027 ']' 00:21:35.059 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:35.059 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:35.059 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:35.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:35.059 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:35.059 21:02:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:35.059 [2024-11-26 21:02:25.980949] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:21:35.059 [2024-11-26 21:02:25.981040] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:35.317 [2024-11-26 21:02:26.062151] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:35.317 [2024-11-26 21:02:26.127617] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:35.317 [2024-11-26 21:02:26.127694] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:35.317 [2024-11-26 21:02:26.127721] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:35.317 [2024-11-26 21:02:26.127741] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:35.317 [2024-11-26 21:02:26.127757] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:35.317 [2024-11-26 21:02:26.129382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:35.317 [2024-11-26 21:02:26.129441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:35.317 [2024-11-26 21:02:26.129445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:35.317 [2024-11-26 21:02:26.129419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:35.317 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:35.317 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:21:35.317 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:35.318 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:35.318 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:35.576 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:35.576 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:35.576 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.576 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:35.576 [2024-11-26 21:02:26.283082] 
tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:35.576 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.576 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:21:35.576 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.576 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:35.576 Malloc0 00:21:35.576 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.576 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:21:35.576 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.576 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:35.576 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.576 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:35.576 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.576 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:35.576 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.576 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:35.576 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.576 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:35.576 [2024-11-26 21:02:26.350199] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:21:35.576 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.576 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:21:35.576 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.576 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:35.576 [ 00:21:35.576 { 00:21:35.576 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:35.576 "subtype": "Discovery", 00:21:35.576 "listen_addresses": [], 00:21:35.576 "allow_any_host": true, 00:21:35.576 "hosts": [] 00:21:35.576 }, 00:21:35.576 { 00:21:35.576 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:35.576 "subtype": "NVMe", 00:21:35.576 "listen_addresses": [ 00:21:35.576 { 00:21:35.576 "trtype": "TCP", 00:21:35.576 "adrfam": "IPv4", 00:21:35.576 "traddr": "10.0.0.2", 00:21:35.576 "trsvcid": "4420" 00:21:35.576 } 00:21:35.576 ], 00:21:35.576 "allow_any_host": true, 00:21:35.576 "hosts": [], 00:21:35.576 "serial_number": "SPDK00000000000001", 00:21:35.576 "model_number": "SPDK bdev Controller", 00:21:35.576 "max_namespaces": 2, 00:21:35.576 "min_cntlid": 1, 00:21:35.576 "max_cntlid": 65519, 00:21:35.576 "namespaces": [ 00:21:35.576 { 00:21:35.576 "nsid": 1, 00:21:35.576 "bdev_name": "Malloc0", 00:21:35.576 "name": "Malloc0", 00:21:35.576 "nguid": "ACC02846963342CCBF63F9D1D94B21F9", 00:21:35.576 "uuid": "acc02846-9633-42cc-bf63-f9d1d94b21f9" 00:21:35.576 } 00:21:35.576 ] 00:21:35.576 } 00:21:35.576 ] 00:21:35.576 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.576 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:35.576 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:21:35.576 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=4029062 00:21:35.576 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:21:35.576 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:21:35.576 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:21:35.576 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:35.576 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:21:35.576 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:21:35.576 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:21:35.576 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:35.576 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:21:35.576 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:21:35.576 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:21:35.835 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:35.835 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:21:35.835 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:21:35.835 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:21:35.835 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.835 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:35.835 Malloc1 00:21:35.835 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.835 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:21:35.835 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.835 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:35.835 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.835 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:21:35.835 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.835 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:35.835 [ 00:21:35.835 { 00:21:35.835 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:35.835 "subtype": "Discovery", 00:21:35.835 "listen_addresses": [], 00:21:35.835 "allow_any_host": true, 00:21:35.835 "hosts": [] 00:21:35.835 }, 00:21:35.835 { 00:21:35.835 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:35.835 "subtype": "NVMe", 00:21:35.835 "listen_addresses": [ 00:21:35.835 { 00:21:35.835 "trtype": "TCP", 00:21:35.835 "adrfam": "IPv4", 00:21:35.835 "traddr": "10.0.0.2", 00:21:35.835 "trsvcid": "4420" 00:21:35.835 } 00:21:35.835 ], 00:21:35.835 "allow_any_host": true, 00:21:35.835 "hosts": [], 00:21:35.835 "serial_number": "SPDK00000000000001", 00:21:35.835 "model_number": 
"SPDK bdev Controller", 00:21:35.835 "max_namespaces": 2, 00:21:35.835 "min_cntlid": 1, 00:21:35.835 "max_cntlid": 65519, 00:21:35.835 "namespaces": [ 00:21:35.835 { 00:21:35.835 "nsid": 1, 00:21:35.835 "bdev_name": "Malloc0", 00:21:35.835 "name": "Malloc0", 00:21:35.835 "nguid": "ACC02846963342CCBF63F9D1D94B21F9", 00:21:35.835 "uuid": "acc02846-9633-42cc-bf63-f9d1d94b21f9" 00:21:35.835 }, 00:21:35.835 { 00:21:35.835 "nsid": 2, 00:21:35.835 "bdev_name": "Malloc1", 00:21:35.835 "name": "Malloc1", 00:21:35.835 "nguid": "80659C489912482B891386E8665CE8B4", 00:21:35.835 "uuid": "80659c48-9912-482b-8913-86e8665ce8b4" 00:21:35.835 } 00:21:35.835 ] 00:21:35.835 } 00:21:35.835 ] 00:21:35.835 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.835 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 4029062 00:21:35.835 Asynchronous Event Request test 00:21:35.835 Attaching to 10.0.0.2 00:21:35.835 Attached to 10.0.0.2 00:21:35.835 Registering asynchronous event callbacks... 00:21:35.835 Starting namespace attribute notice tests for all controllers... 00:21:35.835 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:35.835 aer_cb - Changed Namespace 00:21:35.835 Cleaning up... 
00:21:35.835 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:35.835 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.835 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:35.835 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.835 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:21:35.835 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.835 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:35.835 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.835 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:35.835 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.835 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:35.835 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.835 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:21:35.835 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:21:35.835 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:35.835 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:21:35.835 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:35.835 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:21:35.835 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:35.835 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:35.835 rmmod nvme_tcp 
00:21:35.835 rmmod nvme_fabrics 00:21:35.835 rmmod nvme_keyring 00:21:35.835 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:36.094 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:21:36.094 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:21:36.094 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 4029027 ']' 00:21:36.094 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 4029027 00:21:36.094 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 4029027 ']' 00:21:36.094 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 4029027 00:21:36.094 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:21:36.094 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:36.094 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4029027 00:21:36.094 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:36.094 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:36.094 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4029027' 00:21:36.094 killing process with pid 4029027 00:21:36.094 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 4029027 00:21:36.094 21:02:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 4029027 00:21:36.353 21:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:36.353 21:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:36.353 21:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:36.353 21:02:27 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:21:36.353 21:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:21:36.353 21:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:36.353 21:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:21:36.353 21:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:36.353 21:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:36.353 21:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:36.353 21:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:36.353 21:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:38.258 21:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:38.258 00:21:38.258 real 0m5.530s 00:21:38.258 user 0m4.496s 00:21:38.258 sys 0m1.976s 00:21:38.258 21:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:38.258 21:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:38.258 ************************************ 00:21:38.258 END TEST nvmf_aer 00:21:38.258 ************************************ 00:21:38.258 21:02:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:38.258 21:02:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:38.258 21:02:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:38.258 21:02:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:38.258 ************************************ 00:21:38.258 START TEST nvmf_async_init 
00:21:38.258 ************************************ 00:21:38.258 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:38.517 * Looking for test storage... 00:21:38.517 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:38.517 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:38.517 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:21:38.517 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:38.517 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:38.517 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:38.517 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:38.517 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:38.517 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:21:38.517 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:21:38.517 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:21:38.517 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:21:38.517 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:21:38.517 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:21:38.517 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:21:38.517 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:38.517 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init 
-- scripts/common.sh@344 -- # case "$op" in 00:21:38.517 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:21:38.517 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:38.517 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:38.517 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:21:38.517 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:21:38.517 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:38.517 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:21:38.517 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:21:38.517 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:21:38.517 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:21:38.517 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:38.517 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:21:38.517 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:21:38.517 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:38.517 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:38.517 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:21:38.517 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:38.517 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:38.517 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:21:38.517 --rc genhtml_branch_coverage=1 00:21:38.517 --rc genhtml_function_coverage=1 00:21:38.517 --rc genhtml_legend=1 00:21:38.517 --rc geninfo_all_blocks=1 00:21:38.517 --rc geninfo_unexecuted_blocks=1 00:21:38.517 00:21:38.517 ' 00:21:38.517 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:38.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:38.517 --rc genhtml_branch_coverage=1 00:21:38.517 --rc genhtml_function_coverage=1 00:21:38.517 --rc genhtml_legend=1 00:21:38.517 --rc geninfo_all_blocks=1 00:21:38.517 --rc geninfo_unexecuted_blocks=1 00:21:38.517 00:21:38.517 ' 00:21:38.517 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:38.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:38.517 --rc genhtml_branch_coverage=1 00:21:38.517 --rc genhtml_function_coverage=1 00:21:38.517 --rc genhtml_legend=1 00:21:38.517 --rc geninfo_all_blocks=1 00:21:38.517 --rc geninfo_unexecuted_blocks=1 00:21:38.517 00:21:38.517 ' 00:21:38.517 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:38.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:38.517 --rc genhtml_branch_coverage=1 00:21:38.517 --rc genhtml_function_coverage=1 00:21:38.517 --rc genhtml_legend=1 00:21:38.517 --rc geninfo_all_blocks=1 00:21:38.517 --rc geninfo_unexecuted_blocks=1 00:21:38.517 00:21:38.517 ' 00:21:38.517 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:38.517 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:21:38.517 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:38.517 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:38.517 21:02:29 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:38.517 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:38.517 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:38.517 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:38.517 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:38.517 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:38.517 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:38.517 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:38.517 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:38.517 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:38.517 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:38.517 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:38.517 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:38.517 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:38.517 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:38.517 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:21:38.517 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:38.517 
21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:38.517 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:38.517 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.517 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.517 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.517 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:21:38.517 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.517 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:21:38.517 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:38.517 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:38.517 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:38.517 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:38.517 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:21:38.517 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:38.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:38.517 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:38.518 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:38.518 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:38.518 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:21:38.518 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:21:38.518 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:21:38.518 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:21:38.518 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:21:38.518 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:21:38.518 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=e4e3c73379b5465eb751f29377013d66 00:21:38.518 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:21:38.518 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:38.518 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:38.518 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:38.518 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:38.518 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:38.518 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:21:38.518 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:38.518 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:38.518 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:38.518 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:38.518 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:21:38.518 21:02:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:40.419 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:40.419 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:21:40.419 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:40.419 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:40.419 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:40.419 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:40.419 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:40.419 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:21:40.419 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:40.419 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:21:40.419 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:21:40.419 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:21:40.419 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:21:40.419 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:21:40.419 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:21:40.419 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:40.419 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:40.419 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:40.419 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:40.419 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:40.419 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:40.419 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:40.419 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:40.419 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:40.419 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:40.419 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:40.419 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:40.419 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:40.419 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:40.419 21:02:31 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:40.419 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:40.419 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:40.419 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:40.419 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:40.419 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:40.419 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:40.419 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:40.419 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:40.419 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:40.419 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:40.419 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:40.419 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:40.419 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:40.419 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:40.419 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:40.419 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:40.419 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:40.419 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:40.419 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:40.419 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:40.419 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:40.419 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:40.419 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:40.419 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:40.419 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:40.419 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:40.419 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:40.419 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:40.419 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:40.419 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:40.419 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:40.419 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:40.420 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:40.420 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:40.420 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:40.420 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:40.420 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:21:40.420 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:40.420 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:40.420 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:40.420 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:40.420 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:40.420 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:40.420 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:21:40.420 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:40.420 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:40.420 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:40.420 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:40.420 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:40.420 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:40.420 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:40.420 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:40.420 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:40.420 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:40.420 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:40.420 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:40.420 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:40.420 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:40.420 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:40.420 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:40.420 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:40.420 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:40.420 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:40.420 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:40.420 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:40.420 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:40.420 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:40.420 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:40.420 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:40.420 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:40.420 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:40.420 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.294 ms 00:21:40.420 00:21:40.420 --- 10.0.0.2 ping statistics --- 00:21:40.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:40.420 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:21:40.420 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:40.420 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:40.420 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:21:40.420 00:21:40.420 --- 10.0.0.1 ping statistics --- 00:21:40.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:40.420 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:21:40.420 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:40.420 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:21:40.420 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:40.420 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:40.420 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:40.420 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:40.420 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:40.420 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:40.420 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:40.420 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:21:40.420 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:40.420 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:21:40.420 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:40.679 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=4031121 00:21:40.679 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:40.679 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 4031121 00:21:40.679 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 4031121 ']' 00:21:40.679 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:40.679 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:40.679 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:40.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:40.679 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:40.679 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:40.679 [2024-11-26 21:02:31.411585] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
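(Annotation, not part of the captured log.) The "[: : integer expression expected" message emitted earlier by nvmf/common.sh line 33 comes from a bash pitfall: testing an empty variable with `-eq`, as in `'[' '' -eq 1 ']'`. A minimal hypothetical reproduction (not the SPDK source itself), with a defensive variant that defaults the empty value first:

```shell
# Reproduce the error class: -eq on an empty string is not an integer test.
flag=""
if [ "$flag" -eq 1 ] 2>/dev/null; then
    echo "hugepages disabled"
else
    echo "flag empty or non-numeric; guard falls through"
fi

# Defensive variant: default the empty value to 0 before comparing,
# so the test never sees a non-integer operand.
if [ "${flag:-0}" -eq 1 ]; then
    echo "hugepages disabled"
else
    echo "guard falls through quietly"
fi
```

Without the `2>/dev/null` redirect, the first form prints the same "integer expression expected" diagnostic seen in this run while still taking the else branch, which is why the test continues past it.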
00:21:40.679 [2024-11-26 21:02:31.411662] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:40.679 [2024-11-26 21:02:31.481366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:40.679 [2024-11-26 21:02:31.536758] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:40.679 [2024-11-26 21:02:31.536830] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:40.679 [2024-11-26 21:02:31.536851] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:40.679 [2024-11-26 21:02:31.536869] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:40.679 [2024-11-26 21:02:31.536882] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:40.679 [2024-11-26 21:02:31.537511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:40.937 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:40.937 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:21:40.937 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:40.937 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:40.937 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:40.937 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:40.937 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:21:40.937 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.937 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:40.937 [2024-11-26 21:02:31.687940] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:40.937 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.937 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:21:40.937 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.937 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:40.937 null0 00:21:40.937 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.937 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:21:40.937 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.937 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:40.937 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.937 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:21:40.937 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.937 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:40.938 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.938 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g e4e3c73379b5465eb751f29377013d66 00:21:40.938 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.938 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:40.938 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.938 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:40.938 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.938 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:40.938 [2024-11-26 21:02:31.728252] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:40.938 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.938 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:21:40.938 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.938 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:41.196 nvme0n1 00:21:41.196 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.196 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:41.196 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.196 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:41.196 [ 00:21:41.196 { 00:21:41.196 "name": "nvme0n1", 00:21:41.196 "aliases": [ 00:21:41.196 "e4e3c733-79b5-465e-b751-f29377013d66" 00:21:41.196 ], 00:21:41.196 "product_name": "NVMe disk", 00:21:41.196 "block_size": 512, 00:21:41.196 "num_blocks": 2097152, 00:21:41.196 "uuid": "e4e3c733-79b5-465e-b751-f29377013d66", 00:21:41.196 "numa_id": 0, 00:21:41.196 "assigned_rate_limits": { 00:21:41.196 "rw_ios_per_sec": 0, 00:21:41.196 "rw_mbytes_per_sec": 0, 00:21:41.196 "r_mbytes_per_sec": 0, 00:21:41.196 "w_mbytes_per_sec": 0 00:21:41.196 }, 00:21:41.196 "claimed": false, 00:21:41.196 "zoned": false, 00:21:41.196 "supported_io_types": { 00:21:41.196 "read": true, 00:21:41.196 "write": true, 00:21:41.196 "unmap": false, 00:21:41.196 "flush": true, 00:21:41.196 "reset": true, 00:21:41.196 "nvme_admin": true, 00:21:41.196 "nvme_io": true, 00:21:41.196 "nvme_io_md": false, 00:21:41.196 "write_zeroes": true, 00:21:41.196 "zcopy": false, 00:21:41.196 "get_zone_info": false, 00:21:41.196 "zone_management": false, 00:21:41.196 "zone_append": false, 00:21:41.196 "compare": true, 00:21:41.196 "compare_and_write": true, 00:21:41.196 "abort": true, 00:21:41.196 "seek_hole": false, 00:21:41.196 "seek_data": false, 00:21:41.196 "copy": true, 00:21:41.196 
"nvme_iov_md": false 00:21:41.196 }, 00:21:41.196 "memory_domains": [ 00:21:41.196 { 00:21:41.196 "dma_device_id": "system", 00:21:41.196 "dma_device_type": 1 00:21:41.196 } 00:21:41.196 ], 00:21:41.196 "driver_specific": { 00:21:41.196 "nvme": [ 00:21:41.196 { 00:21:41.196 "trid": { 00:21:41.196 "trtype": "TCP", 00:21:41.196 "adrfam": "IPv4", 00:21:41.196 "traddr": "10.0.0.2", 00:21:41.196 "trsvcid": "4420", 00:21:41.196 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:41.196 }, 00:21:41.196 "ctrlr_data": { 00:21:41.196 "cntlid": 1, 00:21:41.196 "vendor_id": "0x8086", 00:21:41.196 "model_number": "SPDK bdev Controller", 00:21:41.196 "serial_number": "00000000000000000000", 00:21:41.196 "firmware_revision": "25.01", 00:21:41.196 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:41.196 "oacs": { 00:21:41.196 "security": 0, 00:21:41.196 "format": 0, 00:21:41.196 "firmware": 0, 00:21:41.196 "ns_manage": 0 00:21:41.196 }, 00:21:41.196 "multi_ctrlr": true, 00:21:41.196 "ana_reporting": false 00:21:41.196 }, 00:21:41.196 "vs": { 00:21:41.196 "nvme_version": "1.3" 00:21:41.196 }, 00:21:41.196 "ns_data": { 00:21:41.196 "id": 1, 00:21:41.196 "can_share": true 00:21:41.196 } 00:21:41.196 } 00:21:41.196 ], 00:21:41.196 "mp_policy": "active_passive" 00:21:41.196 } 00:21:41.196 } 00:21:41.196 ] 00:21:41.196 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.196 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:21:41.196 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.196 21:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:41.196 [2024-11-26 21:02:31.981824] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:41.196 [2024-11-26 21:02:31.981931] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0xa9b710 (9): Bad file descriptor 00:21:41.455 [2024-11-26 21:02:32.154853] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:21:41.455 21:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.455 21:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:41.455 21:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.455 21:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:41.455 [ 00:21:41.455 { 00:21:41.455 "name": "nvme0n1", 00:21:41.455 "aliases": [ 00:21:41.455 "e4e3c733-79b5-465e-b751-f29377013d66" 00:21:41.455 ], 00:21:41.455 "product_name": "NVMe disk", 00:21:41.455 "block_size": 512, 00:21:41.455 "num_blocks": 2097152, 00:21:41.455 "uuid": "e4e3c733-79b5-465e-b751-f29377013d66", 00:21:41.455 "numa_id": 0, 00:21:41.455 "assigned_rate_limits": { 00:21:41.455 "rw_ios_per_sec": 0, 00:21:41.455 "rw_mbytes_per_sec": 0, 00:21:41.455 "r_mbytes_per_sec": 0, 00:21:41.455 "w_mbytes_per_sec": 0 00:21:41.455 }, 00:21:41.455 "claimed": false, 00:21:41.455 "zoned": false, 00:21:41.455 "supported_io_types": { 00:21:41.455 "read": true, 00:21:41.455 "write": true, 00:21:41.455 "unmap": false, 00:21:41.455 "flush": true, 00:21:41.455 "reset": true, 00:21:41.455 "nvme_admin": true, 00:21:41.455 "nvme_io": true, 00:21:41.455 "nvme_io_md": false, 00:21:41.455 "write_zeroes": true, 00:21:41.455 "zcopy": false, 00:21:41.455 "get_zone_info": false, 00:21:41.455 "zone_management": false, 00:21:41.455 "zone_append": false, 00:21:41.455 "compare": true, 00:21:41.455 "compare_and_write": true, 00:21:41.455 "abort": true, 00:21:41.455 "seek_hole": false, 00:21:41.455 "seek_data": false, 00:21:41.455 "copy": true, 00:21:41.455 "nvme_iov_md": false 00:21:41.455 }, 00:21:41.455 "memory_domains": [ 
00:21:41.455 { 00:21:41.455 "dma_device_id": "system", 00:21:41.455 "dma_device_type": 1 00:21:41.455 } 00:21:41.455 ], 00:21:41.455 "driver_specific": { 00:21:41.455 "nvme": [ 00:21:41.455 { 00:21:41.455 "trid": { 00:21:41.455 "trtype": "TCP", 00:21:41.455 "adrfam": "IPv4", 00:21:41.455 "traddr": "10.0.0.2", 00:21:41.455 "trsvcid": "4420", 00:21:41.455 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:41.455 }, 00:21:41.455 "ctrlr_data": { 00:21:41.455 "cntlid": 2, 00:21:41.455 "vendor_id": "0x8086", 00:21:41.455 "model_number": "SPDK bdev Controller", 00:21:41.455 "serial_number": "00000000000000000000", 00:21:41.455 "firmware_revision": "25.01", 00:21:41.455 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:41.455 "oacs": { 00:21:41.455 "security": 0, 00:21:41.455 "format": 0, 00:21:41.455 "firmware": 0, 00:21:41.455 "ns_manage": 0 00:21:41.455 }, 00:21:41.455 "multi_ctrlr": true, 00:21:41.455 "ana_reporting": false 00:21:41.455 }, 00:21:41.455 "vs": { 00:21:41.455 "nvme_version": "1.3" 00:21:41.455 }, 00:21:41.455 "ns_data": { 00:21:41.455 "id": 1, 00:21:41.455 "can_share": true 00:21:41.455 } 00:21:41.455 } 00:21:41.455 ], 00:21:41.455 "mp_policy": "active_passive" 00:21:41.455 } 00:21:41.455 } 00:21:41.455 ] 00:21:41.455 21:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.455 21:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:41.455 21:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.455 21:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:41.455 21:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.455 21:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:21:41.455 21:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.m6QW6yufbT 
00:21:41.455 21:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:41.455 21:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.m6QW6yufbT 00:21:41.455 21:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.m6QW6yufbT 00:21:41.455 21:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.455 21:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:41.456 21:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.456 21:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:21:41.456 21:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.456 21:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:41.456 21:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.456 21:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:21:41.456 21:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.456 21:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:41.456 [2024-11-26 21:02:32.214626] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:41.456 [2024-11-26 21:02:32.214805] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:41.456 21:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:21:41.456 21:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:21:41.456 21:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.456 21:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:41.456 21:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.456 21:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:41.456 21:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.456 21:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:41.456 [2024-11-26 21:02:32.230680] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:41.456 nvme0n1 00:21:41.456 21:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.456 21:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:41.456 21:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.456 21:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:41.456 [ 00:21:41.456 { 00:21:41.456 "name": "nvme0n1", 00:21:41.456 "aliases": [ 00:21:41.456 "e4e3c733-79b5-465e-b751-f29377013d66" 00:21:41.456 ], 00:21:41.456 "product_name": "NVMe disk", 00:21:41.456 "block_size": 512, 00:21:41.456 "num_blocks": 2097152, 00:21:41.456 "uuid": "e4e3c733-79b5-465e-b751-f29377013d66", 00:21:41.456 "numa_id": 0, 00:21:41.456 "assigned_rate_limits": { 00:21:41.456 "rw_ios_per_sec": 0, 00:21:41.456 
"rw_mbytes_per_sec": 0, 00:21:41.456 "r_mbytes_per_sec": 0, 00:21:41.456 "w_mbytes_per_sec": 0 00:21:41.456 }, 00:21:41.456 "claimed": false, 00:21:41.456 "zoned": false, 00:21:41.456 "supported_io_types": { 00:21:41.456 "read": true, 00:21:41.456 "write": true, 00:21:41.456 "unmap": false, 00:21:41.456 "flush": true, 00:21:41.456 "reset": true, 00:21:41.456 "nvme_admin": true, 00:21:41.456 "nvme_io": true, 00:21:41.456 "nvme_io_md": false, 00:21:41.456 "write_zeroes": true, 00:21:41.456 "zcopy": false, 00:21:41.456 "get_zone_info": false, 00:21:41.456 "zone_management": false, 00:21:41.456 "zone_append": false, 00:21:41.456 "compare": true, 00:21:41.456 "compare_and_write": true, 00:21:41.456 "abort": true, 00:21:41.456 "seek_hole": false, 00:21:41.456 "seek_data": false, 00:21:41.456 "copy": true, 00:21:41.456 "nvme_iov_md": false 00:21:41.456 }, 00:21:41.456 "memory_domains": [ 00:21:41.456 { 00:21:41.456 "dma_device_id": "system", 00:21:41.456 "dma_device_type": 1 00:21:41.456 } 00:21:41.456 ], 00:21:41.456 "driver_specific": { 00:21:41.456 "nvme": [ 00:21:41.456 { 00:21:41.456 "trid": { 00:21:41.456 "trtype": "TCP", 00:21:41.456 "adrfam": "IPv4", 00:21:41.456 "traddr": "10.0.0.2", 00:21:41.456 "trsvcid": "4421", 00:21:41.456 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:41.456 }, 00:21:41.456 "ctrlr_data": { 00:21:41.456 "cntlid": 3, 00:21:41.456 "vendor_id": "0x8086", 00:21:41.456 "model_number": "SPDK bdev Controller", 00:21:41.456 "serial_number": "00000000000000000000", 00:21:41.456 "firmware_revision": "25.01", 00:21:41.456 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:41.456 "oacs": { 00:21:41.456 "security": 0, 00:21:41.456 "format": 0, 00:21:41.456 "firmware": 0, 00:21:41.456 "ns_manage": 0 00:21:41.456 }, 00:21:41.456 "multi_ctrlr": true, 00:21:41.456 "ana_reporting": false 00:21:41.456 }, 00:21:41.456 "vs": { 00:21:41.456 "nvme_version": "1.3" 00:21:41.456 }, 00:21:41.456 "ns_data": { 00:21:41.456 "id": 1, 00:21:41.456 "can_share": true 00:21:41.456 } 
00:21:41.456 } 00:21:41.456 ], 00:21:41.456 "mp_policy": "active_passive" 00:21:41.456 } 00:21:41.456 } 00:21:41.456 ] 00:21:41.456 21:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.456 21:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:41.456 21:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.456 21:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:41.456 21:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.456 21:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.m6QW6yufbT 00:21:41.456 21:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:21:41.456 21:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:21:41.456 21:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:41.456 21:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:21:41.456 21:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:41.456 21:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:21:41.456 21:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:41.456 21:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:41.456 rmmod nvme_tcp 00:21:41.456 rmmod nvme_fabrics 00:21:41.456 rmmod nvme_keyring 00:21:41.715 21:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:41.715 21:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:21:41.715 21:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:21:41.715 21:02:32 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 4031121 ']' 00:21:41.715 21:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 4031121 00:21:41.715 21:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 4031121 ']' 00:21:41.715 21:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 4031121 00:21:41.715 21:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:21:41.715 21:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:41.715 21:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4031121 00:21:41.715 21:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:41.715 21:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:41.715 21:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4031121' 00:21:41.715 killing process with pid 4031121 00:21:41.715 21:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 4031121 00:21:41.715 21:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 4031121 00:21:41.999 21:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:41.999 21:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:41.999 21:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:41.999 21:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:21:41.999 21:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:21:41.999 21:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:41.999 
21:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:21:41.999 21:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:41.999 21:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:41.999 21:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:41.999 21:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:41.999 21:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:43.905 21:02:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:43.905 00:21:43.905 real 0m5.549s 00:21:43.905 user 0m2.219s 00:21:43.905 sys 0m1.746s 00:21:43.905 21:02:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:43.905 21:02:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:43.905 ************************************ 00:21:43.905 END TEST nvmf_async_init 00:21:43.905 ************************************ 00:21:43.905 21:02:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:43.905 21:02:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:43.905 21:02:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:43.905 21:02:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:43.905 ************************************ 00:21:43.905 START TEST dma 00:21:43.905 ************************************ 00:21:43.905 21:02:34 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 
00:21:43.905 * Looking for test storage... 00:21:43.905 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:43.905 21:02:34 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:43.905 21:02:34 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:21:43.905 21:02:34 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:44.165 21:02:34 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:44.165 21:02:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:44.165 21:02:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:44.165 21:02:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:44.165 21:02:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:21:44.165 21:02:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:21:44.165 21:02:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:21:44.165 21:02:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:21:44.165 21:02:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:21:44.165 21:02:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:21:44.165 21:02:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:21:44.165 21:02:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:44.165 21:02:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:21:44.165 21:02:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:21:44.165 21:02:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:44.165 21:02:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:44.166 21:02:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:21:44.166 21:02:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:21:44.166 21:02:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:44.166 21:02:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:21:44.166 21:02:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:21:44.166 21:02:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:21:44.166 21:02:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:21:44.166 21:02:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:44.166 21:02:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:21:44.166 21:02:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:21:44.166 21:02:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:44.166 21:02:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:44.166 21:02:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:21:44.166 21:02:34 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:44.166 21:02:34 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:44.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.166 --rc genhtml_branch_coverage=1 00:21:44.166 --rc genhtml_function_coverage=1 00:21:44.166 --rc genhtml_legend=1 00:21:44.166 --rc geninfo_all_blocks=1 00:21:44.166 --rc geninfo_unexecuted_blocks=1 00:21:44.166 00:21:44.166 ' 00:21:44.166 21:02:34 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:44.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.166 --rc genhtml_branch_coverage=1 00:21:44.166 --rc genhtml_function_coverage=1 
00:21:44.166 --rc genhtml_legend=1 00:21:44.166 --rc geninfo_all_blocks=1 00:21:44.166 --rc geninfo_unexecuted_blocks=1 00:21:44.166 00:21:44.166 ' 00:21:44.166 21:02:34 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:44.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.166 --rc genhtml_branch_coverage=1 00:21:44.166 --rc genhtml_function_coverage=1 00:21:44.166 --rc genhtml_legend=1 00:21:44.166 --rc geninfo_all_blocks=1 00:21:44.166 --rc geninfo_unexecuted_blocks=1 00:21:44.166 00:21:44.166 ' 00:21:44.166 21:02:34 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:44.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.166 --rc genhtml_branch_coverage=1 00:21:44.166 --rc genhtml_function_coverage=1 00:21:44.166 --rc genhtml_legend=1 00:21:44.166 --rc geninfo_all_blocks=1 00:21:44.166 --rc geninfo_unexecuted_blocks=1 00:21:44.166 00:21:44.166 ' 00:21:44.166 21:02:34 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:44.166 21:02:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:21:44.166 21:02:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:44.166 21:02:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:44.166 21:02:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:44.166 21:02:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:44.166 21:02:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:44.166 21:02:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:44.166 21:02:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:44.166 21:02:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:44.166 21:02:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:44.166 21:02:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:44.166 21:02:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:44.166 21:02:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:44.166 21:02:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:44.166 21:02:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:44.166 21:02:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:44.166 21:02:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:44.166 21:02:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:44.166 21:02:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:21:44.166 21:02:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:44.166 21:02:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:44.166 21:02:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:44.166 21:02:34 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.166 21:02:34 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.166 21:02:34 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.166 21:02:34 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:21:44.166 
21:02:34 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.166 21:02:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:21:44.166 21:02:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:44.166 21:02:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:44.166 21:02:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:44.166 21:02:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:44.166 21:02:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:44.166 21:02:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:44.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:44.166 21:02:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:44.166 21:02:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:44.166 21:02:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:44.166 21:02:34 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:21:44.166 21:02:34 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:21:44.166 00:21:44.166 real 0m0.153s 00:21:44.166 user 0m0.107s 00:21:44.166 sys 0m0.055s 00:21:44.166 21:02:34 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:44.166 21:02:34 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:21:44.166 ************************************ 00:21:44.166 END TEST dma 00:21:44.166 ************************************ 00:21:44.166 21:02:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:44.166 21:02:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:44.166 21:02:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:44.166 21:02:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:44.166 ************************************ 00:21:44.166 START TEST nvmf_identify 00:21:44.166 ************************************ 00:21:44.166 21:02:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:44.166 * Looking for test storage... 
00:21:44.166 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:44.166 21:02:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:44.166 21:02:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:21:44.166 21:02:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:44.166 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:44.166 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:44.166 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:44.166 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:44.166 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:21:44.166 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:21:44.166 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:21:44.166 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:21:44.166 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:21:44.166 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:21:44.166 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:21:44.166 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:44.167 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:21:44.167 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:21:44.167 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:44.167 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:44.167 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:21:44.167 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:21:44.167 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:44.167 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:21:44.167 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:21:44.167 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:21:44.167 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:21:44.167 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:44.167 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:21:44.167 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:21:44.167 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:44.167 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:44.167 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:21:44.167 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:44.167 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:44.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.167 --rc genhtml_branch_coverage=1 00:21:44.167 --rc genhtml_function_coverage=1 00:21:44.167 --rc genhtml_legend=1 00:21:44.167 --rc geninfo_all_blocks=1 00:21:44.167 --rc geninfo_unexecuted_blocks=1 00:21:44.167 00:21:44.167 ' 00:21:44.167 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:21:44.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.167 --rc genhtml_branch_coverage=1 00:21:44.167 --rc genhtml_function_coverage=1 00:21:44.167 --rc genhtml_legend=1 00:21:44.167 --rc geninfo_all_blocks=1 00:21:44.167 --rc geninfo_unexecuted_blocks=1 00:21:44.167 00:21:44.167 ' 00:21:44.167 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:44.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.167 --rc genhtml_branch_coverage=1 00:21:44.167 --rc genhtml_function_coverage=1 00:21:44.167 --rc genhtml_legend=1 00:21:44.167 --rc geninfo_all_blocks=1 00:21:44.167 --rc geninfo_unexecuted_blocks=1 00:21:44.167 00:21:44.167 ' 00:21:44.167 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:44.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.167 --rc genhtml_branch_coverage=1 00:21:44.167 --rc genhtml_function_coverage=1 00:21:44.167 --rc genhtml_legend=1 00:21:44.167 --rc geninfo_all_blocks=1 00:21:44.167 --rc geninfo_unexecuted_blocks=1 00:21:44.167 00:21:44.167 ' 00:21:44.167 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:44.167 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:21:44.167 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:44.167 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:44.167 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:44.167 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:44.167 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:44.167 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:21:44.167 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:44.167 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:44.167 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:44.167 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:44.167 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:44.167 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:44.167 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:44.167 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:44.167 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:44.167 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:44.167 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:44.167 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:21:44.167 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:44.167 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:44.167 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:44.167 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.167 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.167 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.167 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:21:44.167 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.167 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:21:44.167 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:44.167 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:44.167 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:44.167 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:44.167 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:44.167 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:44.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:44.167 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:44.167 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:44.167 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:44.167 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:44.167 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:44.167 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:21:44.167 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:44.167 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:44.167 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:44.167 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:44.167 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:44.167 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:44.167 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:44.167 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:44.167 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:44.167 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:44.167 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:21:44.167 21:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:46.699 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:46.699 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:21:46.699 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:46.699 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:46.699 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:46.699 21:02:37 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:46.699 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:46.699 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:21:46.699 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:46.699 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:21:46.699 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:21:46.699 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:21:46.699 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:21:46.699 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:21:46.699 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:21:46.699 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:46.699 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:46.699 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:46.699 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:46.699 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:46.699 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:46.699 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:46.699 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:46.699 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:46.699 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:46.699 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:46.699 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:46.699 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:46.699 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:46.699 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:46.699 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:46.699 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:46.700 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:46.700 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:46.700 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:46.700 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:46.700 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:46.700 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:46.700 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:46.700 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:46.700 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:46.700 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:46.700 
21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:46.700 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:46.700 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:46.700 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:46.700 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:46.700 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:46.700 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:46.700 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:46.700 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:46.700 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:46.700 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:46.700 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:46.700 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:46.700 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:46.700 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:46.700 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:46.700 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:46.700 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:46.700 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:46.700 21:02:37 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:46.701 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:46.701 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:46.701 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:46.701 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:46.701 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:46.701 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:46.701 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:46.701 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:46.701 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:46.701 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:46.701 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:46.701 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:21:46.701 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:46.701 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:46.701 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:46.701 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:46.701 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:46.701 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:21:46.701 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:46.701 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:46.701 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:46.701 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:46.701 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:46.701 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:46.701 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:46.701 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:46.701 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:46.701 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:46.701 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:46.701 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:46.701 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:46.701 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:46.701 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:46.701 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:46.701 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:21:46.701 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:46.702 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:46.702 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:46.702 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:46.702 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:21:46.702 00:21:46.702 --- 10.0.0.2 ping statistics --- 00:21:46.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:46.702 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:21:46.702 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:46.702 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:46.702 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:21:46.702 00:21:46.702 --- 10.0.0.1 ping statistics --- 00:21:46.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:46.702 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:21:46.702 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:46.702 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:21:46.702 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:46.702 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:46.702 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:46.702 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:46.702 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:46.702 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:46.702 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:46.702 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:21:46.702 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:46.702 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:46.702 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=4033262 00:21:46.702 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:46.702 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:46.702 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 4033262 00:21:46.702 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 4033262 ']' 00:21:46.702 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:46.702 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:46.702 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:46.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:46.703 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:46.703 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:46.703 [2024-11-26 21:02:37.250125] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:21:46.703 [2024-11-26 21:02:37.250208] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:46.703 [2024-11-26 21:02:37.329271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:46.703 [2024-11-26 21:02:37.393275] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:46.703 [2024-11-26 21:02:37.393344] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:46.703 [2024-11-26 21:02:37.393360] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:46.703 [2024-11-26 21:02:37.393374] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:46.703 [2024-11-26 21:02:37.393386] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:46.703 [2024-11-26 21:02:37.395063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:46.703 [2024-11-26 21:02:37.395122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:46.703 [2024-11-26 21:02:37.395240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:46.703 [2024-11-26 21:02:37.395243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:46.703 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:46.703 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:21:46.703 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:46.703 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.703 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:46.703 [2024-11-26 21:02:37.522275] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:46.703 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.703 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:21:46.703 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:46.703 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:46.703 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:46.703 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.703 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:46.704 Malloc0 00:21:46.704 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.704 21:02:37 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:46.704 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.704 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:46.704 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.704 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:21:46.704 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.704 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:46.704 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.704 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:46.704 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.704 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:46.704 [2024-11-26 21:02:37.612824] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:46.704 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.704 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:46.704 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.704 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:46.704 21:02:37 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.704 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:21:46.704 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.704 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:46.704 [ 00:21:46.704 { 00:21:46.705 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:46.705 "subtype": "Discovery", 00:21:46.705 "listen_addresses": [ 00:21:46.705 { 00:21:46.705 "trtype": "TCP", 00:21:46.705 "adrfam": "IPv4", 00:21:46.705 "traddr": "10.0.0.2", 00:21:46.705 "trsvcid": "4420" 00:21:46.705 } 00:21:46.705 ], 00:21:46.705 "allow_any_host": true, 00:21:46.705 "hosts": [] 00:21:46.705 }, 00:21:46.705 { 00:21:46.705 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:46.705 "subtype": "NVMe", 00:21:46.705 "listen_addresses": [ 00:21:46.705 { 00:21:46.705 "trtype": "TCP", 00:21:46.705 "adrfam": "IPv4", 00:21:46.705 "traddr": "10.0.0.2", 00:21:46.705 "trsvcid": "4420" 00:21:46.705 } 00:21:46.705 ], 00:21:46.705 "allow_any_host": true, 00:21:46.705 "hosts": [], 00:21:46.969 "serial_number": "SPDK00000000000001", 00:21:46.969 "model_number": "SPDK bdev Controller", 00:21:46.969 "max_namespaces": 32, 00:21:46.969 "min_cntlid": 1, 00:21:46.969 "max_cntlid": 65519, 00:21:46.969 "namespaces": [ 00:21:46.969 { 00:21:46.969 "nsid": 1, 00:21:46.969 "bdev_name": "Malloc0", 00:21:46.969 "name": "Malloc0", 00:21:46.969 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:21:46.969 "eui64": "ABCDEF0123456789", 00:21:46.969 "uuid": "62cd5619-0adf-4075-a29e-b1c929da1771" 00:21:46.969 } 00:21:46.969 ] 00:21:46.969 } 00:21:46.969 ] 00:21:46.969 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.969 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:21:46.969 [2024-11-26 21:02:37.655855] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:21:46.969 [2024-11-26 21:02:37.655903] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4033294 ] 00:21:46.969 [2024-11-26 21:02:37.705508] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:21:46.969 [2024-11-26 21:02:37.705581] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:46.969 [2024-11-26 21:02:37.705591] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:46.969 [2024-11-26 21:02:37.705619] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:46.969 [2024-11-26 21:02:37.705635] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:46.969 [2024-11-26 21:02:37.710322] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:21:46.969 [2024-11-26 21:02:37.710395] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xe47690 0 00:21:46.969 [2024-11-26 21:02:37.720696] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:46.969 [2024-11-26 21:02:37.720721] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:46.970 [2024-11-26 21:02:37.720745] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:46.970 [2024-11-26 21:02:37.720752] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:46.970 [2024-11-26 21:02:37.720806] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:46.970 [2024-11-26 21:02:37.720821] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:46.970 [2024-11-26 21:02:37.720829] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe47690) 00:21:46.970 [2024-11-26 21:02:37.720851] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:46.970 [2024-11-26 21:02:37.720881] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xea9100, cid 0, qid 0 00:21:46.970 [2024-11-26 21:02:37.728699] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:46.970 [2024-11-26 21:02:37.728719] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:46.970 [2024-11-26 21:02:37.728726] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:46.970 [2024-11-26 21:02:37.728734] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xea9100) on tqpair=0xe47690 00:21:46.970 [2024-11-26 21:02:37.728757] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:46.970 [2024-11-26 21:02:37.728772] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:21:46.970 [2024-11-26 21:02:37.728782] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:21:46.970 [2024-11-26 21:02:37.728809] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:46.970 [2024-11-26 21:02:37.728818] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:46.970 [2024-11-26 21:02:37.728824] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe47690) 
00:21:46.970 [2024-11-26 21:02:37.728835] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.970 [2024-11-26 21:02:37.728859] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xea9100, cid 0, qid 0 00:21:46.970 [2024-11-26 21:02:37.728987] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:46.970 [2024-11-26 21:02:37.729000] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:46.970 [2024-11-26 21:02:37.729007] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:46.970 [2024-11-26 21:02:37.729028] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xea9100) on tqpair=0xe47690 00:21:46.970 [2024-11-26 21:02:37.729045] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:21:46.970 [2024-11-26 21:02:37.729065] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:21:46.970 [2024-11-26 21:02:37.729078] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:46.970 [2024-11-26 21:02:37.729086] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:46.970 [2024-11-26 21:02:37.729092] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe47690) 00:21:46.970 [2024-11-26 21:02:37.729102] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.970 [2024-11-26 21:02:37.729123] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xea9100, cid 0, qid 0 00:21:46.970 [2024-11-26 21:02:37.729226] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:46.970 [2024-11-26 21:02:37.729238] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:21:46.970 [2024-11-26 21:02:37.729245] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:46.970 [2024-11-26 21:02:37.729251] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xea9100) on tqpair=0xe47690 00:21:46.970 [2024-11-26 21:02:37.729260] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:21:46.970 [2024-11-26 21:02:37.729275] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:21:46.970 [2024-11-26 21:02:37.729286] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:46.970 [2024-11-26 21:02:37.729293] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:46.970 [2024-11-26 21:02:37.729300] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe47690) 00:21:46.970 [2024-11-26 21:02:37.729310] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.970 [2024-11-26 21:02:37.729330] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xea9100, cid 0, qid 0 00:21:46.970 [2024-11-26 21:02:37.729419] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:46.970 [2024-11-26 21:02:37.729431] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:46.970 [2024-11-26 21:02:37.729438] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:46.970 [2024-11-26 21:02:37.729444] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xea9100) on tqpair=0xe47690 00:21:46.970 [2024-11-26 21:02:37.729453] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:46.970 [2024-11-26 21:02:37.729470] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:46.970 [2024-11-26 21:02:37.729479] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:46.970 [2024-11-26 21:02:37.729485] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe47690) 00:21:46.970 [2024-11-26 21:02:37.729495] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.970 [2024-11-26 21:02:37.729516] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xea9100, cid 0, qid 0 00:21:46.970 [2024-11-26 21:02:37.729606] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:46.970 [2024-11-26 21:02:37.729618] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:46.970 [2024-11-26 21:02:37.729624] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:46.970 [2024-11-26 21:02:37.729631] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xea9100) on tqpair=0xe47690 00:21:46.970 [2024-11-26 21:02:37.729640] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:21:46.970 [2024-11-26 21:02:37.729649] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:21:46.970 [2024-11-26 21:02:37.729661] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:46.970 [2024-11-26 21:02:37.729800] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:21:46.970 [2024-11-26 21:02:37.729811] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:21:46.970 [2024-11-26 21:02:37.729829] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:46.970 [2024-11-26 21:02:37.729836] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:46.970 [2024-11-26 21:02:37.729857] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe47690) 00:21:46.970 [2024-11-26 21:02:37.729868] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.970 [2024-11-26 21:02:37.729889] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xea9100, cid 0, qid 0 00:21:46.970 [2024-11-26 21:02:37.730033] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:46.970 [2024-11-26 21:02:37.730048] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:46.970 [2024-11-26 21:02:37.730055] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:46.970 [2024-11-26 21:02:37.730061] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xea9100) on tqpair=0xe47690 00:21:46.970 [2024-11-26 21:02:37.730072] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:46.970 [2024-11-26 21:02:37.730088] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:46.970 [2024-11-26 21:02:37.730097] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:46.970 [2024-11-26 21:02:37.730103] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe47690) 00:21:46.970 [2024-11-26 21:02:37.730113] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.970 [2024-11-26 21:02:37.730134] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xea9100, cid 0, qid 0 00:21:46.970 [2024-11-26 
21:02:37.730231] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:46.970 [2024-11-26 21:02:37.730247] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:46.970 [2024-11-26 21:02:37.730253] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:46.970 [2024-11-26 21:02:37.730260] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xea9100) on tqpair=0xe47690 00:21:46.970 [2024-11-26 21:02:37.730267] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:46.970 [2024-11-26 21:02:37.730275] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:21:46.970 [2024-11-26 21:02:37.730288] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:21:46.970 [2024-11-26 21:02:37.730305] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:21:46.970 [2024-11-26 21:02:37.730324] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:46.970 [2024-11-26 21:02:37.730331] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe47690) 00:21:46.970 [2024-11-26 21:02:37.730342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.970 [2024-11-26 21:02:37.730362] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xea9100, cid 0, qid 0 00:21:46.970 [2024-11-26 21:02:37.730506] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:46.970 [2024-11-26 21:02:37.730518] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =7 00:21:46.970 [2024-11-26 21:02:37.730529] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:46.970 [2024-11-26 21:02:37.730536] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe47690): datao=0, datal=4096, cccid=0 00:21:46.970 [2024-11-26 21:02:37.730544] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xea9100) on tqpair(0xe47690): expected_datao=0, payload_size=4096 00:21:46.970 [2024-11-26 21:02:37.730552] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:46.970 [2024-11-26 21:02:37.730569] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:46.970 [2024-11-26 21:02:37.730580] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:46.970 [2024-11-26 21:02:37.774715] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:46.970 [2024-11-26 21:02:37.774734] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:46.970 [2024-11-26 21:02:37.774741] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:46.970 [2024-11-26 21:02:37.774748] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xea9100) on tqpair=0xe47690 00:21:46.971 [2024-11-26 21:02:37.774762] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:21:46.971 [2024-11-26 21:02:37.774771] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:21:46.971 [2024-11-26 21:02:37.774778] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:21:46.971 [2024-11-26 21:02:37.774789] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:21:46.971 [2024-11-26 21:02:37.774796] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
fuses compare and write: 1 00:21:46.971 [2024-11-26 21:02:37.774804] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:21:46.971 [2024-11-26 21:02:37.774820] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:21:46.971 [2024-11-26 21:02:37.774833] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:46.971 [2024-11-26 21:02:37.774841] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:46.971 [2024-11-26 21:02:37.774847] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe47690) 00:21:46.971 [2024-11-26 21:02:37.774859] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:46.971 [2024-11-26 21:02:37.774882] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xea9100, cid 0, qid 0 00:21:46.971 [2024-11-26 21:02:37.775020] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:46.971 [2024-11-26 21:02:37.775035] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:46.971 [2024-11-26 21:02:37.775042] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:46.971 [2024-11-26 21:02:37.775048] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xea9100) on tqpair=0xe47690 00:21:46.971 [2024-11-26 21:02:37.775062] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:46.971 [2024-11-26 21:02:37.775070] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:46.971 [2024-11-26 21:02:37.775076] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe47690) 00:21:46.971 [2024-11-26 21:02:37.775085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.971 [2024-11-26 21:02:37.775095] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:46.971 [2024-11-26 21:02:37.775102] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:46.971 [2024-11-26 21:02:37.775108] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xe47690) 00:21:46.971 [2024-11-26 21:02:37.775116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.971 [2024-11-26 21:02:37.775130] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:46.971 [2024-11-26 21:02:37.775138] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:46.971 [2024-11-26 21:02:37.775144] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xe47690) 00:21:46.971 [2024-11-26 21:02:37.775152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.971 [2024-11-26 21:02:37.775162] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:46.971 [2024-11-26 21:02:37.775168] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:46.971 [2024-11-26 21:02:37.775174] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe47690) 00:21:46.971 [2024-11-26 21:02:37.775182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.971 [2024-11-26 21:02:37.775191] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:21:46.971 [2024-11-26 21:02:37.775211] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep 
alive timeout (timeout 30000 ms) 00:21:46.971 [2024-11-26 21:02:37.775224] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:46.971 [2024-11-26 21:02:37.775232] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe47690) 00:21:46.971 [2024-11-26 21:02:37.775256] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.971 [2024-11-26 21:02:37.775279] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xea9100, cid 0, qid 0 00:21:46.971 [2024-11-26 21:02:37.775290] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xea9280, cid 1, qid 0 00:21:46.971 [2024-11-26 21:02:37.775297] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xea9400, cid 2, qid 0 00:21:46.971 [2024-11-26 21:02:37.775320] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xea9580, cid 3, qid 0 00:21:46.971 [2024-11-26 21:02:37.775328] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xea9700, cid 4, qid 0 00:21:46.971 [2024-11-26 21:02:37.775448] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:46.971 [2024-11-26 21:02:37.775461] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:46.971 [2024-11-26 21:02:37.775467] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:46.971 [2024-11-26 21:02:37.775474] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xea9700) on tqpair=0xe47690 00:21:46.971 [2024-11-26 21:02:37.775483] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:21:46.971 [2024-11-26 21:02:37.775492] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:21:46.971 [2024-11-26 21:02:37.775509] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:46.971 [2024-11-26 21:02:37.775518] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe47690) 00:21:46.971 [2024-11-26 21:02:37.775528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.971 [2024-11-26 21:02:37.775549] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xea9700, cid 4, qid 0 00:21:46.971 [2024-11-26 21:02:37.775646] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:46.971 [2024-11-26 21:02:37.775658] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:46.971 [2024-11-26 21:02:37.775679] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:46.971 [2024-11-26 21:02:37.775693] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe47690): datao=0, datal=4096, cccid=4 00:21:46.971 [2024-11-26 21:02:37.775702] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xea9700) on tqpair(0xe47690): expected_datao=0, payload_size=4096 00:21:46.971 [2024-11-26 21:02:37.775713] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:46.971 [2024-11-26 21:02:37.775731] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:46.971 [2024-11-26 21:02:37.775741] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:46.971 [2024-11-26 21:02:37.775762] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:46.971 [2024-11-26 21:02:37.775773] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:46.971 [2024-11-26 21:02:37.775779] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:46.971 [2024-11-26 21:02:37.775786] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xea9700) on tqpair=0xe47690 00:21:46.971 [2024-11-26 21:02:37.775808] 
nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:21:46.971 [2024-11-26 21:02:37.775852] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:46.971 [2024-11-26 21:02:37.775864] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe47690) 00:21:46.971 [2024-11-26 21:02:37.775875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.971 [2024-11-26 21:02:37.775886] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:46.971 [2024-11-26 21:02:37.775894] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:46.971 [2024-11-26 21:02:37.775900] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xe47690) 00:21:46.971 [2024-11-26 21:02:37.775908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.971 [2024-11-26 21:02:37.775937] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xea9700, cid 4, qid 0 00:21:46.971 [2024-11-26 21:02:37.775949] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xea9880, cid 5, qid 0 00:21:46.971 [2024-11-26 21:02:37.776110] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:46.971 [2024-11-26 21:02:37.776122] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:46.971 [2024-11-26 21:02:37.776129] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:46.971 [2024-11-26 21:02:37.776135] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe47690): datao=0, datal=1024, cccid=4 00:21:46.971 [2024-11-26 21:02:37.776142] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xea9700) on tqpair(0xe47690): expected_datao=0, 
payload_size=1024 00:21:46.971 [2024-11-26 21:02:37.776149] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:46.971 [2024-11-26 21:02:37.776158] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:46.971 [2024-11-26 21:02:37.776165] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:46.971 [2024-11-26 21:02:37.776173] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:46.971 [2024-11-26 21:02:37.776182] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:46.971 [2024-11-26 21:02:37.776188] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:46.971 [2024-11-26 21:02:37.776194] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xea9880) on tqpair=0xe47690 00:21:46.971 [2024-11-26 21:02:37.816791] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:46.971 [2024-11-26 21:02:37.816809] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:46.971 [2024-11-26 21:02:37.816817] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:46.971 [2024-11-26 21:02:37.816823] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xea9700) on tqpair=0xe47690 00:21:46.971 [2024-11-26 21:02:37.816841] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:46.971 [2024-11-26 21:02:37.816851] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe47690) 00:21:46.971 [2024-11-26 21:02:37.816861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.972 [2024-11-26 21:02:37.816895] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xea9700, cid 4, qid 0 00:21:46.972 [2024-11-26 21:02:37.817008] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:46.972 [2024-11-26 21:02:37.817021] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:46.972 [2024-11-26 21:02:37.817027] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:46.972 [2024-11-26 21:02:37.817033] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe47690): datao=0, datal=3072, cccid=4 00:21:46.972 [2024-11-26 21:02:37.817041] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xea9700) on tqpair(0xe47690): expected_datao=0, payload_size=3072 00:21:46.972 [2024-11-26 21:02:37.817048] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:46.972 [2024-11-26 21:02:37.817067] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:46.972 [2024-11-26 21:02:37.817076] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:46.972 [2024-11-26 21:02:37.857797] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:46.972 [2024-11-26 21:02:37.857815] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:46.972 [2024-11-26 21:02:37.857823] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:46.972 [2024-11-26 21:02:37.857830] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xea9700) on tqpair=0xe47690 00:21:46.972 [2024-11-26 21:02:37.857848] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:46.972 [2024-11-26 21:02:37.857857] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe47690) 00:21:46.972 [2024-11-26 21:02:37.857868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.972 [2024-11-26 21:02:37.857897] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xea9700, cid 4, qid 0 00:21:46.972 [2024-11-26 21:02:37.858022] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:46.972 [2024-11-26 
21:02:37.858035] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:46.972 [2024-11-26 21:02:37.858042] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:46.972 [2024-11-26 21:02:37.858048] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe47690): datao=0, datal=8, cccid=4 00:21:46.972 [2024-11-26 21:02:37.858055] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xea9700) on tqpair(0xe47690): expected_datao=0, payload_size=8 00:21:46.972 [2024-11-26 21:02:37.858062] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:46.972 [2024-11-26 21:02:37.858071] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:46.972 [2024-11-26 21:02:37.858078] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:46.972 [2024-11-26 21:02:37.902702] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:46.972 [2024-11-26 21:02:37.902720] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:46.972 [2024-11-26 21:02:37.902727] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:46.972 [2024-11-26 21:02:37.902734] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xea9700) on tqpair=0xe47690 00:21:46.972 ===================================================== 00:21:46.972 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:21:46.972 ===================================================== 00:21:46.972 Controller Capabilities/Features 00:21:46.972 ================================ 00:21:46.972 Vendor ID: 0000 00:21:46.972 Subsystem Vendor ID: 0000 00:21:46.972 Serial Number: .................... 00:21:46.972 Model Number: ........................................ 
00:21:46.972 Firmware Version: 25.01 00:21:46.972 Recommended Arb Burst: 0 00:21:46.972 IEEE OUI Identifier: 00 00 00 00:21:46.972 Multi-path I/O 00:21:46.972 May have multiple subsystem ports: No 00:21:46.972 May have multiple controllers: No 00:21:46.972 Associated with SR-IOV VF: No 00:21:46.972 Max Data Transfer Size: 131072 00:21:46.972 Max Number of Namespaces: 0 00:21:46.972 Max Number of I/O Queues: 1024 00:21:46.972 NVMe Specification Version (VS): 1.3 00:21:46.972 NVMe Specification Version (Identify): 1.3 00:21:46.972 Maximum Queue Entries: 128 00:21:46.972 Contiguous Queues Required: Yes 00:21:46.972 Arbitration Mechanisms Supported 00:21:46.972 Weighted Round Robin: Not Supported 00:21:46.972 Vendor Specific: Not Supported 00:21:46.972 Reset Timeout: 15000 ms 00:21:46.972 Doorbell Stride: 4 bytes 00:21:46.972 NVM Subsystem Reset: Not Supported 00:21:46.972 Command Sets Supported 00:21:46.972 NVM Command Set: Supported 00:21:46.972 Boot Partition: Not Supported 00:21:46.972 Memory Page Size Minimum: 4096 bytes 00:21:46.972 Memory Page Size Maximum: 4096 bytes 00:21:46.972 Persistent Memory Region: Not Supported 00:21:46.972 Optional Asynchronous Events Supported 00:21:46.972 Namespace Attribute Notices: Not Supported 00:21:46.972 Firmware Activation Notices: Not Supported 00:21:46.972 ANA Change Notices: Not Supported 00:21:46.972 PLE Aggregate Log Change Notices: Not Supported 00:21:46.972 LBA Status Info Alert Notices: Not Supported 00:21:46.972 EGE Aggregate Log Change Notices: Not Supported 00:21:46.972 Normal NVM Subsystem Shutdown event: Not Supported 00:21:46.972 Zone Descriptor Change Notices: Not Supported 00:21:46.972 Discovery Log Change Notices: Supported 00:21:46.972 Controller Attributes 00:21:46.972 128-bit Host Identifier: Not Supported 00:21:46.972 Non-Operational Permissive Mode: Not Supported 00:21:46.972 NVM Sets: Not Supported 00:21:46.972 Read Recovery Levels: Not Supported 00:21:46.972 Endurance Groups: Not Supported 00:21:46.972 
Predictable Latency Mode: Not Supported 00:21:46.972 Traffic Based Keep ALive: Not Supported 00:21:46.972 Namespace Granularity: Not Supported 00:21:46.972 SQ Associations: Not Supported 00:21:46.972 UUID List: Not Supported 00:21:46.972 Multi-Domain Subsystem: Not Supported 00:21:46.972 Fixed Capacity Management: Not Supported 00:21:46.972 Variable Capacity Management: Not Supported 00:21:46.972 Delete Endurance Group: Not Supported 00:21:46.972 Delete NVM Set: Not Supported 00:21:46.972 Extended LBA Formats Supported: Not Supported 00:21:46.972 Flexible Data Placement Supported: Not Supported 00:21:46.972 00:21:46.972 Controller Memory Buffer Support 00:21:46.972 ================================ 00:21:46.972 Supported: No 00:21:46.972 00:21:46.972 Persistent Memory Region Support 00:21:46.972 ================================ 00:21:46.972 Supported: No 00:21:46.972 00:21:46.972 Admin Command Set Attributes 00:21:46.972 ============================ 00:21:46.972 Security Send/Receive: Not Supported 00:21:46.972 Format NVM: Not Supported 00:21:46.972 Firmware Activate/Download: Not Supported 00:21:46.972 Namespace Management: Not Supported 00:21:46.972 Device Self-Test: Not Supported 00:21:46.972 Directives: Not Supported 00:21:46.972 NVMe-MI: Not Supported 00:21:46.972 Virtualization Management: Not Supported 00:21:46.972 Doorbell Buffer Config: Not Supported 00:21:46.972 Get LBA Status Capability: Not Supported 00:21:46.972 Command & Feature Lockdown Capability: Not Supported 00:21:46.972 Abort Command Limit: 1 00:21:46.972 Async Event Request Limit: 4 00:21:46.972 Number of Firmware Slots: N/A 00:21:46.972 Firmware Slot 1 Read-Only: N/A 00:21:46.972 Firmware Activation Without Reset: N/A 00:21:46.972 Multiple Update Detection Support: N/A 00:21:46.972 Firmware Update Granularity: No Information Provided 00:21:46.972 Per-Namespace SMART Log: No 00:21:46.972 Asymmetric Namespace Access Log Page: Not Supported 00:21:46.972 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:21:46.972 Command Effects Log Page: Not Supported 00:21:46.972 Get Log Page Extended Data: Supported 00:21:46.972 Telemetry Log Pages: Not Supported 00:21:46.972 Persistent Event Log Pages: Not Supported 00:21:46.972 Supported Log Pages Log Page: May Support 00:21:46.972 Commands Supported & Effects Log Page: Not Supported 00:21:46.972 Feature Identifiers & Effects Log Page:May Support 00:21:46.972 NVMe-MI Commands & Effects Log Page: May Support 00:21:46.972 Data Area 4 for Telemetry Log: Not Supported 00:21:46.972 Error Log Page Entries Supported: 128 00:21:46.972 Keep Alive: Not Supported 00:21:46.972 00:21:46.972 NVM Command Set Attributes 00:21:46.972 ========================== 00:21:46.972 Submission Queue Entry Size 00:21:46.972 Max: 1 00:21:46.972 Min: 1 00:21:46.972 Completion Queue Entry Size 00:21:46.972 Max: 1 00:21:46.972 Min: 1 00:21:46.972 Number of Namespaces: 0 00:21:46.972 Compare Command: Not Supported 00:21:46.972 Write Uncorrectable Command: Not Supported 00:21:46.972 Dataset Management Command: Not Supported 00:21:46.972 Write Zeroes Command: Not Supported 00:21:46.972 Set Features Save Field: Not Supported 00:21:46.972 Reservations: Not Supported 00:21:46.972 Timestamp: Not Supported 00:21:46.972 Copy: Not Supported 00:21:46.972 Volatile Write Cache: Not Present 00:21:46.972 Atomic Write Unit (Normal): 1 00:21:46.972 Atomic Write Unit (PFail): 1 00:21:46.972 Atomic Compare & Write Unit: 1 00:21:46.972 Fused Compare & Write: Supported 00:21:46.972 Scatter-Gather List 00:21:46.972 SGL Command Set: Supported 00:21:46.972 SGL Keyed: Supported 00:21:46.972 SGL Bit Bucket Descriptor: Not Supported 00:21:46.972 SGL Metadata Pointer: Not Supported 00:21:46.972 Oversized SGL: Not Supported 00:21:46.972 SGL Metadata Address: Not Supported 00:21:46.972 SGL Offset: Supported 00:21:46.973 Transport SGL Data Block: Not Supported 00:21:46.973 Replay Protected Memory Block: Not Supported 00:21:46.973 00:21:46.973 
Firmware Slot Information
00:21:46.973 =========================
00:21:46.973 Active slot: 0
00:21:46.973 
00:21:46.973 
00:21:46.973 Error Log
00:21:46.973 =========
00:21:46.973 
00:21:46.973 Active Namespaces
00:21:46.973 =================
00:21:46.973 Discovery Log Page
00:21:46.973 ==================
00:21:46.973 Generation Counter: 2
00:21:46.973 Number of Records: 2
00:21:46.973 Record Format: 0
00:21:46.973 
00:21:46.973 Discovery Log Entry 0
00:21:46.973 ----------------------
00:21:46.973 Transport Type: 3 (TCP)
00:21:46.973 Address Family: 1 (IPv4)
00:21:46.973 Subsystem Type: 3 (Current Discovery Subsystem)
00:21:46.973 Entry Flags:
00:21:46.973 Duplicate Returned Information: 1
00:21:46.973 Explicit Persistent Connection Support for Discovery: 1
00:21:46.973 Transport Requirements:
00:21:46.973 Secure Channel: Not Required
00:21:46.973 Port ID: 0 (0x0000)
00:21:46.973 Controller ID: 65535 (0xffff)
00:21:46.973 Admin Max SQ Size: 128
00:21:46.973 Transport Service Identifier: 4420
00:21:46.973 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:21:46.973 Transport Address: 10.0.0.2
00:21:46.973 Discovery Log Entry 1
00:21:46.973 ----------------------
00:21:46.973 Transport Type: 3 (TCP)
00:21:46.973 Address Family: 1 (IPv4)
00:21:46.973 Subsystem Type: 2 (NVM Subsystem)
00:21:46.973 Entry Flags:
00:21:46.973 Duplicate Returned Information: 0
00:21:46.973 Explicit Persistent Connection Support for Discovery: 0
00:21:46.973 Transport Requirements:
00:21:46.973 Secure Channel: Not Required
00:21:46.973 Port ID: 0 (0x0000)
00:21:46.973 Controller ID: 65535 (0xffff)
00:21:46.973 Admin Max SQ Size: 128
00:21:46.973 Transport Service Identifier: 4420
00:21:46.973 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:21:46.973 Transport Address: 10.0.0.2 [2024-11-26 21:02:37.902865] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD
00:21:46.973 [2024-11-26 21:02:37.902890] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xea9100) on tqpair=0xe47690
00:21:46.973 [2024-11-26 21:02:37.902905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.973 [2024-11-26 21:02:37.902914] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xea9280) on tqpair=0xe47690
00:21:46.973 [2024-11-26 21:02:37.902922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.973 [2024-11-26 21:02:37.902930] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xea9400) on tqpair=0xe47690
00:21:46.973 [2024-11-26 21:02:37.902941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.973 [2024-11-26 21:02:37.902950] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xea9580) on tqpair=0xe47690
00:21:46.973 [2024-11-26 21:02:37.902957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.973 [2024-11-26 21:02:37.902971] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:46.973 [2024-11-26 21:02:37.902979] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:46.973 [2024-11-26 21:02:37.902986] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe47690)
00:21:46.973 [2024-11-26 21:02:37.902997] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:46.973 [2024-11-26 21:02:37.903022] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xea9580, cid 3, qid 0
00:21:46.973 [2024-11-26 21:02:37.903128] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:46.973 [2024-11-26 21:02:37.903140] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:46.973 [2024-11-26 21:02:37.903147] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:46.973 [2024-11-26 21:02:37.903154] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xea9580) on tqpair=0xe47690
00:21:46.973 [2024-11-26 21:02:37.903166] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:46.973 [2024-11-26 21:02:37.903174] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:46.973 [2024-11-26 21:02:37.903181] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe47690)
00:21:46.973 [2024-11-26 21:02:37.903191] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:46.973 [2024-11-26 21:02:37.903217] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xea9580, cid 3, qid 0
00:21:46.973 [2024-11-26 21:02:37.903349] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:46.973 [2024-11-26 21:02:37.903364] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:46.973 [2024-11-26 21:02:37.903371] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:46.973 [2024-11-26 21:02:37.903378] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xea9580) on tqpair=0xe47690
00:21:46.973 [2024-11-26 21:02:37.903388] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us
00:21:46.973 [2024-11-26 21:02:37.903397] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms
00:21:46.973 [2024-11-26 21:02:37.903413] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:46.973 [2024-11-26 21:02:37.903422] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:46.973 [2024-11-26 21:02:37.903428] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe47690)
00:21:46.973 [2024-11-26 21:02:37.903439] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:46.973 [2024-11-26 21:02:37.903459] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xea9580, cid 3, qid 0
00:21:46.973 [2024-11-26 21:02:37.903563] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:46.973 [2024-11-26 21:02:37.903575] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:46.973 [2024-11-26 21:02:37.903582] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:46.973 [2024-11-26 21:02:37.903588] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xea9580) on tqpair=0xe47690
00:21:46.973 [2024-11-26 21:02:37.903606] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:46.973 [2024-11-26 21:02:37.903616] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:46.973 [2024-11-26 21:02:37.903622] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe47690)
00:21:46.973 [2024-11-26 21:02:37.903632] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:46.973 [2024-11-26 21:02:37.903673] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xea9580, cid 3, qid 0
00:21:46.973 [2024-11-26 21:02:37.903796] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:46.973 [2024-11-26 21:02:37.903813] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:46.973 [2024-11-26 21:02:37.903820] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:46.973 [2024-11-26 21:02:37.903827] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xea9580) on tqpair=0xe47690
00:21:46.973 [2024-11-26 21:02:37.903843] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:46.973 [2024-11-26 21:02:37.903853] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:46.973 [2024-11-26 21:02:37.903859] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe47690)
00:21:46.973 [2024-11-26 21:02:37.903869] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:46.973 [2024-11-26 21:02:37.903890] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xea9580, cid 3, qid 0
00:21:46.973 [2024-11-26 21:02:37.903986] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:46.973 [2024-11-26 21:02:37.903999] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:46.973 [2024-11-26 21:02:37.904006] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:46.973 [2024-11-26 21:02:37.904012] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xea9580) on tqpair=0xe47690
00:21:46.973 [2024-11-26 21:02:37.904028] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:46.973 [2024-11-26 21:02:37.904052] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:46.973 [2024-11-26 21:02:37.904058] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe47690)
00:21:46.973 [2024-11-26 21:02:37.904069] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:46.973 [2024-11-26 21:02:37.904089] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xea9580, cid 3, qid 0
00:21:46.973 [2024-11-26 21:02:37.904179] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:46.973 [2024-11-26 21:02:37.904191] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:46.973 [2024-11-26 21:02:37.904198] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:46.973 [2024-11-26 21:02:37.904204] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xea9580) on tqpair=0xe47690
00:21:46.973 [2024-11-26 21:02:37.904219] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:46.973 [2024-11-26 21:02:37.904228] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:46.973 [2024-11-26 21:02:37.904235] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe47690)
00:21:46.973 [2024-11-26 21:02:37.904244] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:46.973 [2024-11-26 21:02:37.904264] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xea9580, cid 3, qid 0
00:21:46.974 [2024-11-26 21:02:37.904376] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:46.974 [2024-11-26 21:02:37.904389] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:46.974 [2024-11-26 21:02:37.904395] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:46.974 [2024-11-26 21:02:37.904402] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xea9580) on tqpair=0xe47690
00:21:46.974 [2024-11-26 21:02:37.904418] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:46.974 [2024-11-26 21:02:37.904427] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:46.974 [2024-11-26 21:02:37.904434] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe47690)
00:21:46.974 [2024-11-26 21:02:37.904444] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:46.974 [2024-11-26 21:02:37.904465] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xea9580, cid 3, qid 0
00:21:46.974 [2024-11-26 21:02:37.904563] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:46.974 [2024-11-26 21:02:37.904576] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:46.974 [2024-11-26 21:02:37.904583] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:46.974 [2024-11-26 21:02:37.904589] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xea9580) on tqpair=0xe47690
00:21:46.974 [2024-11-26 21:02:37.904605] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:46.974 [2024-11-26 21:02:37.904614] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:46.974 [2024-11-26 21:02:37.904620] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe47690)
00:21:46.974 [2024-11-26 21:02:37.904631] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:46.974 [2024-11-26 21:02:37.904651] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xea9580, cid 3, qid 0
00:21:46.974 [2024-11-26 21:02:37.904766] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:46.974 [2024-11-26 21:02:37.904782] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:46.974 [2024-11-26 21:02:37.904789] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:46.974 [2024-11-26 21:02:37.904796] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xea9580) on tqpair=0xe47690
00:21:46.974 [2024-11-26 21:02:37.904812] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:46.974 [2024-11-26 21:02:37.904821] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:46.974 [2024-11-26 21:02:37.904828] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe47690)
00:21:46.974 [2024-11-26 21:02:37.904838] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:46.974 [2024-11-26 21:02:37.904859] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xea9580, cid 3, qid 0
00:21:47.234 [2024-11-26 21:02:37.904961] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:47.234 [2024-11-26 21:02:37.904973] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:47.234 [2024-11-26 21:02:37.904980] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:47.234 [2024-11-26 21:02:37.904987] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xea9580) on tqpair=0xe47690
00:21:47.234 [2024-11-26 21:02:37.905002] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:47.234 [2024-11-26 21:02:37.905012] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:47.234 [2024-11-26 21:02:37.905018] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe47690)
00:21:47.234 [2024-11-26 21:02:37.905028] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:47.234 [2024-11-26 21:02:37.905049] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xea9580, cid 3, qid 0
00:21:47.234 [2024-11-26 21:02:37.905160] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:47.234 [2024-11-26 21:02:37.905172] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:47.234 [2024-11-26 21:02:37.905178] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:47.234 [2024-11-26 21:02:37.905185] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xea9580) on tqpair=0xe47690
00:21:47.234 [2024-11-26 21:02:37.905200] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:47.234 [2024-11-26 21:02:37.905210] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:47.234 [2024-11-26 21:02:37.905216] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe47690)
00:21:47.234 [2024-11-26 21:02:37.905226] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:47.234 [2024-11-26 21:02:37.905247] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xea9580, cid 3, qid 0
00:21:47.234 [2024-11-26 21:02:37.905357] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:47.234 [2024-11-26 21:02:37.905377] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:47.234 [2024-11-26 21:02:37.905385] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:47.234 [2024-11-26 21:02:37.905391] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xea9580) on tqpair=0xe47690
00:21:47.234 [2024-11-26 21:02:37.905408] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:47.234 [2024-11-26 21:02:37.905417] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:47.234 [2024-11-26 21:02:37.905423] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe47690)
00:21:47.234 [2024-11-26 21:02:37.905434] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:47.234 [2024-11-26 21:02:37.905454] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xea9580, cid 3, qid 0
00:21:47.234 [2024-11-26 21:02:37.905549] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:47.234 [2024-11-26 21:02:37.905564] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:47.234 [2024-11-26 21:02:37.905571] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:47.234 [2024-11-26 21:02:37.905577] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xea9580) on tqpair=0xe47690
00:21:47.234 [2024-11-26 21:02:37.905594] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:47.234 [2024-11-26 21:02:37.905603] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:47.234 [2024-11-26 21:02:37.905609] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe47690)
00:21:47.234 [2024-11-26 21:02:37.905619] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:47.234 [2024-11-26 21:02:37.905640] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xea9580, cid 3, qid 0
00:21:47.234 [2024-11-26 21:02:37.905746] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:47.234 [2024-11-26 21:02:37.905761] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:47.234 [2024-11-26 21:02:37.905768] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:47.234 [2024-11-26 21:02:37.905774] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xea9580) on tqpair=0xe47690
00:21:47.234 [2024-11-26 21:02:37.905791] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:47.234 [2024-11-26 21:02:37.905800] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:47.234 [2024-11-26 21:02:37.905807] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe47690)
00:21:47.234 [2024-11-26 21:02:37.905817] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:47.234 [2024-11-26 21:02:37.905838] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xea9580, cid 3, qid 0
00:21:47.234 [2024-11-26 21:02:37.905937] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:47.234 [2024-11-26 21:02:37.905949] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:47.234 [2024-11-26 21:02:37.905956] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:47.234 [2024-11-26 21:02:37.905962] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xea9580) on tqpair=0xe47690
00:21:47.234 [2024-11-26 21:02:37.905978] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:47.234 [2024-11-26 21:02:37.905987] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:47.234 [2024-11-26 21:02:37.905994] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe47690)
00:21:47.234 [2024-11-26 21:02:37.906004] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:47.234 [2024-11-26 21:02:37.906023] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xea9580, cid 3, qid 0
00:21:47.234 [2024-11-26 21:02:37.906120] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:47.234 [2024-11-26 21:02:37.906132] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:47.234 [2024-11-26 21:02:37.906143] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:47.234 [2024-11-26 21:02:37.906150] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xea9580) on tqpair=0xe47690
00:21:47.234 [2024-11-26 21:02:37.906166] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:47.234 [2024-11-26 21:02:37.906191] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:47.234 [2024-11-26 21:02:37.906198] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe47690)
00:21:47.234 [2024-11-26 21:02:37.906208] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:47.234 [2024-11-26 21:02:37.906228] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xea9580, cid 3, qid 0
00:21:47.234 [2024-11-26 21:02:37.906336] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:47.234 [2024-11-26 21:02:37.906349] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:47.234 [2024-11-26 21:02:37.906356] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:47.234 [2024-11-26 21:02:37.906363] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xea9580) on tqpair=0xe47690
00:21:47.234 [2024-11-26 21:02:37.906378] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:47.234 [2024-11-26 21:02:37.906388] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:47.234 [2024-11-26 21:02:37.906394] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe47690)
00:21:47.234 [2024-11-26 21:02:37.906405] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:47.235 [2024-11-26 21:02:37.906425] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xea9580, cid 3, qid 0
00:21:47.235 [2024-11-26 21:02:37.906517] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:47.235 [2024-11-26 21:02:37.906533] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:47.235 [2024-11-26 21:02:37.906539] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:47.235 [2024-11-26 21:02:37.906546] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xea9580) on tqpair=0xe47690
00:21:47.235 [2024-11-26 21:02:37.906562] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:47.235 [2024-11-26 21:02:37.906572] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:47.235 [2024-11-26 21:02:37.906578] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe47690)
00:21:47.235 [2024-11-26 21:02:37.906588] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:47.235 [2024-11-26 21:02:37.906608] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xea9580, cid 3, qid 0
00:21:47.235 [2024-11-26 21:02:37.910698] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:47.235 [2024-11-26 21:02:37.910714] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:47.235 [2024-11-26 21:02:37.910721] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:47.235 [2024-11-26 21:02:37.910728] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xea9580) on tqpair=0xe47690
00:21:47.235 [2024-11-26 21:02:37.910745] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:47.235 [2024-11-26 21:02:37.910755] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:47.235 [2024-11-26 21:02:37.910761] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe47690)
00:21:47.235 [2024-11-26 21:02:37.910771] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:47.235 [2024-11-26 21:02:37.910793] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xea9580, cid 3, qid 0
00:21:47.235 [2024-11-26 21:02:37.910911] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:47.235 [2024-11-26 21:02:37.910927] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:47.235 [2024-11-26 21:02:37.910934] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:47.235 [2024-11-26 21:02:37.910945] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xea9580) on tqpair=0xe47690
00:21:47.235 [2024-11-26 21:02:37.910959] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds
00:21:47.235 
00:21:47.235 21:02:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:21:47.235 [2024-11-26 21:02:37.947769] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization...
00:21:47.235 [2024-11-26 21:02:37.947809] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4033297 ]
00:21:47.235 [2024-11-26 21:02:37.998205] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout)
00:21:47.235 [2024-11-26 21:02:37.998253] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2
00:21:47.235 [2024-11-26 21:02:37.998263] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420
00:21:47.235 [2024-11-26 21:02:37.998281] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null)
00:21:47.235 [2024-11-26 21:02:37.998293] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix
00:21:47.235 [2024-11-26 21:02:37.998756] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout)
00:21:47.235 [2024-11-26 21:02:37.998796] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x61c690 0
00:21:47.235 [2024-11-26 21:02:38.004699] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1
00:21:47.235 [2024-11-26 21:02:38.004718] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1
00:21:47.235 [2024-11-26 21:02:38.004726] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0
00:21:47.235 [2024-11-26 21:02:38.004732] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0
00:21:47.235 [2024-11-26 21:02:38.004781] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:47.235 [2024-11-26 21:02:38.004795] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:47.235 [2024-11-26 21:02:38.004802] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61c690)
00:21:47.235 [2024-11-26 21:02:38.004816] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:21:47.235 [2024-11-26 21:02:38.004842] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x67e100, cid 0, qid 0
00:21:47.235 [2024-11-26 21:02:38.012715] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:47.235 [2024-11-26 21:02:38.012733] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:47.235 [2024-11-26 21:02:38.012740] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:47.235 [2024-11-26 21:02:38.012747] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x67e100) on tqpair=0x61c690
00:21:47.235 [2024-11-26 21:02:38.012779] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001
00:21:47.235 [2024-11-26 21:02:38.012792] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout)
00:21:47.235 [2024-11-26 21:02:38.012802] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout)
00:21:47.235 [2024-11-26 21:02:38.012822] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:47.235 [2024-11-26 21:02:38.012831] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:47.235 [2024-11-26 21:02:38.012838] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61c690)
00:21:47.235 [2024-11-26 21:02:38.012853] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:47.235 [2024-11-26 21:02:38.012878] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x67e100, cid 0, qid 0
00:21:47.235 [2024-11-26 21:02:38.013024] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:47.235 [2024-11-26 21:02:38.013040] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:47.235 [2024-11-26 21:02:38.013047] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:47.235 [2024-11-26 21:02:38.013054] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x67e100) on tqpair=0x61c690
00:21:47.235 [2024-11-26 21:02:38.013066] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout)
00:21:47.235 [2024-11-26 21:02:38.013081] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout)
00:21:47.235 [2024-11-26 21:02:38.013094] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:47.235 [2024-11-26 21:02:38.013102] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:47.235 [2024-11-26 21:02:38.013108] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61c690)
00:21:47.235 [2024-11-26 21:02:38.013119] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:47.235 [2024-11-26 21:02:38.013141] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x67e100, cid 0, qid 0
00:21:47.235 [2024-11-26 21:02:38.013239] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:47.235 [2024-11-26 21:02:38.013252] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:47.235 [2024-11-26 21:02:38.013259] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:47.235 [2024-11-26 21:02:38.013265] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x67e100) on tqpair=0x61c690
00:21:47.235 [2024-11-26 21:02:38.013274] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout)
00:21:47.235 [2024-11-26 21:02:38.013288] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms)
00:21:47.235 [2024-11-26 21:02:38.013300] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:47.235 [2024-11-26 21:02:38.013308] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:47.235 [2024-11-26 21:02:38.013314] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61c690)
00:21:47.235 [2024-11-26 21:02:38.013324] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:47.235 [2024-11-26 21:02:38.013345] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x67e100, cid 0, qid 0
00:21:47.235 [2024-11-26 21:02:38.013439] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:47.235 [2024-11-26 21:02:38.013452] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:47.235 [2024-11-26 21:02:38.013459] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:47.235 [2024-11-26 21:02:38.013465] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x67e100) on tqpair=0x61c690
00:21:47.235 [2024-11-26 21:02:38.013474] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:21:47.235 [2024-11-26 21:02:38.013491] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.235 [2024-11-26 21:02:38.013500] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.235 [2024-11-26 21:02:38.013507] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61c690) 00:21:47.235 [2024-11-26 21:02:38.013517] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.235 [2024-11-26 21:02:38.013537] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x67e100, cid 0, qid 0 00:21:47.235 [2024-11-26 21:02:38.013638] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.235 [2024-11-26 21:02:38.013653] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.235 [2024-11-26 21:02:38.013660] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.235 [2024-11-26 21:02:38.013667] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x67e100) on tqpair=0x61c690 00:21:47.235 [2024-11-26 21:02:38.013675] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:21:47.235 [2024-11-26 21:02:38.013684] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:21:47.235 [2024-11-26 21:02:38.013707] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:47.235 [2024-11-26 21:02:38.013816] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:21:47.235 [2024-11-26 21:02:38.013825] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:21:47.235 [2024-11-26 21:02:38.013836] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.236 [2024-11-26 21:02:38.013844] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.236 [2024-11-26 21:02:38.013850] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61c690) 00:21:47.236 [2024-11-26 21:02:38.013860] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.236 [2024-11-26 21:02:38.013883] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x67e100, cid 0, qid 0 00:21:47.236 [2024-11-26 21:02:38.014011] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.236 [2024-11-26 21:02:38.014026] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.236 [2024-11-26 21:02:38.014034] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.236 [2024-11-26 21:02:38.014040] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x67e100) on tqpair=0x61c690 00:21:47.236 [2024-11-26 21:02:38.014049] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:47.236 [2024-11-26 21:02:38.014066] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.236 [2024-11-26 21:02:38.014075] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.236 [2024-11-26 21:02:38.014081] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61c690) 00:21:47.236 [2024-11-26 21:02:38.014091] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.236 [2024-11-26 21:02:38.014113] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x67e100, cid 0, qid 0 00:21:47.236 [2024-11-26 21:02:38.014211] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.236 [2024-11-26 21:02:38.014226] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.236 [2024-11-26 21:02:38.014233] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.236 [2024-11-26 21:02:38.014240] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x67e100) on tqpair=0x61c690 00:21:47.236 [2024-11-26 21:02:38.014248] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:47.236 [2024-11-26 21:02:38.014256] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:21:47.236 [2024-11-26 21:02:38.014270] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:21:47.236 [2024-11-26 21:02:38.014289] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:21:47.236 [2024-11-26 21:02:38.014306] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.236 [2024-11-26 21:02:38.014315] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61c690) 00:21:47.236 [2024-11-26 21:02:38.014325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.236 [2024-11-26 21:02:38.014347] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x67e100, cid 0, qid 0 00:21:47.236 [2024-11-26 21:02:38.014483] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:47.236 [2024-11-26 21:02:38.014496] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:47.236 [2024-11-26 21:02:38.014503] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:47.236 [2024-11-26 21:02:38.014509] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61c690): datao=0, datal=4096, cccid=0 00:21:47.236 [2024-11-26 21:02:38.014516] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x67e100) on tqpair(0x61c690): expected_datao=0, payload_size=4096 00:21:47.236 [2024-11-26 21:02:38.014524] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.236 [2024-11-26 21:02:38.014540] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:47.236 [2024-11-26 21:02:38.014549] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:47.236 [2024-11-26 21:02:38.054809] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.236 [2024-11-26 21:02:38.054828] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.236 [2024-11-26 21:02:38.054836] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.236 [2024-11-26 21:02:38.054842] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x67e100) on tqpair=0x61c690 00:21:47.236 [2024-11-26 21:02:38.054854] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:21:47.236 [2024-11-26 21:02:38.054863] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:21:47.236 [2024-11-26 21:02:38.054871] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:21:47.236 [2024-11-26 21:02:38.054877] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:21:47.236 [2024-11-26 21:02:38.054885] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:21:47.236 [2024-11-26 21:02:38.054893] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:21:47.236 [2024-11-26 21:02:38.054907] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:21:47.236 [2024-11-26 21:02:38.054919] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.236 [2024-11-26 21:02:38.054927] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.236 [2024-11-26 21:02:38.054933] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61c690) 00:21:47.236 [2024-11-26 21:02:38.054944] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:47.236 [2024-11-26 21:02:38.054967] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x67e100, cid 0, qid 0 00:21:47.236 [2024-11-26 21:02:38.055074] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.236 [2024-11-26 21:02:38.055087] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.236 [2024-11-26 21:02:38.055094] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.236 [2024-11-26 21:02:38.055101] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x67e100) on tqpair=0x61c690 00:21:47.236 [2024-11-26 21:02:38.055111] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.236 [2024-11-26 21:02:38.055119] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.236 [2024-11-26 21:02:38.055125] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61c690) 00:21:47.236 [2024-11-26 21:02:38.055142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:47.236 [2024-11-26 
21:02:38.055154] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.236 [2024-11-26 21:02:38.055161] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.236 [2024-11-26 21:02:38.055167] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x61c690) 00:21:47.236 [2024-11-26 21:02:38.055176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:47.236 [2024-11-26 21:02:38.055185] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.236 [2024-11-26 21:02:38.055192] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.236 [2024-11-26 21:02:38.055198] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x61c690) 00:21:47.236 [2024-11-26 21:02:38.055207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:47.236 [2024-11-26 21:02:38.055216] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.236 [2024-11-26 21:02:38.055223] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.236 [2024-11-26 21:02:38.055229] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61c690) 00:21:47.236 [2024-11-26 21:02:38.055238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:47.236 [2024-11-26 21:02:38.055262] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:21:47.236 [2024-11-26 21:02:38.055281] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:47.236 [2024-11-26 21:02:38.055294] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.236 [2024-11-26 21:02:38.055301] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61c690) 00:21:47.236 [2024-11-26 21:02:38.055311] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.236 [2024-11-26 21:02:38.055334] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x67e100, cid 0, qid 0 00:21:47.236 [2024-11-26 21:02:38.055360] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x67e280, cid 1, qid 0 00:21:47.236 [2024-11-26 21:02:38.055368] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x67e400, cid 2, qid 0 00:21:47.236 [2024-11-26 21:02:38.055376] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x67e580, cid 3, qid 0 00:21:47.236 [2024-11-26 21:02:38.055384] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x67e700, cid 4, qid 0 00:21:47.236 [2024-11-26 21:02:38.055545] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.236 [2024-11-26 21:02:38.055562] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.236 [2024-11-26 21:02:38.055569] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.236 [2024-11-26 21:02:38.055575] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x67e700) on tqpair=0x61c690 00:21:47.236 [2024-11-26 21:02:38.055584] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:21:47.236 [2024-11-26 21:02:38.055593] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:21:47.236 [2024-11-26 21:02:38.055611] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] 
setting state to set number of queues (timeout 30000 ms) 00:21:47.236 [2024-11-26 21:02:38.055624] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:21:47.236 [2024-11-26 21:02:38.055638] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.236 [2024-11-26 21:02:38.055646] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.236 [2024-11-26 21:02:38.055653] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61c690) 00:21:47.236 [2024-11-26 21:02:38.055663] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:47.236 [2024-11-26 21:02:38.055700] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x67e700, cid 4, qid 0 00:21:47.236 [2024-11-26 21:02:38.055798] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.236 [2024-11-26 21:02:38.055811] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.236 [2024-11-26 21:02:38.055817] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.236 [2024-11-26 21:02:38.055824] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x67e700) on tqpair=0x61c690 00:21:47.236 [2024-11-26 21:02:38.055895] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:21:47.237 [2024-11-26 21:02:38.055916] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:21:47.237 [2024-11-26 21:02:38.055931] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.237 [2024-11-26 21:02:38.055939] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61c690) 00:21:47.237 
[2024-11-26 21:02:38.055949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.237 [2024-11-26 21:02:38.055971] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x67e700, cid 4, qid 0 00:21:47.237 [2024-11-26 21:02:38.056163] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:47.237 [2024-11-26 21:02:38.056179] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:47.237 [2024-11-26 21:02:38.056186] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:47.237 [2024-11-26 21:02:38.056193] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61c690): datao=0, datal=4096, cccid=4 00:21:47.237 [2024-11-26 21:02:38.056200] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x67e700) on tqpair(0x61c690): expected_datao=0, payload_size=4096 00:21:47.237 [2024-11-26 21:02:38.056207] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.237 [2024-11-26 21:02:38.056217] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:47.237 [2024-11-26 21:02:38.056225] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:47.237 [2024-11-26 21:02:38.100704] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.237 [2024-11-26 21:02:38.100724] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.237 [2024-11-26 21:02:38.100732] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.237 [2024-11-26 21:02:38.100739] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x67e700) on tqpair=0x61c690 00:21:47.237 [2024-11-26 21:02:38.100762] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:21:47.237 [2024-11-26 21:02:38.100780] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:21:47.237 [2024-11-26 21:02:38.100800] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:21:47.237 [2024-11-26 21:02:38.100814] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.237 [2024-11-26 21:02:38.100822] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61c690) 00:21:47.237 [2024-11-26 21:02:38.100834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.237 [2024-11-26 21:02:38.100858] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x67e700, cid 4, qid 0 00:21:47.237 [2024-11-26 21:02:38.101019] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:47.237 [2024-11-26 21:02:38.101033] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:47.237 [2024-11-26 21:02:38.101040] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:47.237 [2024-11-26 21:02:38.101046] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61c690): datao=0, datal=4096, cccid=4 00:21:47.237 [2024-11-26 21:02:38.101054] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x67e700) on tqpair(0x61c690): expected_datao=0, payload_size=4096 00:21:47.237 [2024-11-26 21:02:38.101061] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.237 [2024-11-26 21:02:38.101078] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:47.237 [2024-11-26 21:02:38.101087] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:47.237 [2024-11-26 21:02:38.141798] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.237 [2024-11-26 21:02:38.141817] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:21:47.237 [2024-11-26 21:02:38.141825] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.237 [2024-11-26 21:02:38.141832] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x67e700) on tqpair=0x61c690 00:21:47.237 [2024-11-26 21:02:38.141851] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:21:47.237 [2024-11-26 21:02:38.141869] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:21:47.237 [2024-11-26 21:02:38.141884] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.237 [2024-11-26 21:02:38.141892] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61c690) 00:21:47.237 [2024-11-26 21:02:38.141903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.237 [2024-11-26 21:02:38.141926] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x67e700, cid 4, qid 0 00:21:47.237 [2024-11-26 21:02:38.142042] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:47.237 [2024-11-26 21:02:38.142055] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:47.237 [2024-11-26 21:02:38.142062] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:47.237 [2024-11-26 21:02:38.142068] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61c690): datao=0, datal=4096, cccid=4 00:21:47.237 [2024-11-26 21:02:38.142076] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x67e700) on tqpair(0x61c690): expected_datao=0, payload_size=4096 00:21:47.237 [2024-11-26 21:02:38.142083] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:21:47.237 [2024-11-26 21:02:38.142099] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:47.237 [2024-11-26 21:02:38.142108] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:47.497 [2024-11-26 21:02:38.182815] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.497 [2024-11-26 21:02:38.182836] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.497 [2024-11-26 21:02:38.182845] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.497 [2024-11-26 21:02:38.182852] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x67e700) on tqpair=0x61c690 00:21:47.497 [2024-11-26 21:02:38.182872] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:21:47.497 [2024-11-26 21:02:38.182890] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:21:47.497 [2024-11-26 21:02:38.182906] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:21:47.497 [2024-11-26 21:02:38.182918] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:21:47.497 [2024-11-26 21:02:38.182931] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:21:47.497 [2024-11-26 21:02:38.182941] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:21:47.497 [2024-11-26 21:02:38.182950] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:21:47.497 [2024-11-26 
21:02:38.182957] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:21:47.497 [2024-11-26 21:02:38.182966] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:21:47.497 [2024-11-26 21:02:38.182985] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.497 [2024-11-26 21:02:38.182994] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61c690) 00:21:47.497 [2024-11-26 21:02:38.183006] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.497 [2024-11-26 21:02:38.183017] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.497 [2024-11-26 21:02:38.183024] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.497 [2024-11-26 21:02:38.183030] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61c690) 00:21:47.497 [2024-11-26 21:02:38.183039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:47.497 [2024-11-26 21:02:38.183067] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x67e700, cid 4, qid 0 00:21:47.497 [2024-11-26 21:02:38.183079] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x67e880, cid 5, qid 0 00:21:47.497 [2024-11-26 21:02:38.183201] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.497 [2024-11-26 21:02:38.183215] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.497 [2024-11-26 21:02:38.183222] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.497 [2024-11-26 21:02:38.183228] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x67e700) on tqpair=0x61c690 00:21:47.497 
[2024-11-26 21:02:38.183238] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.497 [2024-11-26 21:02:38.183248] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.497 [2024-11-26 21:02:38.183255] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.497 [2024-11-26 21:02:38.183262] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x67e880) on tqpair=0x61c690 00:21:47.497 [2024-11-26 21:02:38.183277] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.497 [2024-11-26 21:02:38.183286] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61c690) 00:21:47.497 [2024-11-26 21:02:38.183297] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.498 [2024-11-26 21:02:38.183318] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x67e880, cid 5, qid 0 00:21:47.498 [2024-11-26 21:02:38.186697] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.498 [2024-11-26 21:02:38.186715] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.498 [2024-11-26 21:02:38.186723] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.498 [2024-11-26 21:02:38.186730] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x67e880) on tqpair=0x61c690 00:21:47.498 [2024-11-26 21:02:38.186748] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.498 [2024-11-26 21:02:38.186757] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61c690) 00:21:47.498 [2024-11-26 21:02:38.186768] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.498 [2024-11-26 21:02:38.186795] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x67e880, cid 5, qid 0 00:21:47.498 [2024-11-26 21:02:38.186912] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.498 [2024-11-26 21:02:38.186925] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.498 [2024-11-26 21:02:38.186932] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.498 [2024-11-26 21:02:38.186939] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x67e880) on tqpair=0x61c690 00:21:47.498 [2024-11-26 21:02:38.186954] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.498 [2024-11-26 21:02:38.186963] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61c690) 00:21:47.498 [2024-11-26 21:02:38.186973] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.498 [2024-11-26 21:02:38.186994] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x67e880, cid 5, qid 0 00:21:47.498 [2024-11-26 21:02:38.187089] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.498 [2024-11-26 21:02:38.187101] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.498 [2024-11-26 21:02:38.187108] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.498 [2024-11-26 21:02:38.187115] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x67e880) on tqpair=0x61c690 00:21:47.498 [2024-11-26 21:02:38.187139] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.498 [2024-11-26 21:02:38.187150] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61c690) 00:21:47.498 [2024-11-26 21:02:38.187161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:47.498 [2024-11-26 21:02:38.187173] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.498 [2024-11-26 21:02:38.187181] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61c690) 00:21:47.498 [2024-11-26 21:02:38.187190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.498 [2024-11-26 21:02:38.187202] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.498 [2024-11-26 21:02:38.187210] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x61c690) 00:21:47.498 [2024-11-26 21:02:38.187219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.498 [2024-11-26 21:02:38.187231] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.498 [2024-11-26 21:02:38.187238] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x61c690) 00:21:47.498 [2024-11-26 21:02:38.187248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.498 [2024-11-26 21:02:38.187270] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x67e880, cid 5, qid 0 00:21:47.498 [2024-11-26 21:02:38.187296] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x67e700, cid 4, qid 0 00:21:47.498 [2024-11-26 21:02:38.187304] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x67ea00, cid 6, qid 0 00:21:47.498 [2024-11-26 21:02:38.187312] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x67eb80, cid 7, qid 0 00:21:47.498 [2024-11-26 21:02:38.187580] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:47.498 [2024-11-26 21:02:38.187594] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:47.498 [2024-11-26 21:02:38.187601] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:47.498 [2024-11-26 21:02:38.187607] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61c690): datao=0, datal=8192, cccid=5 00:21:47.498 [2024-11-26 21:02:38.187615] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x67e880) on tqpair(0x61c690): expected_datao=0, payload_size=8192 00:21:47.498 [2024-11-26 21:02:38.187626] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.498 [2024-11-26 21:02:38.187645] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:47.498 [2024-11-26 21:02:38.187655] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:47.498 [2024-11-26 21:02:38.187667] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:47.498 [2024-11-26 21:02:38.187677] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:47.498 [2024-11-26 21:02:38.187683] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:47.498 [2024-11-26 21:02:38.187700] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61c690): datao=0, datal=512, cccid=4 00:21:47.498 [2024-11-26 21:02:38.187707] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x67e700) on tqpair(0x61c690): expected_datao=0, payload_size=512 00:21:47.498 [2024-11-26 21:02:38.187714] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.498 [2024-11-26 21:02:38.187724] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:47.498 [2024-11-26 21:02:38.187731] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:47.498 [2024-11-26 21:02:38.187739] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 
00:21:47.498 [2024-11-26 21:02:38.187748] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:47.498 [2024-11-26 21:02:38.187755] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:47.498 [2024-11-26 21:02:38.187761] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61c690): datao=0, datal=512, cccid=6 00:21:47.498 [2024-11-26 21:02:38.187768] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x67ea00) on tqpair(0x61c690): expected_datao=0, payload_size=512 00:21:47.498 [2024-11-26 21:02:38.187775] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.498 [2024-11-26 21:02:38.187784] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:47.498 [2024-11-26 21:02:38.187791] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:47.498 [2024-11-26 21:02:38.187800] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:47.498 [2024-11-26 21:02:38.187809] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:47.498 [2024-11-26 21:02:38.187815] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:47.498 [2024-11-26 21:02:38.187821] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61c690): datao=0, datal=4096, cccid=7 00:21:47.498 [2024-11-26 21:02:38.187828] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x67eb80) on tqpair(0x61c690): expected_datao=0, payload_size=4096 00:21:47.498 [2024-11-26 21:02:38.187835] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.498 [2024-11-26 21:02:38.187845] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:47.498 [2024-11-26 21:02:38.187852] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:47.498 [2024-11-26 21:02:38.187864] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.498 [2024-11-26 21:02:38.187873] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:47.498 [2024-11-26 21:02:38.187880] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:47.498 [2024-11-26 21:02:38.187886] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x67e880) on tqpair=0x61c690
00:21:47.498 [2024-11-26 21:02:38.187908] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:47.498 [2024-11-26 21:02:38.187920] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:47.498 [2024-11-26 21:02:38.187927] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:47.498 [2024-11-26 21:02:38.187934] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x67e700) on tqpair=0x61c690
00:21:47.498 [2024-11-26 21:02:38.187949] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:47.498 [2024-11-26 21:02:38.187961] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:47.498 [2024-11-26 21:02:38.187967] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:47.498 [2024-11-26 21:02:38.187974] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x67ea00) on tqpair=0x61c690
00:21:47.498 [2024-11-26 21:02:38.188001] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:47.498 [2024-11-26 21:02:38.188013] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:47.498 [2024-11-26 21:02:38.188019] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:47.498 [2024-11-26 21:02:38.188026] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x67eb80) on tqpair=0x61c690
00:21:47.498 =====================================================
00:21:47.498 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:21:47.498 =====================================================
00:21:47.498 Controller Capabilities/Features
00:21:47.498 ================================
00:21:47.498 Vendor ID: 8086
00:21:47.498 Subsystem Vendor ID: 8086
00:21:47.498 Serial Number: SPDK00000000000001
00:21:47.498 Model Number: SPDK bdev Controller
00:21:47.498 Firmware Version: 25.01
00:21:47.498 Recommended Arb Burst: 6
00:21:47.498 IEEE OUI Identifier: e4 d2 5c
00:21:47.498 Multi-path I/O
00:21:47.498 May have multiple subsystem ports: Yes
00:21:47.498 May have multiple controllers: Yes
00:21:47.498 Associated with SR-IOV VF: No
00:21:47.498 Max Data Transfer Size: 131072
00:21:47.498 Max Number of Namespaces: 32
00:21:47.498 Max Number of I/O Queues: 127
00:21:47.498 NVMe Specification Version (VS): 1.3
00:21:47.498 NVMe Specification Version (Identify): 1.3
00:21:47.498 Maximum Queue Entries: 128
00:21:47.498 Contiguous Queues Required: Yes
00:21:47.498 Arbitration Mechanisms Supported
00:21:47.498 Weighted Round Robin: Not Supported
00:21:47.498 Vendor Specific: Not Supported
00:21:47.498 Reset Timeout: 15000 ms
00:21:47.498 Doorbell Stride: 4 bytes
00:21:47.498 NVM Subsystem Reset: Not Supported
00:21:47.498 Command Sets Supported
00:21:47.498 NVM Command Set: Supported
00:21:47.498 Boot Partition: Not Supported
00:21:47.498 Memory Page Size Minimum: 4096 bytes
00:21:47.498 Memory Page Size Maximum: 4096 bytes
00:21:47.498 Persistent Memory Region: Not Supported
00:21:47.498 Optional Asynchronous Events Supported
00:21:47.498 Namespace Attribute Notices: Supported
00:21:47.498 Firmware Activation Notices: Not Supported
00:21:47.499 ANA Change Notices: Not Supported
00:21:47.499 PLE Aggregate Log Change Notices: Not Supported
00:21:47.499 LBA Status Info Alert Notices: Not Supported
00:21:47.499 EGE Aggregate Log Change Notices: Not Supported
00:21:47.499 Normal NVM Subsystem Shutdown event: Not Supported
00:21:47.499 Zone Descriptor Change Notices: Not Supported
00:21:47.499 Discovery Log Change Notices: Not Supported
00:21:47.499 Controller Attributes
00:21:47.499 128-bit Host Identifier: Supported
00:21:47.499 Non-Operational Permissive Mode: Not Supported
00:21:47.499 NVM Sets: Not Supported
00:21:47.499 Read Recovery Levels: Not Supported
00:21:47.499 Endurance Groups: Not Supported
00:21:47.499 Predictable Latency Mode: Not Supported
00:21:47.499 Traffic Based Keep ALive: Not Supported
00:21:47.499 Namespace Granularity: Not Supported
00:21:47.499 SQ Associations: Not Supported
00:21:47.499 UUID List: Not Supported
00:21:47.499 Multi-Domain Subsystem: Not Supported
00:21:47.499 Fixed Capacity Management: Not Supported
00:21:47.499 Variable Capacity Management: Not Supported
00:21:47.499 Delete Endurance Group: Not Supported
00:21:47.499 Delete NVM Set: Not Supported
00:21:47.499 Extended LBA Formats Supported: Not Supported
00:21:47.499 Flexible Data Placement Supported: Not Supported
00:21:47.499
00:21:47.499 Controller Memory Buffer Support
00:21:47.499 ================================
00:21:47.499 Supported: No
00:21:47.499
00:21:47.499 Persistent Memory Region Support
00:21:47.499 ================================
00:21:47.499 Supported: No
00:21:47.499
00:21:47.499 Admin Command Set Attributes
00:21:47.499 ============================
00:21:47.499 Security Send/Receive: Not Supported
00:21:47.499 Format NVM: Not Supported
00:21:47.499 Firmware Activate/Download: Not Supported
00:21:47.499 Namespace Management: Not Supported
00:21:47.499 Device Self-Test: Not Supported
00:21:47.499 Directives: Not Supported
00:21:47.499 NVMe-MI: Not Supported
00:21:47.499 Virtualization Management: Not Supported
00:21:47.499 Doorbell Buffer Config: Not Supported
00:21:47.499 Get LBA Status Capability: Not Supported
00:21:47.499 Command & Feature Lockdown Capability: Not Supported
00:21:47.499 Abort Command Limit: 4
00:21:47.499 Async Event Request Limit: 4
00:21:47.499 Number of Firmware Slots: N/A
00:21:47.499 Firmware Slot 1 Read-Only: N/A
00:21:47.499 Firmware Activation Without Reset: N/A
00:21:47.499 Multiple Update Detection Support: N/A
00:21:47.499 Firmware Update Granularity: No Information Provided
00:21:47.499 Per-Namespace SMART Log: No
00:21:47.499 Asymmetric Namespace Access Log Page: Not Supported
00:21:47.499 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:21:47.499 Command Effects Log Page: Supported
00:21:47.499 Get Log Page Extended Data: Supported
00:21:47.499 Telemetry Log Pages: Not Supported
00:21:47.499 Persistent Event Log Pages: Not Supported
00:21:47.499 Supported Log Pages Log Page: May Support
00:21:47.499 Commands Supported & Effects Log Page: Not Supported
00:21:47.499 Feature Identifiers & Effects Log Page:May Support
00:21:47.499 NVMe-MI Commands & Effects Log Page: May Support
00:21:47.499 Data Area 4 for Telemetry Log: Not Supported
00:21:47.499 Error Log Page Entries Supported: 128
00:21:47.499 Keep Alive: Supported
00:21:47.499 Keep Alive Granularity: 10000 ms
00:21:47.499
00:21:47.499 NVM Command Set Attributes
00:21:47.499 ==========================
00:21:47.499 Submission Queue Entry Size
00:21:47.499 Max: 64
00:21:47.499 Min: 64
00:21:47.499 Completion Queue Entry Size
00:21:47.499 Max: 16
00:21:47.499 Min: 16
00:21:47.499 Number of Namespaces: 32
00:21:47.499 Compare Command: Supported
00:21:47.499 Write Uncorrectable Command: Not Supported
00:21:47.499 Dataset Management Command: Supported
00:21:47.499 Write Zeroes Command: Supported
00:21:47.499 Set Features Save Field: Not Supported
00:21:47.499 Reservations: Supported
00:21:47.499 Timestamp: Not Supported
00:21:47.499 Copy: Supported
00:21:47.499 Volatile Write Cache: Present
00:21:47.499 Atomic Write Unit (Normal): 1
00:21:47.499 Atomic Write Unit (PFail): 1
00:21:47.499 Atomic Compare & Write Unit: 1
00:21:47.499 Fused Compare & Write: Supported
00:21:47.499 Scatter-Gather List
00:21:47.499 SGL Command Set: Supported
00:21:47.499 SGL Keyed: Supported
00:21:47.499 SGL Bit Bucket Descriptor: Not Supported
00:21:47.499 SGL Metadata Pointer: Not Supported
00:21:47.499 Oversized SGL: Not Supported
00:21:47.499 SGL Metadata Address: Not Supported
00:21:47.499 SGL Offset: Supported
00:21:47.499 Transport SGL Data Block: Not Supported
00:21:47.499 Replay Protected Memory Block: Not Supported
00:21:47.499
00:21:47.499 Firmware Slot Information
00:21:47.499 =========================
00:21:47.499 Active slot: 1
00:21:47.499 Slot 1 Firmware Revision: 25.01
00:21:47.499
00:21:47.499
00:21:47.499 Commands Supported and Effects
00:21:47.499 ==============================
00:21:47.499 Admin Commands
00:21:47.499 --------------
00:21:47.499 Get Log Page (02h): Supported
00:21:47.499 Identify (06h): Supported
00:21:47.499 Abort (08h): Supported
00:21:47.499 Set Features (09h): Supported
00:21:47.499 Get Features (0Ah): Supported
00:21:47.499 Asynchronous Event Request (0Ch): Supported
00:21:47.499 Keep Alive (18h): Supported
00:21:47.499 I/O Commands
00:21:47.499 ------------
00:21:47.499 Flush (00h): Supported LBA-Change
00:21:47.499 Write (01h): Supported LBA-Change
00:21:47.499 Read (02h): Supported
00:21:47.499 Compare (05h): Supported
00:21:47.499 Write Zeroes (08h): Supported LBA-Change
00:21:47.499 Dataset Management (09h): Supported LBA-Change
00:21:47.499 Copy (19h): Supported LBA-Change
00:21:47.499
00:21:47.499 Error Log
00:21:47.499 =========
00:21:47.499
00:21:47.499 Arbitration
00:21:47.499 ===========
00:21:47.499 Arbitration Burst: 1
00:21:47.499
00:21:47.499 Power Management
00:21:47.499 ================
00:21:47.499 Number of Power States: 1
00:21:47.499 Current Power State: Power State #0
00:21:47.499 Power State #0:
00:21:47.499 Max Power: 0.00 W
00:21:47.499 Non-Operational State: Operational
00:21:47.499 Entry Latency: Not Reported
00:21:47.499 Exit Latency: Not Reported
00:21:47.499 Relative Read Throughput: 0
00:21:47.499 Relative Read Latency: 0
00:21:47.499 Relative Write Throughput: 0
00:21:47.499 Relative Write Latency: 0
00:21:47.499 Idle Power: Not Reported
00:21:47.499 Active Power: Not Reported
00:21:47.499 Non-Operational Permissive Mode: Not Supported
00:21:47.499
00:21:47.499 Health Information
00:21:47.499 ==================
00:21:47.499 Critical Warnings:
00:21:47.499 Available Spare Space: OK
00:21:47.499 Temperature: OK
00:21:47.499 Device Reliability: OK
00:21:47.499 Read Only: No
00:21:47.499 Volatile Memory Backup: OK
00:21:47.499 Current Temperature: 0 Kelvin (-273 Celsius)
00:21:47.499 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:21:47.499 Available Spare: 0%
00:21:47.499 Available Spare Threshold: 0%
00:21:47.499 Life Percentage Used:[2024-11-26 21:02:38.188135] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:47.499 [2024-11-26 21:02:38.188147] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x61c690)
00:21:47.499 [2024-11-26 21:02:38.188158] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:47.499 [2024-11-26 21:02:38.188180] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x67eb80, cid 7, qid 0
00:21:47.499 [2024-11-26 21:02:38.188336] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:47.499 [2024-11-26 21:02:38.188349] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:47.499 [2024-11-26 21:02:38.188356] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:47.499 [2024-11-26 21:02:38.188362] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x67eb80) on tqpair=0x61c690
00:21:47.499 [2024-11-26 21:02:38.188410] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD
00:21:47.499 [2024-11-26 21:02:38.188430] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x67e100) on tqpair=0x61c690
00:21:47.499 [2024-11-26 21:02:38.188441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:47.499 [2024-11-26 21:02:38.188449]
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x67e280) on tqpair=0x61c690 00:21:47.499 [2024-11-26 21:02:38.188457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.499 [2024-11-26 21:02:38.188465] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x67e400) on tqpair=0x61c690 00:21:47.499 [2024-11-26 21:02:38.188473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.499 [2024-11-26 21:02:38.188481] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x67e580) on tqpair=0x61c690 00:21:47.499 [2024-11-26 21:02:38.188489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.499 [2024-11-26 21:02:38.188517] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.499 [2024-11-26 21:02:38.188525] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.499 [2024-11-26 21:02:38.188532] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61c690) 00:21:47.500 [2024-11-26 21:02:38.188542] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.500 [2024-11-26 21:02:38.188564] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x67e580, cid 3, qid 0 00:21:47.500 [2024-11-26 21:02:38.188701] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.500 [2024-11-26 21:02:38.188716] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.500 [2024-11-26 21:02:38.188723] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.500 [2024-11-26 21:02:38.188729] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x67e580) on tqpair=0x61c690 
00:21:47.500 [2024-11-26 21:02:38.188740] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.500 [2024-11-26 21:02:38.188748] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.500 [2024-11-26 21:02:38.188755] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61c690) 00:21:47.500 [2024-11-26 21:02:38.188765] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.500 [2024-11-26 21:02:38.188795] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x67e580, cid 3, qid 0 00:21:47.500 [2024-11-26 21:02:38.188920] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.500 [2024-11-26 21:02:38.188936] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.500 [2024-11-26 21:02:38.188943] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.500 [2024-11-26 21:02:38.188949] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x67e580) on tqpair=0x61c690 00:21:47.500 [2024-11-26 21:02:38.188957] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:21:47.500 [2024-11-26 21:02:38.188965] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:21:47.500 [2024-11-26 21:02:38.188981] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.500 [2024-11-26 21:02:38.188990] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.500 [2024-11-26 21:02:38.188996] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61c690) 00:21:47.500 [2024-11-26 21:02:38.189006] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.500 [2024-11-26 21:02:38.189027] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x67e580, cid 3, qid 0 00:21:47.500 [2024-11-26 21:02:38.189126] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.500 [2024-11-26 21:02:38.189141] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.500 [2024-11-26 21:02:38.189149] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.500 [2024-11-26 21:02:38.189155] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x67e580) on tqpair=0x61c690 00:21:47.500 [2024-11-26 21:02:38.189171] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.500 [2024-11-26 21:02:38.189181] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.500 [2024-11-26 21:02:38.189187] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61c690) 00:21:47.500 [2024-11-26 21:02:38.189197] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.500 [2024-11-26 21:02:38.189218] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x67e580, cid 3, qid 0 00:21:47.500 [2024-11-26 21:02:38.189314] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.500 [2024-11-26 21:02:38.189327] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.500 [2024-11-26 21:02:38.189334] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.500 [2024-11-26 21:02:38.189341] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x67e580) on tqpair=0x61c690 00:21:47.500 [2024-11-26 21:02:38.189356] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.500 [2024-11-26 21:02:38.189366] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.500 [2024-11-26 21:02:38.189372] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61c690) 
00:21:47.500 [2024-11-26 21:02:38.189382] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.500 [2024-11-26 21:02:38.189402] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x67e580, cid 3, qid 0 00:21:47.500 [2024-11-26 21:02:38.189498] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.500 [2024-11-26 21:02:38.189513] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.500 [2024-11-26 21:02:38.189520] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.500 [2024-11-26 21:02:38.189527] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x67e580) on tqpair=0x61c690 00:21:47.500 [2024-11-26 21:02:38.189543] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.500 [2024-11-26 21:02:38.189552] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.500 [2024-11-26 21:02:38.189559] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61c690) 00:21:47.500 [2024-11-26 21:02:38.189575] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.500 [2024-11-26 21:02:38.189597] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x67e580, cid 3, qid 0 00:21:47.500 [2024-11-26 21:02:38.189702] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.500 [2024-11-26 21:02:38.189718] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.500 [2024-11-26 21:02:38.189725] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.500 [2024-11-26 21:02:38.189732] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x67e580) on tqpair=0x61c690 00:21:47.500 [2024-11-26 21:02:38.189748] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.500 
[2024-11-26 21:02:38.189757] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.500 [2024-11-26 21:02:38.189764] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61c690) 00:21:47.500 [2024-11-26 21:02:38.189774] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.500 [2024-11-26 21:02:38.189795] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x67e580, cid 3, qid 0 00:21:47.500 [2024-11-26 21:02:38.189892] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.500 [2024-11-26 21:02:38.189904] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.500 [2024-11-26 21:02:38.189911] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.500 [2024-11-26 21:02:38.189917] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x67e580) on tqpair=0x61c690 00:21:47.500 [2024-11-26 21:02:38.189933] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.500 [2024-11-26 21:02:38.189942] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.500 [2024-11-26 21:02:38.189948] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61c690) 00:21:47.500 [2024-11-26 21:02:38.189959] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.500 [2024-11-26 21:02:38.189979] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x67e580, cid 3, qid 0 00:21:47.500 [2024-11-26 21:02:38.190073] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.500 [2024-11-26 21:02:38.190088] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.500 [2024-11-26 21:02:38.190095] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.500 [2024-11-26 
21:02:38.190102] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x67e580) on tqpair=0x61c690 00:21:47.500 [2024-11-26 21:02:38.190118] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.500 [2024-11-26 21:02:38.190127] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.500 [2024-11-26 21:02:38.190134] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61c690) 00:21:47.500 [2024-11-26 21:02:38.190144] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.500 [2024-11-26 21:02:38.190165] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x67e580, cid 3, qid 0 00:21:47.500 [2024-11-26 21:02:38.190254] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.500 [2024-11-26 21:02:38.190266] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.500 [2024-11-26 21:02:38.190273] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.500 [2024-11-26 21:02:38.190280] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x67e580) on tqpair=0x61c690 00:21:47.500 [2024-11-26 21:02:38.190296] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.500 [2024-11-26 21:02:38.190305] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.500 [2024-11-26 21:02:38.190312] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61c690) 00:21:47.500 [2024-11-26 21:02:38.190322] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.500 [2024-11-26 21:02:38.190346] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x67e580, cid 3, qid 0 00:21:47.500 [2024-11-26 21:02:38.190444] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.500 
[2024-11-26 21:02:38.190460] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.500 [2024-11-26 21:02:38.190467] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.500 [2024-11-26 21:02:38.190474] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x67e580) on tqpair=0x61c690 00:21:47.500 [2024-11-26 21:02:38.190490] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.500 [2024-11-26 21:02:38.190499] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.500 [2024-11-26 21:02:38.190505] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61c690) 00:21:47.500 [2024-11-26 21:02:38.190515] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.500 [2024-11-26 21:02:38.190536] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x67e580, cid 3, qid 0 00:21:47.500 [2024-11-26 21:02:38.190632] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.500 [2024-11-26 21:02:38.190644] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.500 [2024-11-26 21:02:38.190651] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.500 [2024-11-26 21:02:38.190658] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x67e580) on tqpair=0x61c690 00:21:47.500 [2024-11-26 21:02:38.190674] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.500 [2024-11-26 21:02:38.190683] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.500 [2024-11-26 21:02:38.190698] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61c690) 00:21:47.500 [2024-11-26 21:02:38.190708] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.500 [2024-11-26 
21:02:38.190729] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x67e580, cid 3, qid 0 00:21:47.500 [2024-11-26 21:02:38.190828] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.500 [2024-11-26 21:02:38.190843] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.500 [2024-11-26 21:02:38.190851] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.500 [2024-11-26 21:02:38.190857] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x67e580) on tqpair=0x61c690 00:21:47.500 [2024-11-26 21:02:38.190873] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.501 [2024-11-26 21:02:38.190882] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.501 [2024-11-26 21:02:38.190889] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61c690) 00:21:47.501 [2024-11-26 21:02:38.190899] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.501 [2024-11-26 21:02:38.190919] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x67e580, cid 3, qid 0 00:21:47.501 [2024-11-26 21:02:38.191028] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.501 [2024-11-26 21:02:38.191043] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.501 [2024-11-26 21:02:38.191050] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.501 [2024-11-26 21:02:38.191056] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x67e580) on tqpair=0x61c690 00:21:47.501 [2024-11-26 21:02:38.191073] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.501 [2024-11-26 21:02:38.191082] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.501 [2024-11-26 21:02:38.191088] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=3 on tqpair(0x61c690) 00:21:47.501 [2024-11-26 21:02:38.191098] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.501 [2024-11-26 21:02:38.191119] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x67e580, cid 3, qid 0 00:21:47.501 [2024-11-26 21:02:38.191215] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.501 [2024-11-26 21:02:38.191231] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.501 [2024-11-26 21:02:38.191238] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.501 [2024-11-26 21:02:38.191244] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x67e580) on tqpair=0x61c690 00:21:47.501 [2024-11-26 21:02:38.191260] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.501 [2024-11-26 21:02:38.191269] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.501 [2024-11-26 21:02:38.191276] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61c690) 00:21:47.501 [2024-11-26 21:02:38.191286] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.501 [2024-11-26 21:02:38.191306] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x67e580, cid 3, qid 0 00:21:47.501 [2024-11-26 21:02:38.191431] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.501 [2024-11-26 21:02:38.191446] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.501 [2024-11-26 21:02:38.191453] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.501 [2024-11-26 21:02:38.191460] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x67e580) on tqpair=0x61c690 00:21:47.501 [2024-11-26 21:02:38.191476] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:21:47.501 [2024-11-26 21:02:38.191485] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.501 [2024-11-26 21:02:38.191492] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61c690) 00:21:47.501 [2024-11-26 21:02:38.191502] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.501 [2024-11-26 21:02:38.191522] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x67e580, cid 3, qid 0 00:21:47.501 [2024-11-26 21:02:38.191634] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.501 [2024-11-26 21:02:38.191647] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.501 [2024-11-26 21:02:38.191654] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.501 [2024-11-26 21:02:38.191660] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x67e580) on tqpair=0x61c690 00:21:47.501 [2024-11-26 21:02:38.191676] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.501 [2024-11-26 21:02:38.195694] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.501 [2024-11-26 21:02:38.195708] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61c690) 00:21:47.501 [2024-11-26 21:02:38.195719] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.501 [2024-11-26 21:02:38.195742] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x67e580, cid 3, qid 0 00:21:47.501 [2024-11-26 21:02:38.195892] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.501 [2024-11-26 21:02:38.195907] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.501 [2024-11-26 21:02:38.195914] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:21:47.501 [2024-11-26 21:02:38.195921] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x67e580) on tqpair=0x61c690 00:21:47.501 [2024-11-26 21:02:38.195934] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 6 milliseconds 00:21:47.501 0% 00:21:47.501 Data Units Read: 0 00:21:47.501 Data Units Written: 0 00:21:47.501 Host Read Commands: 0 00:21:47.501 Host Write Commands: 0 00:21:47.501 Controller Busy Time: 0 minutes 00:21:47.501 Power Cycles: 0 00:21:47.501 Power On Hours: 0 hours 00:21:47.501 Unsafe Shutdowns: 0 00:21:47.501 Unrecoverable Media Errors: 0 00:21:47.501 Lifetime Error Log Entries: 0 00:21:47.501 Warning Temperature Time: 0 minutes 00:21:47.501 Critical Temperature Time: 0 minutes 00:21:47.501 00:21:47.501 Number of Queues 00:21:47.501 ================ 00:21:47.501 Number of I/O Submission Queues: 127 00:21:47.501 Number of I/O Completion Queues: 127 00:21:47.501 00:21:47.501 Active Namespaces 00:21:47.501 ================= 00:21:47.501 Namespace ID:1 00:21:47.501 Error Recovery Timeout: Unlimited 00:21:47.501 Command Set Identifier: NVM (00h) 00:21:47.501 Deallocate: Supported 00:21:47.501 Deallocated/Unwritten Error: Not Supported 00:21:47.501 Deallocated Read Value: Unknown 00:21:47.501 Deallocate in Write Zeroes: Not Supported 00:21:47.501 Deallocated Guard Field: 0xFFFF 00:21:47.501 Flush: Supported 00:21:47.501 Reservation: Supported 00:21:47.501 Namespace Sharing Capabilities: Multiple Controllers 00:21:47.501 Size (in LBAs): 131072 (0GiB) 00:21:47.501 Capacity (in LBAs): 131072 (0GiB) 00:21:47.501 Utilization (in LBAs): 131072 (0GiB) 00:21:47.501 NGUID: ABCDEF0123456789ABCDEF0123456789 00:21:47.501 EUI64: ABCDEF0123456789 00:21:47.501 UUID: 62cd5619-0adf-4075-a29e-b1c929da1771 00:21:47.501 Thin Provisioning: Not Supported 00:21:47.501 Per-NS Atomic Units: Yes 00:21:47.501 Atomic Boundary Size (Normal): 0 00:21:47.501 Atomic Boundary Size (PFail): 0 00:21:47.501 
Atomic Boundary Offset: 0 00:21:47.501 Maximum Single Source Range Length: 65535 00:21:47.501 Maximum Copy Length: 65535 00:21:47.501 Maximum Source Range Count: 1 00:21:47.501 NGUID/EUI64 Never Reused: No 00:21:47.501 Namespace Write Protected: No 00:21:47.501 Number of LBA Formats: 1 00:21:47.501 Current LBA Format: LBA Format #00 00:21:47.501 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:47.501 00:21:47.501 21:02:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:21:47.501 21:02:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:47.501 21:02:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.501 21:02:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:47.501 21:02:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.501 21:02:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:21:47.501 21:02:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:21:47.501 21:02:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:47.501 21:02:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:21:47.501 21:02:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:47.501 21:02:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:21:47.501 21:02:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:47.501 21:02:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:47.501 rmmod nvme_tcp 00:21:47.501 rmmod nvme_fabrics 00:21:47.501 rmmod nvme_keyring 00:21:47.501 21:02:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:47.501 21:02:38 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@128 -- # set -e 00:21:47.501 21:02:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:21:47.501 21:02:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 4033262 ']' 00:21:47.501 21:02:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 4033262 00:21:47.501 21:02:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 4033262 ']' 00:21:47.501 21:02:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 4033262 00:21:47.501 21:02:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:21:47.501 21:02:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:47.501 21:02:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4033262 00:21:47.501 21:02:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:47.501 21:02:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:47.501 21:02:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4033262' 00:21:47.501 killing process with pid 4033262 00:21:47.501 21:02:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 4033262 00:21:47.501 21:02:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 4033262 00:21:47.760 21:02:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:47.760 21:02:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:47.760 21:02:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:47.760 21:02:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:21:47.760 21:02:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 
00:21:47.760 21:02:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:21:47.760 21:02:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:47.760 21:02:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:47.761 21:02:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:47.761 21:02:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:47.761 21:02:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:47.761 21:02:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:49.663 21:02:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:49.663 00:21:49.663 real 0m5.645s 00:21:49.663 user 0m5.285s 00:21:49.663 sys 0m1.886s 00:21:49.663 21:02:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:49.663 21:02:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:49.663 ************************************ 00:21:49.663 END TEST nvmf_identify 00:21:49.663 ************************************ 00:21:49.923 21:02:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:49.923 21:02:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:49.923 21:02:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:49.923 21:02:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:49.923 ************************************ 00:21:49.923 START TEST nvmf_perf 00:21:49.923 ************************************ 00:21:49.923 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:49.923 * Looking for test storage... 00:21:49.923 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:49.923 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:49.923 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:21:49.923 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:49.923 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:49.923 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:49.923 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:49.923 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:49.923 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:21:49.923 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:21:49.923 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:21:49.923 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:21:49.923 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:21:49.923 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:21:49.923 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:21:49.923 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:49.923 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:21:49.923 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:21:49.923 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( 
v = 0 )) 00:21:49.923 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:49.923 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:21:49.923 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:21:49.923 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:49.923 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:21:49.923 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:21:49.923 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:21:49.923 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:21:49.923 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:49.923 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:21:49.923 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:21:49.923 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:49.923 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:49.923 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:21:49.923 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:49.923 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:49.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:49.923 --rc genhtml_branch_coverage=1 00:21:49.923 --rc genhtml_function_coverage=1 00:21:49.923 --rc genhtml_legend=1 00:21:49.923 --rc geninfo_all_blocks=1 00:21:49.923 --rc geninfo_unexecuted_blocks=1 00:21:49.923 00:21:49.923 ' 00:21:49.923 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:49.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:49.923 --rc genhtml_branch_coverage=1 00:21:49.923 --rc genhtml_function_coverage=1 00:21:49.923 --rc genhtml_legend=1 00:21:49.923 --rc geninfo_all_blocks=1 00:21:49.923 --rc geninfo_unexecuted_blocks=1 00:21:49.923 00:21:49.923 ' 00:21:49.923 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:49.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:49.923 --rc genhtml_branch_coverage=1 00:21:49.923 --rc genhtml_function_coverage=1 00:21:49.923 --rc genhtml_legend=1 00:21:49.923 --rc geninfo_all_blocks=1 00:21:49.923 --rc geninfo_unexecuted_blocks=1 00:21:49.923 00:21:49.923 ' 00:21:49.923 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:49.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:49.923 --rc genhtml_branch_coverage=1 00:21:49.923 --rc genhtml_function_coverage=1 00:21:49.923 --rc genhtml_legend=1 00:21:49.923 --rc geninfo_all_blocks=1 00:21:49.923 --rc geninfo_unexecuted_blocks=1 00:21:49.923 00:21:49.923 ' 00:21:49.923 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:49.923 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:21:49.923 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:49.923 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:49.923 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:49.923 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:49.923 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:49.923 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:21:49.923 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:49.923 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:49.923 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:49.923 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:49.923 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:49.923 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:49.923 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:49.923 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:49.923 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:49.923 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:49.923 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:49.923 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:21:49.923 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:49.923 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:49.923 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:49.923 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.923 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.923 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.923 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export 
PATH 00:21:49.923 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.923 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:21:49.923 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:49.923 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:49.923 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:49.923 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:49.923 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:49.923 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:49.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:49.924 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:49.924 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:49.924 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:49.924 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:49.924 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:49.924 21:02:40 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:49.924 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:21:49.924 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:49.924 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:49.924 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:49.924 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:49.924 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:49.924 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:49.924 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:49.924 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:49.924 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:49.924 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:49.924 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:21:49.924 21:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:52.452 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:52.452 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:21:52.452 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:52.453 21:02:42 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:52.453 
21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:52.453 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:52.453 Found 0000:0a:00.1 (0x8086 - 
0x159b) 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:52.453 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:52.453 21:02:42 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:52.453 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT' 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:52.453 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:52.453 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:21:52.453 00:21:52.453 --- 10.0.0.2 ping statistics --- 00:21:52.453 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:52.453 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:52.453 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:52.453 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:21:52.453 00:21:52.453 --- 10.0.0.1 ping statistics --- 00:21:52.453 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:52.453 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=4035352 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 4035352 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 4035352 ']' 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:52.453 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:52.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:52.454 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:52.454 21:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:52.454 [2024-11-26 21:02:43.002645] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:21:52.454 [2024-11-26 21:02:43.002730] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:52.454 [2024-11-26 21:02:43.082011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:52.454 [2024-11-26 21:02:43.145130] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:52.454 [2024-11-26 21:02:43.145192] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:52.454 [2024-11-26 21:02:43.145208] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:52.454 [2024-11-26 21:02:43.145222] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:52.454 [2024-11-26 21:02:43.145233] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:52.454 [2024-11-26 21:02:43.146929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:52.454 [2024-11-26 21:02:43.146997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:52.454 [2024-11-26 21:02:43.147099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:52.454 [2024-11-26 21:02:43.147102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:52.454 21:02:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:52.454 21:02:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:21:52.454 21:02:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:52.454 21:02:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:52.454 21:02:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:52.454 21:02:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:52.454 21:02:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:21:52.454 21:02:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:21:55.734 21:02:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:21:55.734 21:02:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:21:55.991 21:02:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:21:55.991 21:02:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:56.250 21:02:46 
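nvmf_tgt is launched with `-m 0xF`, and the reactor notices above confirm four reactors starting on cores 0 through 3. A small sketch (illustrative helper, not SPDK code) decoding an SPDK-style hexadecimal core mask into the core ids it selects:

```python
def cores_from_mask(mask: int) -> list[int]:
    """Expand an SPDK-style core mask into the list of core ids it selects."""
    return [bit for bit in range(mask.bit_length()) if mask >> bit & 1]

# -m 0xF in the log selects four cores, matching the
# "Reactor started on core 0..3" notices above.
print(cores_from_mask(0xF))  # [0, 1, 2, 3]
```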
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:21:56.250 21:02:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:21:56.250 21:02:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:21:56.250 21:02:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:21:56.250 21:02:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:56.508 [2024-11-26 21:02:47.239802] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:56.508 21:02:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:56.765 21:02:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:56.765 21:02:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:57.024 21:02:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:57.024 21:02:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:21:57.282 21:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:57.539 [2024-11-26 21:02:48.331811] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:57.539 21:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:21:57.797 21:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:21:57.797 21:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:21:57.797 21:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:21:57.797 21:02:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:21:59.170 Initializing NVMe Controllers 00:21:59.170 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:21:59.170 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:21:59.170 Initialization complete. Launching workers. 00:21:59.170 ======================================================== 00:21:59.170 Latency(us) 00:21:59.170 Device Information : IOPS MiB/s Average min max 00:21:59.170 PCIE (0000:88:00.0) NSID 1 from core 0: 85768.59 335.03 372.52 40.01 5376.84 00:21:59.170 ======================================================== 00:21:59.170 Total : 85768.59 335.03 372.52 40.01 5376.84 00:21:59.170 00:21:59.170 21:02:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:00.542 Initializing NVMe Controllers 00:22:00.542 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:00.542 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:00.542 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:00.542 Initialization complete. Launching workers. 
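The spdk_nvme_perf tables report both IOPS and MiB/s; with the fixed 4 KiB IO size used here (`-o 4096`) the two columns differ only by a constant factor. A quick sketch reproducing the conversion with the numbers from the local PCIe run above:

```python
def mib_per_s(iops: float, io_size_bytes: int = 4096) -> float:
    """Convert an IOPS figure into MiB/s for a fixed IO size."""
    return iops * io_size_bytes / (1024 * 1024)

# From the PCIe (0000:88:00.0) run above: 85768.59 IOPS at 4 KiB
# corresponds to the reported 335.03 MiB/s.
print(round(mib_per_s(85768.59), 2))  # 335.03
```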
00:22:00.542 ======================================================== 00:22:00.542 Latency(us) 00:22:00.542 Device Information : IOPS MiB/s Average min max 00:22:00.542 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 72.74 0.28 14212.12 159.76 45904.01 00:22:00.542 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 40.86 0.16 25448.79 7965.89 47898.26 00:22:00.542 ======================================================== 00:22:00.542 Total : 113.60 0.44 18253.38 159.76 47898.26 00:22:00.542 00:22:00.542 21:02:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:02.442 Initializing NVMe Controllers 00:22:02.442 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:02.442 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:02.442 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:02.442 Initialization complete. Launching workers. 
00:22:02.442 ======================================================== 00:22:02.442 Latency(us) 00:22:02.442 Device Information : IOPS MiB/s Average min max 00:22:02.442 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8388.97 32.77 3814.96 564.93 7507.70 00:22:02.442 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3848.98 15.04 8359.22 6803.79 16339.00 00:22:02.442 ======================================================== 00:22:02.442 Total : 12237.95 47.80 5244.18 564.93 16339.00 00:22:02.442 00:22:02.442 21:02:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:22:02.442 21:02:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:22:02.442 21:02:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:04.972 Initializing NVMe Controllers 00:22:04.972 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:04.972 Controller IO queue size 128, less than required. 00:22:04.972 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:04.972 Controller IO queue size 128, less than required. 00:22:04.972 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:04.972 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:04.972 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:04.972 Initialization complete. Launching workers. 
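The Total row in each latency table is consistent with summing IOPS across namespaces and weighting the average latency by each namespace's IOPS. A sketch verifying that against the per-namespace figures of the `-HI` run above:

```python
def total_row(rows: list[tuple[float, float]]) -> tuple[float, float]:
    """Combine per-namespace (iops, avg_latency_us) pairs into a summary row:
    IOPS add up, average latency is IOPS-weighted."""
    total_iops = sum(iops for iops, _ in rows)
    avg = sum(iops * lat for iops, lat in rows) / total_iops
    return total_iops, avg

# NSID 1 and NSID 2 results from the table above.
iops, avg = total_row([(8388.97, 3814.96), (3848.98, 8359.22)])
print(round(iops, 2), round(avg, 2))  # 12237.95 5244.18
```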
00:22:04.972 ======================================================== 00:22:04.972 Latency(us) 00:22:04.972 Device Information : IOPS MiB/s Average min max 00:22:04.972 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1419.01 354.75 91604.11 69821.10 155428.01 00:22:04.972 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 596.45 149.11 224325.26 107014.77 326931.24 00:22:04.972 ======================================================== 00:22:04.972 Total : 2015.46 503.86 130881.42 69821.10 326931.24 00:22:04.972 00:22:04.972 21:02:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:22:04.972 No valid NVMe controllers or AIO or URING devices found 00:22:04.972 Initializing NVMe Controllers 00:22:04.972 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:04.972 Controller IO queue size 128, less than required. 00:22:04.972 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:04.972 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:22:04.972 Controller IO queue size 128, less than required. 00:22:04.972 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:04.972 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
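The `-o 36964` run above emits "No valid NVMe controllers" because spdk_nvme_perf drops any namespace whose sector size does not evenly divide the requested IO size, and both namespaces here use 512-byte sectors. A sketch of that divisibility check (illustrative helper mirroring the warning, not the perf tool's actual code):

```python
def io_size_ok(io_size: int, sector_size: int) -> bool:
    """A namespace is skipped when the requested IO size is not
    a whole number of sectors, per the warnings above."""
    return io_size % sector_size == 0

# -o 36964 against the 512-byte sectors reported in the log:
print(io_size_ok(36964, 512))  # False
print(io_size_ok(4096, 512))   # True
```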
Removing this ns from test 00:22:04.972 WARNING: Some requested NVMe devices were skipped 00:22:04.972 21:02:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:22:08.254 Initializing NVMe Controllers 00:22:08.254 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:08.254 Controller IO queue size 128, less than required. 00:22:08.254 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:08.254 Controller IO queue size 128, less than required. 00:22:08.254 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:08.254 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:08.254 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:08.254 Initialization complete. Launching workers. 
00:22:08.254 00:22:08.254 ==================== 00:22:08.254 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:22:08.254 TCP transport: 00:22:08.254 polls: 16148 00:22:08.254 idle_polls: 10812 00:22:08.254 sock_completions: 5336 00:22:08.254 nvme_completions: 6089 00:22:08.254 submitted_requests: 9178 00:22:08.254 queued_requests: 1 00:22:08.254 00:22:08.254 ==================== 00:22:08.254 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:22:08.254 TCP transport: 00:22:08.254 polls: 14460 00:22:08.254 idle_polls: 6435 00:22:08.254 sock_completions: 8025 00:22:08.254 nvme_completions: 4931 00:22:08.254 submitted_requests: 7388 00:22:08.254 queued_requests: 1 00:22:08.254 ======================================================== 00:22:08.254 Latency(us) 00:22:08.254 Device Information : IOPS MiB/s Average min max 00:22:08.254 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1521.97 380.49 86073.48 56144.69 157576.53 00:22:08.254 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1232.48 308.12 106276.71 54715.62 146693.63 00:22:08.254 ======================================================== 00:22:08.254 Total : 2754.45 688.61 95113.41 54715.62 157576.53 00:22:08.254 00:22:08.254 21:02:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:22:08.254 21:02:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:08.254 21:02:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:22:08.254 21:02:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:22:08.254 21:02:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:22:08.254 21:02:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:08.254 21:02:58 nvmf_tcp.nvmf_host.nvmf_perf 
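The `--transport-stat` output above reports, per queue, how many transport polls ran (`polls`) versus how many found nothing to do (`idle_polls`). The fraction of non-idle polls is a quick efficiency figure for the poll loop; a sketch computing it from the NSID 1 statistics in the log (a derived metric, not something the tool prints):

```python
def busy_ratio(polls: int, idle_polls: int) -> float:
    """Fraction of transport polls that found socket work to do."""
    return (polls - idle_polls) / polls

# NSID 1 queue from the log above: 16148 polls, 10812 idle,
# i.e. 5336 productive polls (matching sock_completions).
print(round(busy_ratio(16148, 10812), 2))  # 0.33
```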
-- nvmf/common.sh@121 -- # sync 00:22:08.254 21:02:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:08.254 21:02:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:22:08.254 21:02:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:08.254 21:02:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:08.254 rmmod nvme_tcp 00:22:08.254 rmmod nvme_fabrics 00:22:08.254 rmmod nvme_keyring 00:22:08.254 21:02:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:08.254 21:02:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:22:08.254 21:02:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:22:08.254 21:02:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 4035352 ']' 00:22:08.254 21:02:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 4035352 00:22:08.254 21:02:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 4035352 ']' 00:22:08.254 21:02:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 4035352 00:22:08.255 21:02:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:22:08.255 21:02:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:08.255 21:02:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4035352 00:22:08.255 21:02:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:08.255 21:02:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:08.255 21:02:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4035352' 00:22:08.255 killing process with pid 4035352 00:22:08.255 21:02:58 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@973 -- # kill 4035352 00:22:08.255 21:02:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 4035352 00:22:09.629 21:03:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:09.629 21:03:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:09.629 21:03:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:09.629 21:03:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:22:09.629 21:03:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:22:09.629 21:03:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:09.629 21:03:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:22:09.629 21:03:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:09.629 21:03:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:09.629 21:03:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:09.629 21:03:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:09.629 21:03:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:12.165 21:03:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:12.165 00:22:12.165 real 0m21.921s 00:22:12.165 user 1m7.893s 00:22:12.165 sys 0m5.708s 00:22:12.165 21:03:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:12.165 21:03:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:12.165 ************************************ 00:22:12.165 END TEST nvmf_perf 00:22:12.165 ************************************ 00:22:12.165 21:03:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:12.165 21:03:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:12.165 21:03:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:12.165 21:03:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.165 ************************************ 00:22:12.165 START TEST nvmf_fio_host 00:22:12.165 ************************************ 00:22:12.165 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:12.165 * Looking for test storage... 00:22:12.165 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:12.165 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:12.165 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:22:12.165 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:12.165 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:12.165 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:12.165 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:12.165 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:12.165 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:22:12.165 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:22:12.165 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:22:12.165 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:22:12.165 21:03:02 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:22:12.165 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:22:12.165 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:22:12.165 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:12.165 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:22:12.165 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:22:12.165 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:12.165 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:12.165 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:22:12.165 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:22:12.165 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:12.165 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:22:12.165 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:22:12.165 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:22:12.165 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:22:12.165 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:12.165 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:22:12.165 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:22:12.165 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:12.165 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:12.165 21:03:02 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:22:12.165 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:12.165 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:12.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:12.165 --rc genhtml_branch_coverage=1 00:22:12.165 --rc genhtml_function_coverage=1 00:22:12.165 --rc genhtml_legend=1 00:22:12.165 --rc geninfo_all_blocks=1 00:22:12.165 --rc geninfo_unexecuted_blocks=1 00:22:12.165 00:22:12.165 ' 00:22:12.165 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:12.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:12.165 --rc genhtml_branch_coverage=1 00:22:12.165 --rc genhtml_function_coverage=1 00:22:12.165 --rc genhtml_legend=1 00:22:12.165 --rc geninfo_all_blocks=1 00:22:12.165 --rc geninfo_unexecuted_blocks=1 00:22:12.165 00:22:12.165 ' 00:22:12.165 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:12.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:12.166 --rc genhtml_branch_coverage=1 00:22:12.166 --rc genhtml_function_coverage=1 00:22:12.166 --rc genhtml_legend=1 00:22:12.166 --rc geninfo_all_blocks=1 00:22:12.166 --rc geninfo_unexecuted_blocks=1 00:22:12.166 00:22:12.166 ' 00:22:12.166 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:12.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:12.166 --rc genhtml_branch_coverage=1 00:22:12.166 --rc genhtml_function_coverage=1 00:22:12.166 --rc genhtml_legend=1 00:22:12.166 --rc geninfo_all_blocks=1 00:22:12.166 --rc geninfo_unexecuted_blocks=1 00:22:12.166 00:22:12.166 ' 00:22:12.166 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:12.166 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:12.166 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:12.166 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:12.166 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:12.166 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.166 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.166 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.166 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:12.166 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.166 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:12.166 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:22:12.166 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:12.166 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:12.166 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:12.166 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:12.166 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:12.166 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:12.166 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:12.166 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:12.166 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:12.166 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:12.166 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:12.166 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:12.166 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:12.166 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:12.166 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:12.166 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:12.166 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:12.166 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:12.166 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:12.166 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:12.166 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:12.166 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.166 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.166 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.166 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:12.166 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.166 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:22:12.166 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:12.166 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:12.166 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:12.166 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:12.166 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:12.166 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:12.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:12.166 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:12.166 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:12.166 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:12.166 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:12.166 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:22:12.166 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:12.166 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:12.166 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:12.166 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:12.166 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:12.166 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:12.166 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:12.166 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:12.166 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:12.166 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:12.166 21:03:02 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:22:12.166 21:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:14.073 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:0a:00.0 (0x8086 - 0x159b)' 00:22:14.074 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:14.074 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:14.074 21:03:04 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:14.074 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:14.074 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:14.074 21:03:04 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:14.074 21:03:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:14.333 21:03:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:14.333 21:03:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:14.333 21:03:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:14.333 21:03:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:14.333 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:14.333 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:22:14.333 00:22:14.333 --- 10.0.0.2 ping statistics --- 00:22:14.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:14.333 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:22:14.333 21:03:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:14.333 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:14.333 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:22:14.333 00:22:14.333 --- 10.0.0.1 ping statistics --- 00:22:14.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:14.333 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:22:14.333 21:03:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:14.333 21:03:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:22:14.333 21:03:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:14.333 21:03:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:14.333 21:03:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:14.333 21:03:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:14.333 21:03:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:14.333 21:03:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:14.333 21:03:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:14.333 21:03:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:22:14.333 21:03:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:22:14.333 21:03:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:14.333 21:03:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:14.333 21:03:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=4039448 00:22:14.333 21:03:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:14.333 21:03:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:14.333 21:03:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 4039448 00:22:14.333 21:03:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 4039448 ']' 00:22:14.333 21:03:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:14.333 21:03:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:14.333 21:03:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:14.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:14.333 21:03:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:14.333 21:03:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:14.333 [2024-11-26 21:03:05.131456] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:22:14.333 [2024-11-26 21:03:05.131525] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:14.334 [2024-11-26 21:03:05.203379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:14.334 [2024-11-26 21:03:05.263027] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:14.334 [2024-11-26 21:03:05.263095] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:14.334 [2024-11-26 21:03:05.263125] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:14.334 [2024-11-26 21:03:05.263146] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:14.334 [2024-11-26 21:03:05.263157] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:14.334 [2024-11-26 21:03:05.264881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:14.334 [2024-11-26 21:03:05.264911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:14.334 [2024-11-26 21:03:05.264962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:14.334 [2024-11-26 21:03:05.264965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:14.592 21:03:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:14.592 21:03:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:22:14.592 21:03:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:14.849 [2024-11-26 21:03:05.694763] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:14.849 21:03:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:22:14.849 21:03:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:14.849 21:03:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:14.850 21:03:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:22:15.415 Malloc1 00:22:15.415 21:03:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:15.674 21:03:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:15.932 21:03:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:16.189 [2024-11-26 21:03:06.910180] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:16.189 21:03:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:16.448 21:03:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:22:16.448 21:03:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:16.448 21:03:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:16.448 21:03:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:16.448 21:03:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:16.448 21:03:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:16.448 21:03:07 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:16.448 21:03:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:22:16.448 21:03:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:16.448 21:03:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:16.448 21:03:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:16.448 21:03:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:22:16.448 21:03:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:16.448 21:03:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:16.448 21:03:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:16.448 21:03:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:16.448 21:03:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:16.448 21:03:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:22:16.448 21:03:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:16.448 21:03:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:16.448 21:03:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:16.448 21:03:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:16.448 21:03:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:16.705 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:16.705 fio-3.35 00:22:16.705 Starting 1 thread 00:22:19.234 00:22:19.234 test: (groupid=0, jobs=1): err= 0: pid=4040086: Tue Nov 26 21:03:09 2024 00:22:19.234 read: IOPS=8596, BW=33.6MiB/s (35.2MB/s)(67.4MiB/2006msec) 00:22:19.234 slat (nsec): min=1968, max=152422, avg=2579.57, stdev=2165.32 00:22:19.234 clat (usec): min=2555, max=14565, avg=8191.58, stdev=661.77 00:22:19.234 lat (usec): min=2583, max=14568, avg=8194.16, stdev=661.65 00:22:19.234 clat percentiles (usec): 00:22:19.234 | 1.00th=[ 6718], 5.00th=[ 7177], 10.00th=[ 7373], 20.00th=[ 7701], 00:22:19.234 | 30.00th=[ 7898], 40.00th=[ 8029], 50.00th=[ 8225], 60.00th=[ 8356], 00:22:19.234 | 70.00th=[ 8455], 80.00th=[ 8717], 90.00th=[ 8979], 95.00th=[ 9110], 00:22:19.234 | 99.00th=[ 9634], 99.50th=[ 9896], 99.90th=[12780], 99.95th=[13566], 00:22:19.234 | 99.99th=[14484] 00:22:19.234 bw ( KiB/s): min=33552, max=34864, per=99.90%, avg=34352.00, stdev=568.24, samples=4 00:22:19.234 iops : min= 8388, max= 8716, avg=8588.00, stdev=142.06, samples=4 00:22:19.234 write: IOPS=8595, BW=33.6MiB/s (35.2MB/s)(67.4MiB/2006msec); 0 zone resets 00:22:19.234 slat (usec): min=2, max=183, avg= 2.69, stdev= 1.86 00:22:19.234 clat (usec): min=1442, max=12882, avg=6650.80, stdev=549.44 00:22:19.234 lat (usec): min=1450, max=12884, avg=6653.49, stdev=549.40 00:22:19.234 clat percentiles (usec): 00:22:19.234 | 1.00th=[ 5407], 5.00th=[ 5800], 10.00th=[ 5997], 20.00th=[ 6259], 00:22:19.234 | 30.00th=[ 6390], 40.00th=[ 6521], 50.00th=[ 6652], 60.00th=[ 6783], 00:22:19.234 | 
70.00th=[ 6915], 80.00th=[ 7046], 90.00th=[ 7308], 95.00th=[ 7439], 00:22:19.234 | 99.00th=[ 7832], 99.50th=[ 8029], 99.90th=[10159], 99.95th=[11731], 00:22:19.234 | 99.99th=[12911] 00:22:19.234 bw ( KiB/s): min=34176, max=34512, per=99.96%, avg=34366.00, stdev=165.04, samples=4 00:22:19.234 iops : min= 8544, max= 8628, avg=8591.50, stdev=41.26, samples=4 00:22:19.234 lat (msec) : 2=0.01%, 4=0.11%, 10=99.65%, 20=0.23% 00:22:19.234 cpu : usr=57.06%, sys=39.05%, ctx=78, majf=0, minf=31 00:22:19.234 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:22:19.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:19.234 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:19.234 issued rwts: total=17244,17242,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:19.234 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:19.234 00:22:19.234 Run status group 0 (all jobs): 00:22:19.234 READ: bw=33.6MiB/s (35.2MB/s), 33.6MiB/s-33.6MiB/s (35.2MB/s-35.2MB/s), io=67.4MiB (70.6MB), run=2006-2006msec 00:22:19.234 WRITE: bw=33.6MiB/s (35.2MB/s), 33.6MiB/s-33.6MiB/s (35.2MB/s-35.2MB/s), io=67.4MiB (70.6MB), run=2006-2006msec 00:22:19.234 21:03:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:19.234 21:03:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:19.234 21:03:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:19.234 21:03:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:22:19.234 21:03:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:19.234 21:03:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:19.234 21:03:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:22:19.234 21:03:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:19.234 21:03:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:19.234 21:03:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:19.234 21:03:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:22:19.234 21:03:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:19.234 21:03:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:19.234 21:03:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:19.234 21:03:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:19.234 21:03:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:19.234 21:03:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:22:19.234 21:03:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:19.234 21:03:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:19.234 21:03:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' 
]] 00:22:19.234 21:03:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:19.234 21:03:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:19.234 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:22:19.234 fio-3.35 00:22:19.234 Starting 1 thread 00:22:21.760 00:22:21.760 test: (groupid=0, jobs=1): err= 0: pid=4040781: Tue Nov 26 21:03:12 2024 00:22:21.760 read: IOPS=8199, BW=128MiB/s (134MB/s)(257MiB/2007msec) 00:22:21.760 slat (usec): min=2, max=121, avg= 3.59, stdev= 1.82 00:22:21.760 clat (usec): min=2236, max=17444, avg=9136.44, stdev=2202.16 00:22:21.760 lat (usec): min=2241, max=17447, avg=9140.03, stdev=2202.20 00:22:21.760 clat percentiles (usec): 00:22:21.760 | 1.00th=[ 4817], 5.00th=[ 5735], 10.00th=[ 6325], 20.00th=[ 7177], 00:22:21.760 | 30.00th=[ 7832], 40.00th=[ 8455], 50.00th=[ 9110], 60.00th=[ 9634], 00:22:21.760 | 70.00th=[10290], 80.00th=[10814], 90.00th=[11994], 95.00th=[13042], 00:22:21.760 | 99.00th=[14877], 99.50th=[15270], 99.90th=[15926], 99.95th=[15926], 00:22:21.760 | 99.99th=[17171] 00:22:21.760 bw ( KiB/s): min=62624, max=69056, per=50.43%, avg=66168.00, stdev=3178.53, samples=4 00:22:21.760 iops : min= 3914, max= 4316, avg=4135.50, stdev=198.66, samples=4 00:22:21.760 write: IOPS=4602, BW=71.9MiB/s (75.4MB/s)(135MiB/1878msec); 0 zone resets 00:22:21.760 slat (usec): min=30, max=171, avg=33.12, stdev= 5.14 00:22:21.760 clat (usec): min=6842, max=20861, avg=11603.49, stdev=2084.19 00:22:21.760 lat (usec): min=6875, max=20895, avg=11636.61, stdev=2084.08 00:22:21.760 clat percentiles (usec): 00:22:21.760 | 1.00th=[ 7504], 5.00th=[ 8717], 10.00th=[ 9241], 20.00th=[ 
9765], 00:22:21.760 | 30.00th=[10421], 40.00th=[10814], 50.00th=[11338], 60.00th=[11994], 00:22:21.760 | 70.00th=[12649], 80.00th=[13304], 90.00th=[14484], 95.00th=[15270], 00:22:21.760 | 99.00th=[17171], 99.50th=[17957], 99.90th=[20317], 99.95th=[20579], 00:22:21.760 | 99.99th=[20841] 00:22:21.760 bw ( KiB/s): min=64064, max=72704, per=93.30%, avg=68712.00, stdev=4292.56, samples=4 00:22:21.760 iops : min= 4004, max= 4544, avg=4294.50, stdev=268.29, samples=4 00:22:21.760 lat (msec) : 4=0.19%, 10=51.10%, 20=48.67%, 50=0.05% 00:22:21.760 cpu : usr=73.98%, sys=23.43%, ctx=39, majf=0, minf=45 00:22:21.760 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:22:21.760 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.760 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:21.760 issued rwts: total=16457,8644,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.760 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.760 00:22:21.760 Run status group 0 (all jobs): 00:22:21.760 READ: bw=128MiB/s (134MB/s), 128MiB/s-128MiB/s (134MB/s-134MB/s), io=257MiB (270MB), run=2007-2007msec 00:22:21.760 WRITE: bw=71.9MiB/s (75.4MB/s), 71.9MiB/s-71.9MiB/s (75.4MB/s-75.4MB/s), io=135MiB (142MB), run=1878-1878msec 00:22:21.760 21:03:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:21.760 21:03:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:22:21.760 21:03:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:22:21.760 21:03:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:22:21.760 21:03:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:22:21.760 21:03:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 
00:22:21.760 21:03:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:22:21.760 21:03:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:21.760 21:03:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:22:21.760 21:03:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:21.760 21:03:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:22.018 rmmod nvme_tcp 00:22:22.018 rmmod nvme_fabrics 00:22:22.018 rmmod nvme_keyring 00:22:22.018 21:03:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:22.018 21:03:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:22:22.018 21:03:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:22:22.018 21:03:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 4039448 ']' 00:22:22.018 21:03:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 4039448 00:22:22.018 21:03:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 4039448 ']' 00:22:22.018 21:03:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 4039448 00:22:22.018 21:03:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:22:22.018 21:03:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:22.018 21:03:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4039448 00:22:22.018 21:03:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:22.018 21:03:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:22.018 21:03:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4039448' 
00:22:22.018 killing process with pid 4039448 00:22:22.018 21:03:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 4039448 00:22:22.018 21:03:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 4039448 00:22:22.275 21:03:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:22.275 21:03:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:22.275 21:03:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:22.275 21:03:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:22:22.275 21:03:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:22:22.275 21:03:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:22.275 21:03:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:22:22.275 21:03:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:22.275 21:03:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:22.275 21:03:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:22.275 21:03:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:22.275 21:03:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:24.253 21:03:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:24.253 00:22:24.253 real 0m12.494s 00:22:24.253 user 0m36.217s 00:22:24.253 sys 0m4.459s 00:22:24.253 21:03:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:24.253 21:03:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.253 ************************************ 
00:22:24.253 END TEST nvmf_fio_host 00:22:24.253 ************************************ 00:22:24.253 21:03:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:24.253 21:03:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:24.253 21:03:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:24.253 21:03:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.253 ************************************ 00:22:24.253 START TEST nvmf_failover 00:22:24.253 ************************************ 00:22:24.253 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:24.513 * Looking for test storage... 00:22:24.513 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:24.513 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:24.513 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:22:24.514 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:24.514 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:24.514 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:24.514 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:24.514 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:24.514 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:22:24.514 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:22:24.514 21:03:15 
nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:22:24.514 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:22:24.514 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:22:24.514 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:22:24.514 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:22:24.514 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:24.514 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:22:24.514 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:22:24.514 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:24.514 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:24.514 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:22:24.514 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:22:24.514 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:24.514 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:22:24.514 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:22:24.514 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:22:24.514 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:22:24.514 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:24.514 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:22:24.514 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:22:24.514 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover 
-- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:24.514 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:24.514 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:22:24.514 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:24.514 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:24.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:24.514 --rc genhtml_branch_coverage=1 00:22:24.514 --rc genhtml_function_coverage=1 00:22:24.514 --rc genhtml_legend=1 00:22:24.514 --rc geninfo_all_blocks=1 00:22:24.514 --rc geninfo_unexecuted_blocks=1 00:22:24.514 00:22:24.514 ' 00:22:24.514 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:24.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:24.514 --rc genhtml_branch_coverage=1 00:22:24.514 --rc genhtml_function_coverage=1 00:22:24.514 --rc genhtml_legend=1 00:22:24.514 --rc geninfo_all_blocks=1 00:22:24.514 --rc geninfo_unexecuted_blocks=1 00:22:24.514 00:22:24.514 ' 00:22:24.514 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:24.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:24.514 --rc genhtml_branch_coverage=1 00:22:24.514 --rc genhtml_function_coverage=1 00:22:24.514 --rc genhtml_legend=1 00:22:24.514 --rc geninfo_all_blocks=1 00:22:24.514 --rc geninfo_unexecuted_blocks=1 00:22:24.514 00:22:24.514 ' 00:22:24.514 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:24.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:24.514 --rc genhtml_branch_coverage=1 00:22:24.514 --rc genhtml_function_coverage=1 00:22:24.514 --rc genhtml_legend=1 00:22:24.514 --rc 
geninfo_all_blocks=1 00:22:24.514 --rc geninfo_unexecuted_blocks=1 00:22:24.514 00:22:24.514 ' 00:22:24.514 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:24.514 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:22:24.514 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:24.514 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:24.514 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:24.514 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:24.514 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:24.514 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:24.514 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:24.514 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:24.514 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:24.514 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:24.514 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:24.514 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:24.514 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:24.514 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:24.514 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:24.514 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:24.514 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:24.514 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:22:24.514 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:24.514 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:24.514 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:24.514 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.514 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.514 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.514 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:22:24.515 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.515 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:22:24.515 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:24.515 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:24.515 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:24.515 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:24.515 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:24.515 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:24.515 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:24.515 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:24.515 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:24.515 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:24.515 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:24.515 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:24.515 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover 
-- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:24.515 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:24.515 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:22:24.515 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:24.515 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:24.515 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:24.515 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:24.515 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:24.515 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:24.515 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:24.515 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:24.515 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:24.515 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:24.515 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:22:24.515 21:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:26.418 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:26.418 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:22:26.418 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:26.418 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 
-- # pci_net_devs=() 00:22:26.418 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:26.418 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:26.418 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:26.418 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:22:26.418 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:26.418 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:22:26.418 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:22:26.418 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:22:26.418 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:22:26.418 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:22:26.418 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:22:26.418 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:26.418 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:26.418 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:26.418 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:26.418 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:26.418 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:26.418 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:26.418 21:03:17 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:26.418 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:26.418 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:26.418 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:26.418 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:26.418 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:26.418 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:26.418 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:26.418 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:26.418 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:26.418 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:26.418 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:26.418 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:26.418 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:26.418 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:26.418 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:26.418 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:26.418 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:26.418 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:26.418 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:26.419 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:26.419 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:26.419 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:26.419 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:26.419 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:26.419 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:26.419 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:26.419 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:26.419 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:26.419 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:26.419 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:26.419 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:26.419 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:26.419 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:26.419 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:26.419 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:26.419 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:26.419 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:26.419 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:26.419 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:26.419 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:26.419 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:26.419 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:26.419 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:26.419 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:26.419 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:26.419 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:26.419 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:26.419 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:26.419 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:26.419 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:26.419 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:22:26.419 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:26.419 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:26.419 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:26.419 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:26.419 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:26.419 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:26.419 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:26.419 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:26.419 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:26.419 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:26.419 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:26.419 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:26.419 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:26.419 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:26.419 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:26.419 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:26.419 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:26.419 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:26.419 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:26.419 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:26.419 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:26.419 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:26.677 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:26.677 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:26.677 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:26.677 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:26.677 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:26.677 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:22:26.677 00:22:26.677 --- 10.0.0.2 ping statistics --- 00:22:26.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:26.677 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:22:26.677 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:26.677 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:26.677 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:22:26.677 00:22:26.677 --- 10.0.0.1 ping statistics --- 00:22:26.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:26.677 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:22:26.677 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:26.677 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:22:26.677 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:26.677 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:26.677 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:26.677 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:26.677 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:26.677 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:26.677 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:26.677 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:22:26.677 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:26.677 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:26.677 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:26.677 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=4042987 00:22:26.677 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:26.677 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@510 -- # waitforlisten 4042987 00:22:26.677 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 4042987 ']' 00:22:26.677 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:26.677 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:26.677 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:26.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:26.677 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:26.677 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:26.677 [2024-11-26 21:03:17.480009] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:22:26.677 [2024-11-26 21:03:17.480115] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:26.677 [2024-11-26 21:03:17.561422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:26.936 [2024-11-26 21:03:17.623769] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:26.936 [2024-11-26 21:03:17.623821] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:26.936 [2024-11-26 21:03:17.623835] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:26.936 [2024-11-26 21:03:17.623846] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:22:26.936 [2024-11-26 21:03:17.623857] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:26.936 [2024-11-26 21:03:17.625399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:26.936 [2024-11-26 21:03:17.625511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:26.936 [2024-11-26 21:03:17.625514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:26.936 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:26.936 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:22:26.936 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:26.936 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:26.936 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:26.936 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:26.936 21:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:27.194 [2024-11-26 21:03:18.031358] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:27.194 21:03:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:27.453 Malloc0 00:22:27.453 21:03:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:27.711 21:03:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:27.969 21:03:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:28.227 [2024-11-26 21:03:19.146297] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:28.487 21:03:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:28.487 [2024-11-26 21:03:19.410942] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:28.744 21:03:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:28.744 [2024-11-26 21:03:19.671727] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:29.002 21:03:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=4043276 00:22:29.002 21:03:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:22:29.002 21:03:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:29.002 21:03:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 4043276 /var/tmp/bdevperf.sock 00:22:29.002 21:03:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 
-- # '[' -z 4043276 ']' 00:22:29.002 21:03:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:29.002 21:03:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:29.002 21:03:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:29.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:29.002 21:03:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:29.002 21:03:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:29.261 21:03:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:29.261 21:03:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:22:29.261 21:03:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:29.520 NVMe0n1 00:22:29.520 21:03:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:30.096 00:22:30.096 21:03:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=4043411 00:22:30.096 21:03:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:30.096 21:03:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
00:22:31.026 21:03:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:31.320 [2024-11-26 21:03:22.056967] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235460 is same with the state(6) to be set 00:22:31.320 [2024-11-26 21:03:22.057048] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235460 is same with the state(6) to be set 00:22:31.320 [2024-11-26 21:03:22.057065] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235460 is same with the state(6) to be set 00:22:31.320 [2024-11-26 21:03:22.057077] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235460 is same with the state(6) to be set 00:22:31.320 [2024-11-26 21:03:22.057089] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235460 is same with the state(6) to be set 00:22:31.320 [2024-11-26 21:03:22.057101] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235460 is same with the state(6) to be set 00:22:31.320 [2024-11-26 21:03:22.057113] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235460 is same with the state(6) to be set 00:22:31.320 [2024-11-26 21:03:22.057124] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235460 is same with the state(6) to be set 00:22:31.320 [2024-11-26 21:03:22.057136] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235460 is same with the state(6) to be set 00:22:31.320 [2024-11-26 21:03:22.057147] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235460 is same with the state(6) to be set 00:22:31.320 [2024-11-26 21:03:22.057158] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1235460 is same with the state(6) to be set 00:22:31.320 [2024-11-26 21:03:22.057170] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235460 is same with the state(6) to be set 00:22:31.320 [2024-11-26 21:03:22.057182] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235460 is same with the state(6) to be set 00:22:31.320 [2024-11-26 21:03:22.057193] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235460 is same with the state(6) to be set 00:22:31.320 [2024-11-26 21:03:22.057205] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235460 is same with the state(6) to be set 00:22:31.320 [2024-11-26 21:03:22.057216] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235460 is same with the state(6) to be set 00:22:31.320 [2024-11-26 21:03:22.057227] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235460 is same with the state(6) to be set 00:22:31.320 [2024-11-26 21:03:22.057239] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235460 is same with the state(6) to be set 00:22:31.320 [2024-11-26 21:03:22.057250] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235460 is same with the state(6) to be set 00:22:31.320 [2024-11-26 21:03:22.057261] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235460 is same with the state(6) to be set 00:22:31.320 [2024-11-26 21:03:22.057273] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235460 is same with the state(6) to be set 00:22:31.320 [2024-11-26 21:03:22.057284] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235460 is same with the state(6) to be set 00:22:31.320 [2024-11-26 21:03:22.057307] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235460 is same with 
the state(6) to be set 00:22:31.321 [2024-11-26 21:03:22.057320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235460 is same with the state(6) to be set 00:22:31.321 [2024-11-26 21:03:22.057331] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235460 is same with the state(6) to be set 00:22:31.321 [2024-11-26 21:03:22.057343] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235460 is same with the state(6) to be set 00:22:31.321 [2024-11-26 21:03:22.057353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235460 is same with the state(6) to be set 00:22:31.321 [2024-11-26 21:03:22.057365] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235460 is same with the state(6) to be set 00:22:31.321 [2024-11-26 21:03:22.057376] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235460 is same with the state(6) to be set 00:22:31.321 [2024-11-26 21:03:22.057387] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235460 is same with the state(6) to be set 00:22:31.321 [2024-11-26 21:03:22.057398] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235460 is same with the state(6) to be set 00:22:31.321 [2024-11-26 21:03:22.057409] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235460 is same with the state(6) to be set 00:22:31.321 [2024-11-26 21:03:22.057420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235460 is same with the state(6) to be set 00:22:31.321 [2024-11-26 21:03:22.057432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235460 is same with the state(6) to be set 00:22:31.321 [2024-11-26 21:03:22.057443] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235460 is same with the state(6) to be set 
00:22:31.321 [2024-11-26 21:03:22.057454] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235460 is same with the state(6) to be set 00:22:31.321 [2024-11-26 21:03:22.057465] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235460 is same with the state(6) to be set 00:22:31.321 [2024-11-26 21:03:22.057477] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235460 is same with the state(6) to be set 00:22:31.321 [2024-11-26 21:03:22.057488] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235460 is same with the state(6) to be set 00:22:31.321 [2024-11-26 21:03:22.057500] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235460 is same with the state(6) to be set 00:22:31.321 [2024-11-26 21:03:22.057511] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235460 is same with the state(6) to be set 00:22:31.321 [2024-11-26 21:03:22.057522] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235460 is same with the state(6) to be set 00:22:31.321 [2024-11-26 21:03:22.057533] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235460 is same with the state(6) to be set 00:22:31.321 [2024-11-26 21:03:22.057544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235460 is same with the state(6) to be set 00:22:31.321 [2024-11-26 21:03:22.057555] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235460 is same with the state(6) to be set 00:22:31.321 [2024-11-26 21:03:22.057566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235460 is same with the state(6) to be set 00:22:31.321 [2024-11-26 21:03:22.057577] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235460 is same with the state(6) to be set 00:22:31.321 [2024-11-26 
21:03:22.057588] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235460 is same with the state(6) to be set 00:22:31.321 [2024-11-26 21:03:22.057599] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235460 is same with the state(6) to be set 00:22:31.321 [2024-11-26 21:03:22.057615] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235460 is same with the state(6) to be set 00:22:31.321 [2024-11-26 21:03:22.057626] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235460 is same with the state(6) to be set 00:22:31.321 [2024-11-26 21:03:22.057637] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235460 is same with the state(6) to be set 00:22:31.321 21:03:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:22:34.599 21:03:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:34.857 00:22:34.858 21:03:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:35.116 [2024-11-26 21:03:25.939827] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235f10 is same with the state(6) to be set 00:22:35.116 [2024-11-26 21:03:25.939890] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235f10 is same with the state(6) to be set 00:22:35.116 [2024-11-26 21:03:25.939906] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235f10 is same with the state(6) to be set 00:22:35.116 [2024-11-26 21:03:25.939919] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1235f10 is same with the state(6) to be set 00:22:35.116 [2024-11-26 21:03:25.939932] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235f10 is same with the state(6) to be set 00:22:35.116 [2024-11-26 21:03:25.939944] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235f10 is same with the state(6) to be set 00:22:35.116 [2024-11-26 21:03:25.939957] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235f10 is same with the state(6) to be set 00:22:35.116 [2024-11-26 21:03:25.939969] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235f10 is same with the state(6) to be set 00:22:35.116 [2024-11-26 21:03:25.939988] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235f10 is same with the state(6) to be set 00:22:35.116 [2024-11-26 21:03:25.940000] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235f10 is same with the state(6) to be set 00:22:35.116 [2024-11-26 21:03:25.940012] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235f10 is same with the state(6) to be set 00:22:35.116 [2024-11-26 21:03:25.940024] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235f10 is same with the state(6) to be set 00:22:35.116 [2024-11-26 21:03:25.940050] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235f10 is same with the state(6) to be set 00:22:35.116 [2024-11-26 21:03:25.940062] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235f10 is same with the state(6) to be set 00:22:35.117 [2024-11-26 21:03:25.940073] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235f10 is same with the state(6) to be set 00:22:35.117 [2024-11-26 21:03:25.940086] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1235f10 is same with the state(6) to be set 00:22:35.117 [2024-11-26 21:03:25.940098] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235f10 is same with the state(6) to be set 00:22:35.117 [2024-11-26 21:03:25.940109] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235f10 is same with the state(6) to be set 00:22:35.117 [2024-11-26 21:03:25.940121] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235f10 is same with the state(6) to be set 00:22:35.117 [2024-11-26 21:03:25.940147] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235f10 is same with the state(6) to be set 00:22:35.117 [2024-11-26 21:03:25.940169] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235f10 is same with the state(6) to be set 00:22:35.117 [2024-11-26 21:03:25.940180] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235f10 is same with the state(6) to be set 00:22:35.117 [2024-11-26 21:03:25.940191] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235f10 is same with the state(6) to be set 00:22:35.117 [2024-11-26 21:03:25.940203] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235f10 is same with the state(6) to be set 00:22:35.117 [2024-11-26 21:03:25.940214] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235f10 is same with the state(6) to be set 00:22:35.117 [2024-11-26 21:03:25.940225] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235f10 is same with the state(6) to be set 00:22:35.117 [2024-11-26 21:03:25.940236] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235f10 is same with the state(6) to be set 00:22:35.117 [2024-11-26 21:03:25.940247] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235f10 is same with 
the state(6) to be set 00:22:35.117 [2024-11-26 21:03:25.940258] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235f10 is same with the state(6) to be set 00:22:35.117 [2024-11-26 21:03:25.940269] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235f10 is same with the state(6) to be set 00:22:35.117 [2024-11-26 21:03:25.940280] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235f10 is same with the state(6) to be set 00:22:35.117 [2024-11-26 21:03:25.940291] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235f10 is same with the state(6) to be set 00:22:35.117 21:03:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:22:38.399 21:03:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:38.399 [2024-11-26 21:03:29.265957] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:38.399 21:03:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:22:39.774 21:03:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:39.774 [2024-11-26 21:03:30.557254] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fb280 is same with the state(6) to be set 00:22:39.774 [2024-11-26 21:03:30.557333] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fb280 is same with the state(6) to be set 00:22:39.774 [2024-11-26 21:03:30.557348] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fb280 is same with the state(6) to be set 00:22:39.774 [2024-11-26 21:03:30.557361] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fb280 is same with the state(6) to be set 00:22:39.774 [2024-11-26 21:03:30.557373] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fb280 is same with the state(6) to be set 00:22:39.774 [2024-11-26 21:03:30.557386] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fb280 is same with the state(6) to be set 00:22:39.774 [2024-11-26 21:03:30.557398] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fb280 is same with the state(6) to be set 00:22:39.774 [2024-11-26 21:03:30.557410] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fb280 is same with the state(6) to be set 00:22:39.774 21:03:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 4043411 00:22:45.039 { 00:22:45.039 "results": [ 00:22:45.039 { 00:22:45.039 "job": "NVMe0n1", 00:22:45.039 "core_mask": "0x1", 00:22:45.039 "workload": "verify", 00:22:45.039 "status": "finished", 00:22:45.039 "verify_range": { 00:22:45.039 "start": 0, 00:22:45.039 "length": 16384 00:22:45.039 }, 00:22:45.039 "queue_depth": 128, 00:22:45.039 "io_size": 4096, 00:22:45.039 "runtime": 15.004058, 00:22:45.039 "iops": 8181.85320264691, 00:22:45.039 "mibps": 31.960364072839493, 00:22:45.039 "io_failed": 15957, 00:22:45.039 "io_timeout": 0, 00:22:45.039 "avg_latency_us": 13818.155911129053, 00:22:45.039 "min_latency_us": 573.44, 00:22:45.039 "max_latency_us": 41748.85925925926 00:22:45.039 } 00:22:45.039 ], 00:22:45.039 "core_count": 1 00:22:45.039 } 00:22:45.039 21:03:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 4043276 00:22:45.039 21:03:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 4043276 ']' 00:22:45.039 21:03:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 4043276 00:22:45.039 21:03:35 nvmf_tcp.nvmf_host.nvmf_failover 
-- common/autotest_common.sh@959 -- # uname 00:22:45.039 21:03:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:45.039 21:03:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4043276 00:22:45.039 21:03:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:45.039 21:03:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:45.039 21:03:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4043276' 00:22:45.039 killing process with pid 4043276 00:22:45.039 21:03:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 4043276 00:22:45.039 21:03:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 4043276 00:22:45.310 21:03:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:45.310 [2024-11-26 21:03:19.738126] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:22:45.310 [2024-11-26 21:03:19.738206] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4043276 ] 00:22:45.310 [2024-11-26 21:03:19.805817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:45.310 [2024-11-26 21:03:19.864332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:45.310 Running I/O for 15 seconds... 
00:22:45.310 8355.00 IOPS, 32.64 MiB/s [2024-11-26T20:03:36.248Z]
00:22:45.310 [2024-11-26 21:03:22.058389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:80840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:45.310 [2024-11-26 21:03:22.058438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ command/completion pairs repeated for lba:80848 through lba:81200 (len:8 each, varying cid), every one completed ABORTED - SQ DELETION (00/08) ...]
00:22:45.311 [2024-11-26 21:03:22.060910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:81232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:45.311 [2024-11-26 21:03:22.060936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE command/completion pairs repeated for lba:81240 through lba:81808 (len:8 each, varying cid), every one completed ABORTED - SQ DELETION (00/08); log continues mid-entry ...]
*NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:81816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.313 [2024-11-26 21:03:22.064732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.313 [2024-11-26 21:03:22.064757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:81824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.313 [2024-11-26 21:03:22.064780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.313 [2024-11-26 21:03:22.064806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:81832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.313 [2024-11-26 21:03:22.064829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.313 [2024-11-26 21:03:22.064856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:81840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.313 [2024-11-26 21:03:22.064878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.313 [2024-11-26 21:03:22.064904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:81848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.313 [2024-11-26 21:03:22.064927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.313 [2024-11-26 21:03:22.064954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:81856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.313 [2024-11-26 21:03:22.064982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:45.313 [2024-11-26 21:03:22.065010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:81208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.313 [2024-11-26 21:03:22.065033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.313 [2024-11-26 21:03:22.065059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:81216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.313 [2024-11-26 21:03:22.065082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.313 [2024-11-26 21:03:22.065189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.313 [2024-11-26 21:03:22.065218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.313 [2024-11-26 21:03:22.065245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.313 [2024-11-26 21:03:22.065267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.313 [2024-11-26 21:03:22.065291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.313 [2024-11-26 21:03:22.065314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.313 [2024-11-26 21:03:22.065336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.313 [2024-11-26 21:03:22.065360] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.313 [2024-11-26 21:03:22.065381] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8f570 is same with the state(6) to be set 00:22:45.313 [2024-11-26 21:03:22.065737] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.313 [2024-11-26 21:03:22.065763] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.313 [2024-11-26 21:03:22.065784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81224 len:8 PRP1 0x0 PRP2 0x0 00:22:45.313 [2024-11-26 21:03:22.065806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.313 [2024-11-26 21:03:22.065832] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.313 [2024-11-26 21:03:22.065852] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.313 [2024-11-26 21:03:22.065870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80840 len:8 PRP1 0x0 PRP2 0x0 00:22:45.313 [2024-11-26 21:03:22.065892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.313 [2024-11-26 21:03:22.065914] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.313 [2024-11-26 21:03:22.065932] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.313 [2024-11-26 21:03:22.065951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80848 len:8 PRP1 0x0 PRP2 0x0 00:22:45.313 [2024-11-26 21:03:22.065973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.313 [2024-11-26 21:03:22.065995] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.313 [2024-11-26 21:03:22.066015] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.314 [2024-11-26 21:03:22.066039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80856 len:8 PRP1 0x0 PRP2 0x0 00:22:45.314 [2024-11-26 21:03:22.066062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.314 [2024-11-26 21:03:22.066083] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.314 [2024-11-26 21:03:22.066102] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.314 [2024-11-26 21:03:22.066122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80864 len:8 PRP1 0x0 PRP2 0x0 00:22:45.314 [2024-11-26 21:03:22.066142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.314 [2024-11-26 21:03:22.066164] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.314 [2024-11-26 21:03:22.066183] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.314 [2024-11-26 21:03:22.066200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80872 len:8 PRP1 0x0 PRP2 0x0 00:22:45.314 [2024-11-26 21:03:22.066222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.314 [2024-11-26 21:03:22.066244] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.314 [2024-11-26 21:03:22.066262] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:22:45.314 [2024-11-26 21:03:22.066281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80880 len:8 PRP1 0x0 PRP2 0x0 00:22:45.314 [2024-11-26 21:03:22.066301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.314 [2024-11-26 21:03:22.066325] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.314 [2024-11-26 21:03:22.066344] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.314 [2024-11-26 21:03:22.066362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80888 len:8 PRP1 0x0 PRP2 0x0 00:22:45.314 [2024-11-26 21:03:22.066384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.314 [2024-11-26 21:03:22.066405] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.314 [2024-11-26 21:03:22.066424] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.314 [2024-11-26 21:03:22.066444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80896 len:8 PRP1 0x0 PRP2 0x0 00:22:45.314 [2024-11-26 21:03:22.066465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.314 [2024-11-26 21:03:22.066487] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.314 [2024-11-26 21:03:22.066506] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.314 [2024-11-26 21:03:22.066523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80904 len:8 PRP1 0x0 PRP2 0x0 00:22:45.314 [2024-11-26 21:03:22.066545] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.314 [2024-11-26 21:03:22.066567] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.314 [2024-11-26 21:03:22.066588] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.314 [2024-11-26 21:03:22.066609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80912 len:8 PRP1 0x0 PRP2 0x0 00:22:45.314 [2024-11-26 21:03:22.066628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.314 [2024-11-26 21:03:22.066660] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.314 [2024-11-26 21:03:22.066707] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.314 [2024-11-26 21:03:22.066728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80920 len:8 PRP1 0x0 PRP2 0x0 00:22:45.314 [2024-11-26 21:03:22.066751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.314 [2024-11-26 21:03:22.066772] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.314 [2024-11-26 21:03:22.066793] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.314 [2024-11-26 21:03:22.066813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80928 len:8 PRP1 0x0 PRP2 0x0 00:22:45.314 [2024-11-26 21:03:22.066840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.314 [2024-11-26 21:03:22.066864] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.314 
[2024-11-26 21:03:22.066882] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.314 [2024-11-26 21:03:22.066901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80936 len:8 PRP1 0x0 PRP2 0x0 00:22:45.314 [2024-11-26 21:03:22.066925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.314 [2024-11-26 21:03:22.066947] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.314 [2024-11-26 21:03:22.066967] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.314 [2024-11-26 21:03:22.066986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80944 len:8 PRP1 0x0 PRP2 0x0 00:22:45.314 [2024-11-26 21:03:22.067008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.314 [2024-11-26 21:03:22.067031] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.314 [2024-11-26 21:03:22.067050] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.314 [2024-11-26 21:03:22.067068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80952 len:8 PRP1 0x0 PRP2 0x0 00:22:45.314 [2024-11-26 21:03:22.067090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.314 [2024-11-26 21:03:22.067112] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.314 [2024-11-26 21:03:22.067131] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.314 [2024-11-26 21:03:22.067152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:80960 len:8 PRP1 0x0 PRP2 0x0 00:22:45.314 [2024-11-26 21:03:22.067173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.314 [2024-11-26 21:03:22.067195] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.314 [2024-11-26 21:03:22.067215] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.314 [2024-11-26 21:03:22.067234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80968 len:8 PRP1 0x0 PRP2 0x0 00:22:45.314 [2024-11-26 21:03:22.067255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.314 [2024-11-26 21:03:22.067279] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.314 [2024-11-26 21:03:22.067298] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.314 [2024-11-26 21:03:22.067316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80976 len:8 PRP1 0x0 PRP2 0x0 00:22:45.314 [2024-11-26 21:03:22.067340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.314 [2024-11-26 21:03:22.067368] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.314 [2024-11-26 21:03:22.067388] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.314 [2024-11-26 21:03:22.067418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80984 len:8 PRP1 0x0 PRP2 0x0 00:22:45.314 [2024-11-26 21:03:22.067438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.314 [2024-11-26 21:03:22.067463] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.314 [2024-11-26 21:03:22.067481] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.314 [2024-11-26 21:03:22.067500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80992 len:8 PRP1 0x0 PRP2 0x0 00:22:45.314 [2024-11-26 21:03:22.067522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.314 [2024-11-26 21:03:22.067544] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.314 [2024-11-26 21:03:22.067563] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.314 [2024-11-26 21:03:22.067583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81000 len:8 PRP1 0x0 PRP2 0x0 00:22:45.314 [2024-11-26 21:03:22.067603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.314 [2024-11-26 21:03:22.067626] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.314 [2024-11-26 21:03:22.067653] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.314 [2024-11-26 21:03:22.067677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81008 len:8 PRP1 0x0 PRP2 0x0 00:22:45.314 [2024-11-26 21:03:22.067712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.314 [2024-11-26 21:03:22.067738] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.315 [2024-11-26 21:03:22.067761] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.315 [2024-11-26 21:03:22.067779] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81016 len:8 PRP1 0x0 PRP2 0x0 00:22:45.315 [2024-11-26 21:03:22.067801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.315 [2024-11-26 21:03:22.067825] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.315 [2024-11-26 21:03:22.067844] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.315 [2024-11-26 21:03:22.067868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81024 len:8 PRP1 0x0 PRP2 0x0 00:22:45.315 [2024-11-26 21:03:22.067891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.315 [2024-11-26 21:03:22.067913] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.315 [2024-11-26 21:03:22.067935] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.315 [2024-11-26 21:03:22.067953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81032 len:8 PRP1 0x0 PRP2 0x0 00:22:45.315 [2024-11-26 21:03:22.067975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.315 [2024-11-26 21:03:22.067998] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.315 [2024-11-26 21:03:22.068017] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.315 [2024-11-26 21:03:22.068038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81040 len:8 PRP1 0x0 PRP2 0x0 00:22:45.315 [2024-11-26 21:03:22.068066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.315 [2024-11-26 21:03:22.068091] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.315 [2024-11-26 21:03:22.068111] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.315 [2024-11-26 21:03:22.068130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81048 len:8 PRP1 0x0 PRP2 0x0 00:22:45.315 [2024-11-26 21:03:22.068154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.315 [2024-11-26 21:03:22.068177] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.315 [2024-11-26 21:03:22.068198] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.315 [2024-11-26 21:03:22.068218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81056 len:8 PRP1 0x0 PRP2 0x0 00:22:45.315 [2024-11-26 21:03:22.068239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.315 [2024-11-26 21:03:22.068263] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.315 [2024-11-26 21:03:22.068284] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.315 [2024-11-26 21:03:22.068305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81064 len:8 PRP1 0x0 PRP2 0x0 00:22:45.315 [2024-11-26 21:03:22.068329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.315 [2024-11-26 21:03:22.068352] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.315 [2024-11-26 21:03:22.068372] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:22:45.315 [2024-11-26 21:03:22.068400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81072 len:8 PRP1 0x0 PRP2 0x0 00:22:45.315 [2024-11-26 21:03:22.068423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.315 [2024-11-26 21:03:22.068448] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.315 [2024-11-26 21:03:22.068467] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.315 [2024-11-26 21:03:22.068487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81080 len:8 PRP1 0x0 PRP2 0x0 00:22:45.315 [2024-11-26 21:03:22.068510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.315 [2024-11-26 21:03:22.068532] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.315 [2024-11-26 21:03:22.068554] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.315 [2024-11-26 21:03:22.068574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81088 len:8 PRP1 0x0 PRP2 0x0 00:22:45.315 [2024-11-26 21:03:22.068596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.315 [2024-11-26 21:03:22.068621] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.315 [2024-11-26 21:03:22.068640] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.315 [2024-11-26 21:03:22.068662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81096 len:8 PRP1 0x0 PRP2 0x0 00:22:45.315 [2024-11-26 21:03:22.068693] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.315 [2024-11-26 21:03:22.068719] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.315 [2024-11-26 21:03:22.068741] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.315 [2024-11-26 21:03:22.068765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81104 len:8 PRP1 0x0 PRP2 0x0 00:22:45.315 [2024-11-26 21:03:22.068790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.315 [2024-11-26 21:03:22.068813] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.315 [2024-11-26 21:03:22.068833] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.315 [2024-11-26 21:03:22.068853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81112 len:8 PRP1 0x0 PRP2 0x0 00:22:45.315 [2024-11-26 21:03:22.068875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.315 [2024-11-26 21:03:22.068900] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.315 [2024-11-26 21:03:22.068920] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.315 [2024-11-26 21:03:22.068939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81120 len:8 PRP1 0x0 PRP2 0x0 00:22:45.315 [2024-11-26 21:03:22.068963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.315 [2024-11-26 21:03:22.068986] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.315 
[2024-11-26 21:03:22.069007] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.315 [2024-11-26 21:03:22.069026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81128 len:8 PRP1 0x0 PRP2 0x0 00:22:45.315 [2024-11-26 21:03:22.069047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.315 [2024-11-26 21:03:22.069071] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.315 [2024-11-26 21:03:22.069089] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.315 [2024-11-26 21:03:22.069114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81136 len:8 PRP1 0x0 PRP2 0x0 00:22:45.315 [2024-11-26 21:03:22.069138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.315 [2024-11-26 21:03:22.069161] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.315 [2024-11-26 21:03:22.069181] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.315 [2024-11-26 21:03:22.069200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81144 len:8 PRP1 0x0 PRP2 0x0 00:22:45.315 [2024-11-26 21:03:22.069222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.315 [2024-11-26 21:03:22.069246] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.315 [2024-11-26 21:03:22.069265] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.315 [2024-11-26 21:03:22.069284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:81152 len:8 PRP1 0x0 PRP2 0x0 00:22:45.315 [2024-11-26 21:03:22.069307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.315 [2024-11-26 21:03:22.069329] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.315 [2024-11-26 21:03:22.069349] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.315 [2024-11-26 21:03:22.069368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81160 len:8 PRP1 0x0 PRP2 0x0 00:22:45.315 [2024-11-26 21:03:22.075580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.315 [2024-11-26 21:03:22.075621] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.315 [2024-11-26 21:03:22.075644] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.315 [2024-11-26 21:03:22.075664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81168 len:8 PRP1 0x0 PRP2 0x0 00:22:45.315 [2024-11-26 21:03:22.075696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.315 [2024-11-26 21:03:22.075723] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.315 [2024-11-26 21:03:22.075743] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.315 [2024-11-26 21:03:22.075765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81176 len:8 PRP1 0x0 PRP2 0x0 00:22:45.315 [2024-11-26 21:03:22.075786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.315 [2024-11-26 21:03:22.075809] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.315 [2024-11-26 21:03:22.075829] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.315 [2024-11-26 21:03:22.075847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81184 len:8 PRP1 0x0 PRP2 0x0 00:22:45.315 [2024-11-26 21:03:22.075870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.315 [2024-11-26 21:03:22.075894] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.315 [2024-11-26 21:03:22.075914] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.315 [2024-11-26 21:03:22.075934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81192 len:8 PRP1 0x0 PRP2 0x0 00:22:45.315 [2024-11-26 21:03:22.075955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.316 [2024-11-26 21:03:22.075977] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.316 [2024-11-26 21:03:22.076011] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.316 [2024-11-26 21:03:22.076028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81200 len:8 PRP1 0x0 PRP2 0x0 00:22:45.316 [2024-11-26 21:03:22.076050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.316 [2024-11-26 21:03:22.076072] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.316 [2024-11-26 21:03:22.076090] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.316 [2024-11-26 21:03:22.076110] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81232 len:8 PRP1 0x0 PRP2 0x0 00:22:45.316 [2024-11-26 21:03:22.076129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.316 [2024-11-26 21:03:22.076152] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.316 [2024-11-26 21:03:22.076171] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.316 [2024-11-26 21:03:22.076187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81240 len:8 PRP1 0x0 PRP2 0x0 00:22:45.316 [2024-11-26 21:03:22.076209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.316 [2024-11-26 21:03:22.076230] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.316 [2024-11-26 21:03:22.076249] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.316 [2024-11-26 21:03:22.076268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81248 len:8 PRP1 0x0 PRP2 0x0 00:22:45.316 [2024-11-26 21:03:22.076294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.316 [2024-11-26 21:03:22.076318] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.316 [2024-11-26 21:03:22.076337] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.316 [2024-11-26 21:03:22.076355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81256 len:8 PRP1 0x0 PRP2 0x0 00:22:45.316 [2024-11-26 21:03:22.076377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.316 [2024-11-26 21:03:22.076398] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.316 [2024-11-26 21:03:22.076418] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.316 [2024-11-26 21:03:22.076437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81264 len:8 PRP1 0x0 PRP2 0x0 00:22:45.316 [2024-11-26 21:03:22.076457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.316 [2024-11-26 21:03:22.076480] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.316 [2024-11-26 21:03:22.076498] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.316 [2024-11-26 21:03:22.076516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81272 len:8 PRP1 0x0 PRP2 0x0 00:22:45.316 [2024-11-26 21:03:22.076538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.316 [2024-11-26 21:03:22.076559] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.316 [2024-11-26 21:03:22.076579] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.316 [2024-11-26 21:03:22.076597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81280 len:8 PRP1 0x0 PRP2 0x0 00:22:45.316 [2024-11-26 21:03:22.076616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.316 [2024-11-26 21:03:22.076639] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.316 [2024-11-26 21:03:22.076657] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.316 [2024-11-26 21:03:22.076675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81288 len:8 PRP1 0x0 PRP2 0x0 00:22:45.316 [2024-11-26 21:03:22.076721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.316 [2024-11-26 21:03:22.076744] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.316 [2024-11-26 21:03:22.076765] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.316 [2024-11-26 21:03:22.076784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81296 len:8 PRP1 0x0 PRP2 0x0 00:22:45.316 [2024-11-26 21:03:22.076804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.316 [2024-11-26 21:03:22.076828] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.316 [2024-11-26 21:03:22.076846] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.316 [2024-11-26 21:03:22.076865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81304 len:8 PRP1 0x0 PRP2 0x0 00:22:45.316 [2024-11-26 21:03:22.076887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.316 [2024-11-26 21:03:22.076909] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.316 [2024-11-26 21:03:22.076930] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.316 [2024-11-26 21:03:22.076955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81312 len:8 PRP1 0x0 PRP2 0x0 00:22:45.316 
[2024-11-26 21:03:22.076977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.316 [2024-11-26 21:03:22.077002] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.316 [2024-11-26 21:03:22.077034] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.316 [2024-11-26 21:03:22.077064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81320 len:8 PRP1 0x0 PRP2 0x0 00:22:45.316 [2024-11-26 21:03:22.077086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.316 [2024-11-26 21:03:22.077107] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.316 [2024-11-26 21:03:22.077127] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.316 [2024-11-26 21:03:22.077145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81328 len:8 PRP1 0x0 PRP2 0x0 00:22:45.316 [2024-11-26 21:03:22.077166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.316 [2024-11-26 21:03:22.077189] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.316 [2024-11-26 21:03:22.077220] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.316 [2024-11-26 21:03:22.077239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81336 len:8 PRP1 0x0 PRP2 0x0 00:22:45.316 [2024-11-26 21:03:22.077260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.316 [2024-11-26 21:03:22.077280] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:22:45.316 [2024-11-26 21:03:22.077300] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.316 [2024-11-26 21:03:22.077318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81344 len:8 PRP1 0x0 PRP2 0x0 00:22:45.316 [2024-11-26 21:03:22.077337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.316 [2024-11-26 21:03:22.077360] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.316 [2024-11-26 21:03:22.077377] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.316 [2024-11-26 21:03:22.077395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81352 len:8 PRP1 0x0 PRP2 0x0 00:22:45.316 [2024-11-26 21:03:22.077416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.316 [2024-11-26 21:03:22.077435] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.316 [2024-11-26 21:03:22.077454] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.316 [2024-11-26 21:03:22.077472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81360 len:8 PRP1 0x0 PRP2 0x0 00:22:45.316 [2024-11-26 21:03:22.077491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.316 [2024-11-26 21:03:22.077514] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.316 [2024-11-26 21:03:22.077532] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.316 [2024-11-26 21:03:22.077549] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81368 len:8 PRP1 0x0 PRP2 0x0 00:22:45.316 [2024-11-26 21:03:22.077571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.316 [2024-11-26 21:03:22.077591] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.316 [2024-11-26 21:03:22.077620] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.316 [2024-11-26 21:03:22.077639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81376 len:8 PRP1 0x0 PRP2 0x0 00:22:45.316 [2024-11-26 21:03:22.077658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.316 [2024-11-26 21:03:22.077707] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.316 [2024-11-26 21:03:22.077741] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.316 [2024-11-26 21:03:22.077762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81384 len:8 PRP1 0x0 PRP2 0x0 00:22:45.316 [2024-11-26 21:03:22.077784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.316 [2024-11-26 21:03:22.077806] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.316 [2024-11-26 21:03:22.077826] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.316 [2024-11-26 21:03:22.077844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81392 len:8 PRP1 0x0 PRP2 0x0 00:22:45.316 [2024-11-26 21:03:22.077865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:45.316 [2024-11-26 21:03:22.077890] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.316 [2024-11-26 21:03:22.077909] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.316 [2024-11-26 21:03:22.077930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81400 len:8 PRP1 0x0 PRP2 0x0 00:22:45.316 [2024-11-26 21:03:22.077951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.316 [2024-11-26 21:03:22.077987] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.317 [2024-11-26 21:03:22.078005] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.317 [2024-11-26 21:03:22.078023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81408 len:8 PRP1 0x0 PRP2 0x0 00:22:45.317 [2024-11-26 21:03:22.078058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.317 [2024-11-26 21:03:22.078079] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.317 [2024-11-26 21:03:22.078096] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.317 [2024-11-26 21:03:22.078114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81416 len:8 PRP1 0x0 PRP2 0x0 00:22:45.317 [2024-11-26 21:03:22.078135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.317 [2024-11-26 21:03:22.078157] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.317 [2024-11-26 21:03:22.078176] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:22:45.317 [2024-11-26 21:03:22.078194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81424 len:8 PRP1 0x0 PRP2 0x0 00:22:45.317 [2024-11-26 21:03:22.078215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.317 [2024-11-26 21:03:22.078237] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.317 [2024-11-26 21:03:22.078255] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.317 [2024-11-26 21:03:22.078275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81432 len:8 PRP1 0x0 PRP2 0x0 00:22:45.317 [2024-11-26 21:03:22.078294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.317 [2024-11-26 21:03:22.078320] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.317 [2024-11-26 21:03:22.078339] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.317 [2024-11-26 21:03:22.078355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81440 len:8 PRP1 0x0 PRP2 0x0 00:22:45.317 [2024-11-26 21:03:22.078377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.317 [2024-11-26 21:03:22.078397] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.317 [2024-11-26 21:03:22.078415] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.317 [2024-11-26 21:03:22.078434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81448 len:8 PRP1 0x0 PRP2 0x0 00:22:45.317 [2024-11-26 21:03:22.078453] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.317 [2024-11-26 21:03:22.078475] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.317 [2024-11-26 21:03:22.078494] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.317 [2024-11-26 21:03:22.078512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81456 len:8 PRP1 0x0 PRP2 0x0 00:22:45.317 [2024-11-26 21:03:22.078534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.317 [2024-11-26 21:03:22.078555] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.317 [2024-11-26 21:03:22.078575] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.317 [2024-11-26 21:03:22.078594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81464 len:8 PRP1 0x0 PRP2 0x0 00:22:45.317 [2024-11-26 21:03:22.078615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.317 [2024-11-26 21:03:22.078637] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.317 [2024-11-26 21:03:22.078656] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.317 [2024-11-26 21:03:22.078701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81472 len:8 PRP1 0x0 PRP2 0x0 00:22:45.317 [2024-11-26 21:03:22.078727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.317 [2024-11-26 21:03:22.078764] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.317 
[2024-11-26 21:03:22.078783] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.317 [2024-11-26 21:03:22.078804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81480 len:8 PRP1 0x0 PRP2 0x0 00:22:45.317 [2024-11-26 21:03:22.078825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.317 [2024-11-26 21:03:22.078848] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.317 [2024-11-26 21:03:22.078868] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.317 [2024-11-26 21:03:22.078886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81488 len:8 PRP1 0x0 PRP2 0x0 00:22:45.317 [2024-11-26 21:03:22.078908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.317 [2024-11-26 21:03:22.078930] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.317 [2024-11-26 21:03:22.078949] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.317 [2024-11-26 21:03:22.078984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81496 len:8 PRP1 0x0 PRP2 0x0 00:22:45.317 [2024-11-26 21:03:22.079010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.317 [2024-11-26 21:03:22.079033] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.317 [2024-11-26 21:03:22.079051] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.317 [2024-11-26 21:03:22.079069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:81504 len:8 PRP1 0x0 PRP2 0x0 00:22:45.317 [2024-11-26 21:03:22.079091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.317 [2024-11-26 21:03:22.079112] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.317 [2024-11-26 21:03:22.079129] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.317 [2024-11-26 21:03:22.079149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81512 len:8 PRP1 0x0 PRP2 0x0 00:22:45.317 [2024-11-26 21:03:22.079171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.317 [2024-11-26 21:03:22.079192] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.317 [2024-11-26 21:03:22.079211] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.317 [2024-11-26 21:03:22.079228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81520 len:8 PRP1 0x0 PRP2 0x0 00:22:45.317 [2024-11-26 21:03:22.079249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.317 [2024-11-26 21:03:22.079270] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.317 [2024-11-26 21:03:22.079288] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.317 [2024-11-26 21:03:22.079308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81528 len:8 PRP1 0x0 PRP2 0x0 00:22:45.317 [2024-11-26 21:03:22.079328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.317 [2024-11-26 21:03:22.079362] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.317 [2024-11-26 21:03:22.079381] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.317 [2024-11-26 21:03:22.079399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81536 len:8 PRP1 0x0 PRP2 0x0 00:22:45.317 [2024-11-26 21:03:22.079419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.317 [2024-11-26 21:03:22.079440] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.317 [2024-11-26 21:03:22.079458] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.317 [2024-11-26 21:03:22.079482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81544 len:8 PRP1 0x0 PRP2 0x0 00:22:45.317 [2024-11-26 21:03:22.079503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.317 [2024-11-26 21:03:22.079526] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.317 [2024-11-26 21:03:22.079544] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.317 [2024-11-26 21:03:22.079561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81552 len:8 PRP1 0x0 PRP2 0x0 00:22:45.317 [2024-11-26 21:03:22.079582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.317 [2024-11-26 21:03:22.079604] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.317 [2024-11-26 21:03:22.079623] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.317 [2024-11-26 
21:03:22.079646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81560 len:8 PRP1 0x0 PRP2 0x0 00:22:45.317 [2024-11-26 21:03:22.079684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.317 [2024-11-26 21:03:22.079718] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.317 [2024-11-26 21:03:22.079738] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.317 [2024-11-26 21:03:22.079757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81568 len:8 PRP1 0x0 PRP2 0x0 00:22:45.317 [2024-11-26 21:03:22.079779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.317 [2024-11-26 21:03:22.079800] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.317 [2024-11-26 21:03:22.079821] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.317 [2024-11-26 21:03:22.079839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81576 len:8 PRP1 0x0 PRP2 0x0 00:22:45.317 [2024-11-26 21:03:22.079860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.317 [2024-11-26 21:03:22.079883] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.317 [2024-11-26 21:03:22.079901] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.317 [2024-11-26 21:03:22.079922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81584 len:8 PRP1 0x0 PRP2 0x0 00:22:45.317 [2024-11-26 21:03:22.079943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.317 [2024-11-26 21:03:22.079987] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.317 [2024-11-26 21:03:22.080007] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.317 [2024-11-26 21:03:22.080025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81592 len:8 PRP1 0x0 PRP2 0x0 00:22:45.318 [2024-11-26 21:03:22.080045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.318 [2024-11-26 21:03:22.080082] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.318 [2024-11-26 21:03:22.080099] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.318 [2024-11-26 21:03:22.080118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81600 len:8 PRP1 0x0 PRP2 0x0 00:22:45.318 [2024-11-26 21:03:22.080138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.318 [2024-11-26 21:03:22.080158] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.318 [2024-11-26 21:03:22.080178] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.318 [2024-11-26 21:03:22.080202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81608 len:8 PRP1 0x0 PRP2 0x0 00:22:45.318 [2024-11-26 21:03:22.080223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.318 [2024-11-26 21:03:22.080245] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.318 [2024-11-26 21:03:22.080262] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.318 [2024-11-26 21:03:22.080281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81616 len:8 PRP1 0x0 PRP2 0x0 00:22:45.318 [2024-11-26 21:03:22.080302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.318 [2024-11-26 21:03:22.080329] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.318 [2024-11-26 21:03:22.080348] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.318 [2024-11-26 21:03:22.080365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81624 len:8 PRP1 0x0 PRP2 0x0 00:22:45.318 [2024-11-26 21:03:22.080386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.318 [2024-11-26 21:03:22.080407] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.318 [2024-11-26 21:03:22.080427] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.318 [2024-11-26 21:03:22.080445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81632 len:8 PRP1 0x0 PRP2 0x0 00:22:45.318 [2024-11-26 21:03:22.080464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.318 [2024-11-26 21:03:22.080485] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.318 [2024-11-26 21:03:22.080503] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.318 [2024-11-26 21:03:22.080521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81640 len:8 PRP1 0x0 PRP2 0x0 00:22:45.318 
[2024-11-26 21:03:22.080542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.318 [2024-11-26 21:03:22.080562] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.318 [2024-11-26 21:03:22.080581] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.318 [2024-11-26 21:03:22.080599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81648 len:8 PRP1 0x0 PRP2 0x0 00:22:45.318 [2024-11-26 21:03:22.080618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.318 [2024-11-26 21:03:22.080641] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.318 [2024-11-26 21:03:22.080658] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.318 [2024-11-26 21:03:22.080701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81656 len:8 PRP1 0x0 PRP2 0x0 00:22:45.318 [2024-11-26 21:03:22.080726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.318 [2024-11-26 21:03:22.080750] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.318 [2024-11-26 21:03:22.080769] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.318 [2024-11-26 21:03:22.080788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81664 len:8 PRP1 0x0 PRP2 0x0 00:22:45.318 [2024-11-26 21:03:22.080808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.318 [2024-11-26 21:03:22.080832] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:22:45.318 [2024-11-26 21:03:22.080850] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.318 [2024-11-26 21:03:22.080869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81672 len:8 PRP1 0x0 PRP2 0x0 00:22:45.318 [2024-11-26 21:03:22.080892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.318 [2024-11-26 21:03:22.080914] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.318 [2024-11-26 21:03:22.080934] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.318 [2024-11-26 21:03:22.080953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81680 len:8 PRP1 0x0 PRP2 0x0 00:22:45.318 [2024-11-26 21:03:22.080992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.318 [2024-11-26 21:03:22.081015] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.318 [2024-11-26 21:03:22.081032] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.318 [2024-11-26 21:03:22.081049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81688 len:8 PRP1 0x0 PRP2 0x0 00:22:45.318 [2024-11-26 21:03:22.081069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.318 [2024-11-26 21:03:22.081089] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.318 [2024-11-26 21:03:22.081109] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.318 [2024-11-26 21:03:22.081126] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81696 len:8 PRP1 0x0 PRP2 0x0 00:22:45.318 [2024-11-26 21:03:22.081146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.318 [2024-11-26 21:03:22.081168] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.318 [2024-11-26 21:03:22.081186] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.318 [2024-11-26 21:03:22.081204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81704 len:8 PRP1 0x0 PRP2 0x0 00:22:45.318 [2024-11-26 21:03:22.081224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.318 [2024-11-26 21:03:22.081245] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.318 [2024-11-26 21:03:22.081263] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.318 [2024-11-26 21:03:22.081281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81712 len:8 PRP1 0x0 PRP2 0x0 00:22:45.318 [2024-11-26 21:03:22.081302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.318 [2024-11-26 21:03:22.081324] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.318 [2024-11-26 21:03:22.081342] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.318 [2024-11-26 21:03:22.081358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81720 len:8 PRP1 0x0 PRP2 0x0 00:22:45.318 [2024-11-26 21:03:22.081379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:45.318 [2024-11-26 21:03:22.081400] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.318 [2024-11-26 21:03:22.081418] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.318 [2024-11-26 21:03:22.081436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81728 len:8 PRP1 0x0 PRP2 0x0 00:22:45.318 [2024-11-26 21:03:22.081455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.318 [2024-11-26 21:03:22.081476] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.318 [2024-11-26 21:03:22.081494] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.318 [2024-11-26 21:03:22.081512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81736 len:8 PRP1 0x0 PRP2 0x0 00:22:45.318 [2024-11-26 21:03:22.081532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.318 [2024-11-26 21:03:22.081553] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.318 [2024-11-26 21:03:22.081574] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.318 [2024-11-26 21:03:22.081598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81744 len:8 PRP1 0x0 PRP2 0x0 00:22:45.318 [2024-11-26 21:03:22.081619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.318 [2024-11-26 21:03:22.081642] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.318 [2024-11-26 21:03:22.081660] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:22:45.318 [2024-11-26 21:03:22.081702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81752 len:8 PRP1 0x0 PRP2 0x0 00:22:45.318 [2024-11-26 21:03:22.081726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.318 [2024-11-26 21:03:22.081748] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.318 [2024-11-26 21:03:22.081769] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.318 [2024-11-26 21:03:22.081787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81760 len:8 PRP1 0x0 PRP2 0x0 00:22:45.318 [2024-11-26 21:03:22.081808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.318 [2024-11-26 21:03:22.081833] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.318 [2024-11-26 21:03:22.081853] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.318 [2024-11-26 21:03:22.081873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81768 len:8 PRP1 0x0 PRP2 0x0 00:22:45.318 [2024-11-26 21:03:22.081895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.318 [2024-11-26 21:03:22.081916] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.318 [2024-11-26 21:03:22.081937] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.318 [2024-11-26 21:03:22.081956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81776 len:8 PRP1 0x0 PRP2 0x0 00:22:45.318 [2024-11-26 21:03:22.081991] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.319 [2024-11-26 21:03:22.082015] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.319 [2024-11-26 21:03:22.082033] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.319 [2024-11-26 21:03:22.082053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81784 len:8 PRP1 0x0 PRP2 0x0 00:22:45.319 [2024-11-26 21:03:22.082089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.319 [2024-11-26 21:03:22.082111] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.319 [2024-11-26 21:03:22.082129] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.319 [2024-11-26 21:03:22.082147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81792 len:8 PRP1 0x0 PRP2 0x0 00:22:45.319 [2024-11-26 21:03:22.082167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.319 [2024-11-26 21:03:22.082189] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.319 [2024-11-26 21:03:22.082207] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.319 [2024-11-26 21:03:22.082228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81800 len:8 PRP1 0x0 PRP2 0x0 00:22:45.319 [2024-11-26 21:03:22.082248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.319 [2024-11-26 21:03:22.082270] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.319 
[2024-11-26 21:03:22.082294] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.319 [2024-11-26 21:03:22.082312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81808 len:8 PRP1 0x0 PRP2 0x0 00:22:45.319 [2024-11-26 21:03:22.082334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.319 [2024-11-26 21:03:22.082355] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.319 [2024-11-26 21:03:22.082375] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.319 [2024-11-26 21:03:22.082393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81816 len:8 PRP1 0x0 PRP2 0x0 00:22:45.319 [2024-11-26 21:03:22.082414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.319 [2024-11-26 21:03:22.082438] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.319 [2024-11-26 21:03:22.082456] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.319 [2024-11-26 21:03:22.082475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81824 len:8 PRP1 0x0 PRP2 0x0 00:22:45.319 [2024-11-26 21:03:22.082496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.319 [2024-11-26 21:03:22.082517] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.319 [2024-11-26 21:03:22.082537] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.319 [2024-11-26 21:03:22.082555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:81832 len:8 PRP1 0x0 PRP2 0x0 00:22:45.319 [2024-11-26 21:03:22.082575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.319 [2024-11-26 21:03:22.082596] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.319 [2024-11-26 21:03:22.082615] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.319 [2024-11-26 21:03:22.082635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81840 len:8 PRP1 0x0 PRP2 0x0 00:22:45.319 [2024-11-26 21:03:22.082655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.319 [2024-11-26 21:03:22.082702] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.319 [2024-11-26 21:03:22.082725] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.319 [2024-11-26 21:03:22.082743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81848 len:8 PRP1 0x0 PRP2 0x0 00:22:45.319 [2024-11-26 21:03:22.082766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.319 [2024-11-26 21:03:22.082789] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.319 [2024-11-26 21:03:22.082814] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.319 [2024-11-26 21:03:22.082835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81856 len:8 PRP1 0x0 PRP2 0x0 00:22:45.319 [2024-11-26 21:03:22.082856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.319 [2024-11-26 21:03:22.082879] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.319 [2024-11-26 21:03:22.088767] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.319 [2024-11-26 21:03:22.088794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81208 len:8 PRP1 0x0 PRP2 0x0 00:22:45.319 [2024-11-26 21:03:22.088816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.319 [2024-11-26 21:03:22.088845] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.319 [2024-11-26 21:03:22.088864] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.319 [2024-11-26 21:03:22.088883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81216 len:8 PRP1 0x0 PRP2 0x0 00:22:45.319 [2024-11-26 21:03:22.088903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.319 [2024-11-26 21:03:22.089000] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:45.319 [2024-11-26 21:03:22.089037] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:22:45.319 [2024-11-26 21:03:22.089131] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8f570 (9): Bad file descriptor 00:22:45.319 [2024-11-26 21:03:22.093390] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:45.319 [2024-11-26 21:03:22.253255] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
00:22:45.319 7562.50 IOPS, 29.54 MiB/s [2024-11-26T20:03:36.257Z] 7815.00 IOPS, 30.53 MiB/s [2024-11-26T20:03:36.257Z] 7979.75 IOPS, 31.17 MiB/s [2024-11-26T20:03:36.257Z] 8050.60 IOPS, 31.45 MiB/s [2024-11-26T20:03:36.257Z] [2024-11-26 21:03:25.940722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:117664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.319 [2024-11-26 21:03:25.940784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.319 [2024-11-26 21:03:25.940824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:117672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.319 [2024-11-26 21:03:25.940851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.319 [2024-11-26 21:03:25.940879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:117680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.319 [2024-11-26 21:03:25.940903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.319 [2024-11-26 21:03:25.940929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:117688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.319 [2024-11-26 21:03:25.940953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.319 [2024-11-26 21:03:25.940993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:117696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.319 [2024-11-26 21:03:25.941030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.319 [2024-11-26 
21:03:25.941056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:117704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.319 [2024-11-26 21:03:25.941079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.319 [2024-11-26 21:03:25.941106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:117712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.319 [2024-11-26 21:03:25.941129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.319 [2024-11-26 21:03:25.941155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:117720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.319 [2024-11-26 21:03:25.941178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.319 [2024-11-26 21:03:25.941205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:117728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.319 [2024-11-26 21:03:25.941234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.319 [2024-11-26 21:03:25.941262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:117736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.319 [2024-11-26 21:03:25.941285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.319 [2024-11-26 21:03:25.941310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:117744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.319 [2024-11-26 21:03:25.941332] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 [2024-11-26 21:03:25.941358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:117752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.320 [2024-11-26 21:03:25.941382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 [2024-11-26 21:03:25.941408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:117760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.320 [2024-11-26 21:03:25.941430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 [2024-11-26 21:03:25.941453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:117768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.320 [2024-11-26 21:03:25.941476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 [2024-11-26 21:03:25.941500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:117776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.320 [2024-11-26 21:03:25.941524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 [2024-11-26 21:03:25.941548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:117784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.320 [2024-11-26 21:03:25.941572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 [2024-11-26 21:03:25.941596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 
nsid:1 lba:117792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.320 [2024-11-26 21:03:25.941621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 [2024-11-26 21:03:25.941646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.320 [2024-11-26 21:03:25.941694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 [2024-11-26 21:03:25.941722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:118056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.320 [2024-11-26 21:03:25.941747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 [2024-11-26 21:03:25.941773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:118064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.320 [2024-11-26 21:03:25.941798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 [2024-11-26 21:03:25.941823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:118072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.320 [2024-11-26 21:03:25.941848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 [2024-11-26 21:03:25.941879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:118080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.320 [2024-11-26 21:03:25.941904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:45.320 [2024-11-26 21:03:25.941929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:118088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.320 [2024-11-26 21:03:25.941955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 [2024-11-26 21:03:25.941980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:118096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.320 [2024-11-26 21:03:25.942005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 [2024-11-26 21:03:25.942030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:118104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.320 [2024-11-26 21:03:25.942054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 [2024-11-26 21:03:25.942080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:118112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.320 [2024-11-26 21:03:25.942104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 [2024-11-26 21:03:25.942131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:118120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.320 [2024-11-26 21:03:25.942154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 [2024-11-26 21:03:25.942181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:118128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.320 [2024-11-26 21:03:25.942204] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 [2024-11-26 21:03:25.942231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:118136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.320 [2024-11-26 21:03:25.942254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 [2024-11-26 21:03:25.942281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:118144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.320 [2024-11-26 21:03:25.942304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 [2024-11-26 21:03:25.942331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:118152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.320 [2024-11-26 21:03:25.942356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 [2024-11-26 21:03:25.942382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:118160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.320 [2024-11-26 21:03:25.942405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 [2024-11-26 21:03:25.942431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:118168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.320 [2024-11-26 21:03:25.942455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 [2024-11-26 21:03:25.942481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 
lba:118176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.320 [2024-11-26 21:03:25.942512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 [2024-11-26 21:03:25.942540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:118184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.320 [2024-11-26 21:03:25.942563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 [2024-11-26 21:03:25.942590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:118192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.320 [2024-11-26 21:03:25.942613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 [2024-11-26 21:03:25.942640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:118200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.320 [2024-11-26 21:03:25.942663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 [2024-11-26 21:03:25.942711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:118208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.320 [2024-11-26 21:03:25.942737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 [2024-11-26 21:03:25.942763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:118216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.320 [2024-11-26 21:03:25.942788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 
[2024-11-26 21:03:25.942814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:118224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.320 [2024-11-26 21:03:25.942839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 [2024-11-26 21:03:25.942866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:118232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.320 [2024-11-26 21:03:25.942891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 [2024-11-26 21:03:25.942917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:118240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.320 [2024-11-26 21:03:25.942940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 [2024-11-26 21:03:25.942968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:118248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.320 [2024-11-26 21:03:25.943014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 [2024-11-26 21:03:25.943041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:118256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.320 [2024-11-26 21:03:25.943072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 [2024-11-26 21:03:25.943098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:118264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.320 [2024-11-26 21:03:25.943122] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 [2024-11-26 21:03:25.943147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:118272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.320 [2024-11-26 21:03:25.943170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 [2024-11-26 21:03:25.943201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:118280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.320 [2024-11-26 21:03:25.943225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 [2024-11-26 21:03:25.943251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:118288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.320 [2024-11-26 21:03:25.943276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 [2024-11-26 21:03:25.943301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:118296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.320 [2024-11-26 21:03:25.943326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 [2024-11-26 21:03:25.943351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:118304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.320 [2024-11-26 21:03:25.943375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-11-26 21:03:25.943401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 
lba:118312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.321 [2024-11-26 21:03:25.943424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-11-26 21:03:25.943450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:118320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.321 [2024-11-26 21:03:25.943474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-11-26 21:03:25.943499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:118328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.321 [2024-11-26 21:03:25.943523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-11-26 21:03:25.943548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:118336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.321 [2024-11-26 21:03:25.943570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-11-26 21:03:25.943597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:118344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.321 [2024-11-26 21:03:25.943619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-11-26 21:03:25.943647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:118352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.321 [2024-11-26 21:03:25.943695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 
[2024-11-26 21:03:25.943726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:118360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.321 [2024-11-26 21:03:25.943750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-11-26 21:03:25.943778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:118368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.321 [2024-11-26 21:03:25.943801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-11-26 21:03:25.943826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:118376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.321 [2024-11-26 21:03:25.943857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-11-26 21:03:25.943884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:118384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.321 [2024-11-26 21:03:25.943908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-11-26 21:03:25.943933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:118392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.321 [2024-11-26 21:03:25.943959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-11-26 21:03:25.944001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:118400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.321 [2024-11-26 21:03:25.944025] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-11-26 21:03:25.944050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:118408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.321 [2024-11-26 21:03:25.944074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-11-26 21:03:25.944099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:118416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.321 [2024-11-26 21:03:25.944123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-11-26 21:03:25.944159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:118424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.321 [2024-11-26 21:03:25.944182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-11-26 21:03:25.944210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:117800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.321 [2024-11-26 21:03:25.944233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-11-26 21:03:25.944259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:117808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.321 [2024-11-26 21:03:25.944282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-11-26 21:03:25.944310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 
lba:117816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.321 [2024-11-26 21:03:25.944333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-11-26 21:03:25.944359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:117824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.321 [2024-11-26 21:03:25.944381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-11-26 21:03:25.944408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:117832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.321 [2024-11-26 21:03:25.944431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-11-26 21:03:25.944459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:117840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.321 [2024-11-26 21:03:25.944483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-11-26 21:03:25.944513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:117848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.321 [2024-11-26 21:03:25.944538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-11-26 21:03:25.944563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:118432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.321 [2024-11-26 21:03:25.944588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 
[2024-11-26 21:03:25.944613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:118440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.321 [2024-11-26 21:03:25.944637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-11-26 21:03:25.944661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:118448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.321 [2024-11-26 21:03:25.944711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-11-26 21:03:25.944741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:118456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.321 [2024-11-26 21:03:25.944766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-11-26 21:03:25.944793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:118464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.321 [2024-11-26 21:03:25.944818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-11-26 21:03:25.944845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:118472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.321 [2024-11-26 21:03:25.944869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-11-26 21:03:25.944897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:118480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.321 [2024-11-26 21:03:25.944921] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-11-26 21:03:25.944949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:118488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.321 [2024-11-26 21:03:25.944973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-11-26 21:03:25.945023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:118496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.321 [2024-11-26 21:03:25.945048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-11-26 21:03:25.945074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:118504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.321 [2024-11-26 21:03:25.945097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-11-26 21:03:25.945122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:118512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.321 [2024-11-26 21:03:25.945145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-11-26 21:03:25.945170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:118520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.321 [2024-11-26 21:03:25.945195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-11-26 21:03:25.945225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 
lba:118528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.321 [2024-11-26 21:03:25.945250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-11-26 21:03:25.945274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:118536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.321 [2024-11-26 21:03:25.945299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-11-26 21:03:25.945344] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.321 [2024-11-26 21:03:25.945369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118544 len:8 PRP1 0x0 PRP2 0x0 00:22:45.321 [2024-11-26 21:03:25.945390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-11-26 21:03:25.945488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.321 [2024-11-26 21:03:25.945517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-11-26 21:03:25.945544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.321 [2024-11-26 21:03:25.945566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-11-26 21:03:25.945591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.322 [2024-11-26 21:03:25.945614] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.322 [2024-11-26 21:03:25.945638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.322 [2024-11-26 21:03:25.945662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.322 [2024-11-26 21:03:25.945702] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8f570 is same with the state(6) to be set 00:22:45.322 [2024-11-26 21:03:25.945941] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.322 [2024-11-26 21:03:25.945967] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.322 [2024-11-26 21:03:25.946001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118552 len:8 PRP1 0x0 PRP2 0x0 00:22:45.322 [2024-11-26 21:03:25.946024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.322 [2024-11-26 21:03:25.946048] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.322 [2024-11-26 21:03:25.946069] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.322 [2024-11-26 21:03:25.946088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118560 len:8 PRP1 0x0 PRP2 0x0 00:22:45.322 [2024-11-26 21:03:25.946109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.322 [2024-11-26 21:03:25.946132] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.322 [2024-11-26 21:03:25.946157] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command 
completed manually: 00:22:45.322 [2024-11-26 21:03:25.946178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118568 len:8 PRP1 0x0 PRP2 0x0 00:22:45.322 [2024-11-26 21:03:25.946204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.322 [2024-11-26 21:03:25.946229] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.322 [2024-11-26 21:03:25.946247] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.322 [2024-11-26 21:03:25.946266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118576 len:8 PRP1 0x0 PRP2 0x0 00:22:45.322 [2024-11-26 21:03:25.946289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.322 [2024-11-26 21:03:25.946310] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.322 [2024-11-26 21:03:25.946331] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.322 [2024-11-26 21:03:25.946349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118584 len:8 PRP1 0x0 PRP2 0x0 00:22:45.322 [2024-11-26 21:03:25.946369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.322 [2024-11-26 21:03:25.946393] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.322 [2024-11-26 21:03:25.946411] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.322 [2024-11-26 21:03:25.946429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118592 len:8 PRP1 0x0 PRP2 0x0 00:22:45.322 [2024-11-26 21:03:25.946452] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.322 [2024-11-26 21:03:25.946473] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.322 [2024-11-26 21:03:25.946493] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.322 [2024-11-26 21:03:25.946512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118600 len:8 PRP1 0x0 PRP2 0x0 00:22:45.322 [2024-11-26 21:03:25.946532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.322 [2024-11-26 21:03:25.946556] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.322 [2024-11-26 21:03:25.946573] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.322 [2024-11-26 21:03:25.946591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118608 len:8 PRP1 0x0 PRP2 0x0 00:22:45.322 [2024-11-26 21:03:25.946613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.322 [2024-11-26 21:03:25.946634] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.322 [2024-11-26 21:03:25.946655] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.322 [2024-11-26 21:03:25.946679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118616 len:8 PRP1 0x0 PRP2 0x0 00:22:45.322 [2024-11-26 21:03:25.946725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.322 [2024-11-26 21:03:25.946750] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.322 
[2024-11-26 21:03:25.946769] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.322 [2024-11-26 21:03:25.946788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118624 len:8 PRP1 0x0 PRP2 0x0 00:22:45.322 [2024-11-26 21:03:25.946811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.322 [2024-11-26 21:03:25.946833] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.322 [2024-11-26 21:03:25.946853] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.322 [2024-11-26 21:03:25.946882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118632 len:8 PRP1 0x0 PRP2 0x0 00:22:45.322 [2024-11-26 21:03:25.946905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.322 [2024-11-26 21:03:25.946929] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.322 [2024-11-26 21:03:25.946946] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.322 [2024-11-26 21:03:25.946966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118640 len:8 PRP1 0x0 PRP2 0x0 00:22:45.322 [2024-11-26 21:03:25.946988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.322 [2024-11-26 21:03:25.947023] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.322 [2024-11-26 21:03:25.947049] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.322 [2024-11-26 21:03:25.947066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:118648 len:8 PRP1 0x0 PRP2 0x0 00:22:45.322 [2024-11-26 21:03:25.947086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.322 [2024-11-26 21:03:25.947110] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.322 [2024-11-26 21:03:25.947128] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.322 [2024-11-26 21:03:25.947148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118656 len:8 PRP1 0x0 PRP2 0x0 00:22:45.322 [2024-11-26 21:03:25.947170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.322 [2024-11-26 21:03:25.947191] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.322 [2024-11-26 21:03:25.947211] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.322 [2024-11-26 21:03:25.947229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118664 len:8 PRP1 0x0 PRP2 0x0 00:22:45.322 [2024-11-26 21:03:25.947249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.322 [2024-11-26 21:03:25.947272] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.322 [2024-11-26 21:03:25.947289] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.322 [2024-11-26 21:03:25.947307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118672 len:8 PRP1 0x0 PRP2 0x0 00:22:45.322 [2024-11-26 21:03:25.947329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.322 [2024-11-26 21:03:25.947350] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.322 [2024-11-26 21:03:25.947371] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.322 [2024-11-26 21:03:25.947388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118680 len:8 PRP1 0x0 PRP2 0x0 00:22:45.322 [2024-11-26 21:03:25.947409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.322 [2024-11-26 21:03:25.947431] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.322 [2024-11-26 21:03:25.947449] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.322 [2024-11-26 21:03:25.947468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117856 len:8 PRP1 0x0 PRP2 0x0 00:22:45.322 [2024-11-26 21:03:25.947489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.322 [2024-11-26 21:03:25.947511] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.322 [2024-11-26 21:03:25.947537] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.322 [2024-11-26 21:03:25.947556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117864 len:8 PRP1 0x0 PRP2 0x0 00:22:45.322 [2024-11-26 21:03:25.947578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.322 [2024-11-26 21:03:25.947601] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.322 [2024-11-26 21:03:25.947618] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.322 [2024-11-26 
21:03:25.947638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117872 len:8 PRP1 0x0 PRP2 0x0 00:22:45.322 [2024-11-26 21:03:25.947658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.322 [2024-11-26 21:03:25.947684] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.322 [2024-11-26 21:03:25.947731] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.322 [2024-11-26 21:03:25.947750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117880 len:8 PRP1 0x0 PRP2 0x0 00:22:45.322 [2024-11-26 21:03:25.947773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.322 [2024-11-26 21:03:25.947795] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.322 [2024-11-26 21:03:25.947815] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.322 [2024-11-26 21:03:25.947836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117888 len:8 PRP1 0x0 PRP2 0x0 00:22:45.322 [2024-11-26 21:03:25.947857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.322 [2024-11-26 21:03:25.947880] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.323 [2024-11-26 21:03:25.947900] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.323 [2024-11-26 21:03:25.947918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117896 len:8 PRP1 0x0 PRP2 0x0 00:22:45.323 [2024-11-26 21:03:25.947941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.323 [2024-11-26 21:03:25.947963] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.323 [2024-11-26 21:03:25.947990] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.323 [2024-11-26 21:03:25.948024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117904 len:8 PRP1 0x0 PRP2 0x0 00:22:45.323 [2024-11-26 21:03:25.948044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.323 [2024-11-26 21:03:25.948067] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.323 [2024-11-26 21:03:25.948085] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.323 [2024-11-26 21:03:25.948102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117912 len:8 PRP1 0x0 PRP2 0x0 00:22:45.323 [2024-11-26 21:03:25.948125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.323 [2024-11-26 21:03:25.948145] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.323 [2024-11-26 21:03:25.948164] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.323 [2024-11-26 21:03:25.948185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117920 len:8 PRP1 0x0 PRP2 0x0 00:22:45.323 [2024-11-26 21:03:25.948205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.323 [2024-11-26 21:03:25.948233] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.323 [2024-11-26 21:03:25.948252] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.323 [2024-11-26 21:03:25.948270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117928 len:8 PRP1 0x0 PRP2 0x0 00:22:45.323 [2024-11-26 21:03:25.948293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.323 [2024-11-26 21:03:25.948314] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.323 [2024-11-26 21:03:25.948333] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.323 [2024-11-26 21:03:25.948351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117936 len:8 PRP1 0x0 PRP2 0x0 00:22:45.323 [2024-11-26 21:03:25.948371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.323 [2024-11-26 21:03:25.948395] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.323 [2024-11-26 21:03:25.948413] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.323 [2024-11-26 21:03:25.948432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117944 len:8 PRP1 0x0 PRP2 0x0 00:22:45.323 [2024-11-26 21:03:25.948453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.323 [2024-11-26 21:03:25.948474] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.323 [2024-11-26 21:03:25.948494] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.323 [2024-11-26 21:03:25.948513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117952 len:8 PRP1 0x0 PRP2 0x0 00:22:45.323 
[2024-11-26 21:03:25.948533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.323 [2024-11-26 21:03:25.948557] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.323 [2024-11-26 21:03:25.948575] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.323 [2024-11-26 21:03:25.948593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117960 len:8 PRP1 0x0 PRP2 0x0 00:22:45.323 [2024-11-26 21:03:25.948615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.323 [2024-11-26 21:03:25.948636] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.323 [2024-11-26 21:03:25.948656] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.323 [2024-11-26 21:03:25.948717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117968 len:8 PRP1 0x0 PRP2 0x0 00:22:45.323 [2024-11-26 21:03:25.948744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.323 [2024-11-26 21:03:25.948768] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.323 [2024-11-26 21:03:25.948787] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.323 [2024-11-26 21:03:25.948808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117976 len:8 PRP1 0x0 PRP2 0x0 00:22:45.323 [2024-11-26 21:03:25.948828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.323 [2024-11-26 21:03:25.948851] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:22:45.323 [2024-11-26 21:03:25.948871] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.323 [2024-11-26 21:03:25.948888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117984 len:8 PRP1 0x0 PRP2 0x0 00:22:45.323 [2024-11-26 21:03:25.948917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.323 [2024-11-26 21:03:25.948940] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.323 [2024-11-26 21:03:25.948960] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.323 [2024-11-26 21:03:25.948980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117992 len:8 PRP1 0x0 PRP2 0x0 00:22:45.323 [2024-11-26 21:03:25.949023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.323 [2024-11-26 21:03:25.949046] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.323 [2024-11-26 21:03:25.949064] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.323 [2024-11-26 21:03:25.949081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:118000 len:8 PRP1 0x0 PRP2 0x0 00:22:45.323 [2024-11-26 21:03:25.949103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.323 [2024-11-26 21:03:25.949124] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.323 [2024-11-26 21:03:25.949142] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.323 [2024-11-26 21:03:25.949162] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:118008 len:8 PRP1 0x0 PRP2 0x0 00:22:45.323 [2024-11-26 21:03:25.949181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.323 [2024-11-26 21:03:25.949203] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.323 [2024-11-26 21:03:25.949222] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.323 [2024-11-26 21:03:25.949239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:118016 len:8 PRP1 0x0 PRP2 0x0 00:22:45.323 [2024-11-26 21:03:25.949261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.323 [2024-11-26 21:03:25.949283] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.323 [2024-11-26 21:03:25.949301] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.323 [2024-11-26 21:03:25.949321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:118024 len:8 PRP1 0x0 PRP2 0x0 00:22:45.323 [2024-11-26 21:03:25.949340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.323 [2024-11-26 21:03:25.949362] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.323 [2024-11-26 21:03:25.949382] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.323 [2024-11-26 21:03:25.949400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:118032 len:8 PRP1 0x0 PRP2 0x0 00:22:45.323 [2024-11-26 21:03:25.949422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:45.323 [2024-11-26 21:03:25.949443] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.323 [2024-11-26 21:03:25.949462] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.323 [2024-11-26 21:03:25.949481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:118040 len:8 PRP1 0x0 PRP2 0x0 00:22:45.323 [2024-11-26 21:03:25.949501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.323 [2024-11-26 21:03:25.949524] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.323 [2024-11-26 21:03:25.949544] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.323 [2024-11-26 21:03:25.949567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117664 len:8 PRP1 0x0 PRP2 0x0 00:22:45.323 [2024-11-26 21:03:25.949590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.323 [2024-11-26 21:03:25.949611] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.323 [2024-11-26 21:03:25.949636] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.323 [2024-11-26 21:03:25.949656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117672 len:8 PRP1 0x0 PRP2 0x0 00:22:45.323 [2024-11-26 21:03:25.949699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.323 [2024-11-26 21:03:25.949725] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.323 [2024-11-26 21:03:25.949744] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:22:45.323 [2024-11-26 21:03:25.949763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117680 len:8 PRP1 0x0 PRP2 0x0 00:22:45.323 [2024-11-26 21:03:25.949786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.324 [2024-11-26 21:03:25.949808] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.324 [2024-11-26 21:03:25.949827] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.324 [2024-11-26 21:03:25.949847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117688 len:8 PRP1 0x0 PRP2 0x0 00:22:45.324 [2024-11-26 21:03:25.949867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.324 [2024-11-26 21:03:25.949892] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.324 [2024-11-26 21:03:25.949911] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.324 [2024-11-26 21:03:25.949929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117696 len:8 PRP1 0x0 PRP2 0x0 00:22:45.324 [2024-11-26 21:03:25.949952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.324 [2024-11-26 21:03:25.949999] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.324 [2024-11-26 21:03:25.950020] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.324 [2024-11-26 21:03:25.950047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117704 len:8 PRP1 0x0 PRP2 0x0 00:22:45.324 [2024-11-26 21:03:25.950068] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.324 [2024-11-26 21:03:25.950092] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.324 [2024-11-26 21:03:25.950110] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.324 [2024-11-26 21:03:25.950132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117712 len:8 PRP1 0x0 PRP2 0x0 00:22:45.324 [2024-11-26 21:03:25.950153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.324 [2024-11-26 21:03:25.950176] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.324 [2024-11-26 21:03:25.950195] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.324 [2024-11-26 21:03:25.950213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117720 len:8 PRP1 0x0 PRP2 0x0 00:22:45.324 [2024-11-26 21:03:25.950235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.324 [2024-11-26 21:03:25.950262] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.324 [2024-11-26 21:03:25.950283] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.324 [2024-11-26 21:03:25.950302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117728 len:8 PRP1 0x0 PRP2 0x0 00:22:45.324 [2024-11-26 21:03:25.950322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.324 [2024-11-26 21:03:25.950346] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.324 
[2024-11-26 21:03:25.950372] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.324 [2024-11-26 21:03:25.950393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117736 len:8 PRP1 0x0 PRP2 0x0 00:22:45.324 [2024-11-26 21:03:25.950415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.324 [2024-11-26 21:03:25.950437] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.324 [2024-11-26 21:03:25.950458] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.324 [2024-11-26 21:03:25.950477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117744 len:8 PRP1 0x0 PRP2 0x0 00:22:45.324 [2024-11-26 21:03:25.950498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.324 [2024-11-26 21:03:25.950521] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.324 [2024-11-26 21:03:25.950539] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.324 [2024-11-26 21:03:25.950560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117752 len:8 PRP1 0x0 PRP2 0x0 00:22:45.324 [2024-11-26 21:03:25.950581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.324 [2024-11-26 21:03:25.950603] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.324 [2024-11-26 21:03:25.950622] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.324 [2024-11-26 21:03:25.950640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:117760 len:8 PRP1 0x0 PRP2 0x0 00:22:45.324 [2024-11-26 21:03:25.950677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.324 [2024-11-26 21:03:25.950710] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.324 [2024-11-26 21:03:25.950731] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.324 [2024-11-26 21:03:25.950753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117768 len:8 PRP1 0x0 PRP2 0x0 00:22:45.324 [2024-11-26 21:03:25.950774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.324 [2024-11-26 21:03:25.950798] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.324 [2024-11-26 21:03:25.950818] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.324 [2024-11-26 21:03:25.950838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117776 len:8 PRP1 0x0 PRP2 0x0 00:22:45.324 [2024-11-26 21:03:25.950862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.324 [2024-11-26 21:03:25.950884] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.324 [2024-11-26 21:03:25.950904] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.324 [2024-11-26 21:03:25.950924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117784 len:8 PRP1 0x0 PRP2 0x0 00:22:45.324 [2024-11-26 21:03:25.950951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.324 [2024-11-26 21:03:25.950977] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.324 [2024-11-26 21:03:25.951005] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.324 [2024-11-26 21:03:25.951040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117792 len:8 PRP1 0x0 PRP2 0x0 00:22:45.324 [2024-11-26 21:03:25.951062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.324 [2024-11-26 21:03:25.951084] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.324 [2024-11-26 21:03:25.951111] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.324 [2024-11-26 21:03:25.951130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118048 len:8 PRP1 0x0 PRP2 0x0 00:22:45.324 [2024-11-26 21:03:25.951152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.324 [2024-11-26 21:03:25.951174] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.324 [2024-11-26 21:03:25.951194] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.324 [2024-11-26 21:03:25.951215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118056 len:8 PRP1 0x0 PRP2 0x0 00:22:45.324 [2024-11-26 21:03:25.951235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.324 [2024-11-26 21:03:25.951258] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.324 [2024-11-26 21:03:25.951278] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.324 [2024-11-26 
21:03:25.951296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118064 len:8 PRP1 0x0 PRP2 0x0 00:22:45.324 [2024-11-26 21:03:25.951319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.324 [2024-11-26 21:03:25.951341] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.324 [2024-11-26 21:03:25.951361] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.324 [2024-11-26 21:03:25.951380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118072 len:8 PRP1 0x0 PRP2 0x0 00:22:45.324 [2024-11-26 21:03:25.951400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.324 [2024-11-26 21:03:25.951423] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.324 [2024-11-26 21:03:25.957471] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.324 [2024-11-26 21:03:25.957498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118080 len:8 PRP1 0x0 PRP2 0x0 00:22:45.324 [2024-11-26 21:03:25.957522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.324 [2024-11-26 21:03:25.957546] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.324 [2024-11-26 21:03:25.957565] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.324 [2024-11-26 21:03:25.957585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118088 len:8 PRP1 0x0 PRP2 0x0 00:22:45.324 [2024-11-26 21:03:25.957605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.324 [2024-11-26 21:03:25.957628] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.324 [2024-11-26 21:03:25.957647] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.324 [2024-11-26 21:03:25.957671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118096 len:8 PRP1 0x0 PRP2 0x0 00:22:45.324 [2024-11-26 21:03:25.957719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.324 [2024-11-26 21:03:25.957745] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.324 [2024-11-26 21:03:25.957766] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.324 [2024-11-26 21:03:25.957786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118104 len:8 PRP1 0x0 PRP2 0x0 00:22:45.324 [2024-11-26 21:03:25.957807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.324 [2024-11-26 21:03:25.957831] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.324 [2024-11-26 21:03:25.957850] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.325 [2024-11-26 21:03:25.957869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118112 len:8 PRP1 0x0 PRP2 0x0 00:22:45.325 [2024-11-26 21:03:25.957891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.325 [2024-11-26 21:03:25.957913] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.325 [2024-11-26 21:03:25.957933] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.325 [2024-11-26 21:03:25.957952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118120 len:8 PRP1 0x0 PRP2 0x0 00:22:45.325 [2024-11-26 21:03:25.957972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.325 [2024-11-26 21:03:25.958016] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.325 [2024-11-26 21:03:25.958034] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.325 [2024-11-26 21:03:25.958053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118128 len:8 PRP1 0x0 PRP2 0x0 00:22:45.325 [2024-11-26 21:03:25.958074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.325 [2024-11-26 21:03:25.958095] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.325 [2024-11-26 21:03:25.958116] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.325 [2024-11-26 21:03:25.958134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118136 len:8 PRP1 0x0 PRP2 0x0 00:22:45.325 [2024-11-26 21:03:25.958154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.325 [2024-11-26 21:03:25.958178] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.325 [2024-11-26 21:03:25.958196] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.325 [2024-11-26 21:03:25.958215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118144 len:8 PRP1 0x0 PRP2 0x0 
00:22:45.325 [2024-11-26 21:03:25.958236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.325 [2024-11-26 21:03:25.958257] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.325 [2024-11-26 21:03:25.958277] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.325 [2024-11-26 21:03:25.958295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118152 len:8 PRP1 0x0 PRP2 0x0 00:22:45.325 [2024-11-26 21:03:25.958315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.325 [2024-11-26 21:03:25.958338] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.325 [2024-11-26 21:03:25.958361] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.325 [2024-11-26 21:03:25.958383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118160 len:8 PRP1 0x0 PRP2 0x0 00:22:45.325 [2024-11-26 21:03:25.958403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.325 [2024-11-26 21:03:25.958423] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.325 [2024-11-26 21:03:25.958444] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.325 [2024-11-26 21:03:25.958462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118168 len:8 PRP1 0x0 PRP2 0x0 00:22:45.325 [2024-11-26 21:03:25.958482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.325 [2024-11-26 21:03:25.958504] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.325 [2024-11-26 21:03:25.958522] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.325 [2024-11-26 21:03:25.958541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118176 len:8 PRP1 0x0 PRP2 0x0 00:22:45.325 [2024-11-26 21:03:25.958561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.325 [2024-11-26 21:03:25.958582] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.325 [2024-11-26 21:03:25.958602] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.325 [2024-11-26 21:03:25.958619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118184 len:8 PRP1 0x0 PRP2 0x0 00:22:45.325 [2024-11-26 21:03:25.958640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.325 [2024-11-26 21:03:25.958662] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.325 [2024-11-26 21:03:25.958705] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.325 [2024-11-26 21:03:25.958727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118192 len:8 PRP1 0x0 PRP2 0x0 00:22:45.325 [2024-11-26 21:03:25.958749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.325 [2024-11-26 21:03:25.958771] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.325 [2024-11-26 21:03:25.958792] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.325 [2024-11-26 21:03:25.958810] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118200 len:8 PRP1 0x0 PRP2 0x0 00:22:45.325 [2024-11-26 21:03:25.958831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.325 [2024-11-26 21:03:25.958855] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.325 [2024-11-26 21:03:25.958873] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.325 [2024-11-26 21:03:25.958893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118208 len:8 PRP1 0x0 PRP2 0x0 00:22:45.325 [2024-11-26 21:03:25.958914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.325 [2024-11-26 21:03:25.958936] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.325 [2024-11-26 21:03:25.958957] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.325 [2024-11-26 21:03:25.958988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118216 len:8 PRP1 0x0 PRP2 0x0 00:22:45.325 [2024-11-26 21:03:25.959013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.325 [2024-11-26 21:03:25.959040] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.325 [2024-11-26 21:03:25.959059] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.325 [2024-11-26 21:03:25.959079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118224 len:8 PRP1 0x0 PRP2 0x0 00:22:45.325 [2024-11-26 21:03:25.959099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.325 [2024-11-26 21:03:25.959121] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.325 [2024-11-26 21:03:25.959139] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.325 [2024-11-26 21:03:25.959156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118232 len:8 PRP1 0x0 PRP2 0x0 00:22:45.325 [2024-11-26 21:03:25.959178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.325 [2024-11-26 21:03:25.959199] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.325 [2024-11-26 21:03:25.959217] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.325 [2024-11-26 21:03:25.959238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118240 len:8 PRP1 0x0 PRP2 0x0 00:22:45.325 [2024-11-26 21:03:25.959258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.325 [2024-11-26 21:03:25.959279] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.325 [2024-11-26 21:03:25.959298] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.325 [2024-11-26 21:03:25.959317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118248 len:8 PRP1 0x0 PRP2 0x0 00:22:45.325 [2024-11-26 21:03:25.959337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.325 [2024-11-26 21:03:25.959360] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.325 [2024-11-26 21:03:25.959378] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.325 [2024-11-26 21:03:25.959397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118256 len:8 PRP1 0x0 PRP2 0x0 00:22:45.325 [2024-11-26 21:03:25.959417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.325 [2024-11-26 21:03:25.959438] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.325 [2024-11-26 21:03:25.959458] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.325 [2024-11-26 21:03:25.959476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118264 len:8 PRP1 0x0 PRP2 0x0 00:22:45.325 [2024-11-26 21:03:25.959495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.325 [2024-11-26 21:03:25.959517] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.325 [2024-11-26 21:03:25.959535] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.325 [2024-11-26 21:03:25.959554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118272 len:8 PRP1 0x0 PRP2 0x0 00:22:45.325 [2024-11-26 21:03:25.959574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.325 [2024-11-26 21:03:25.959594] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.325 [2024-11-26 21:03:25.959614] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.325 [2024-11-26 21:03:25.959631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118280 len:8 PRP1 0x0 PRP2 0x0 
00:22:45.325 [2024-11-26 21:03:25.959658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.325 [2024-11-26 21:03:25.959718] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.325 [2024-11-26 21:03:25.959740] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.325 [2024-11-26 21:03:25.959759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118288 len:8 PRP1 0x0 PRP2 0x0 00:22:45.325 [2024-11-26 21:03:25.959779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.325 [2024-11-26 21:03:25.959803] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.325 [2024-11-26 21:03:25.959822] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.325 [2024-11-26 21:03:25.959841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118296 len:8 PRP1 0x0 PRP2 0x0 00:22:45.326 [2024-11-26 21:03:25.959863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.326 [2024-11-26 21:03:25.959885] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.326 [2024-11-26 21:03:25.959905] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.326 [2024-11-26 21:03:25.959925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118304 len:8 PRP1 0x0 PRP2 0x0 00:22:45.326 [2024-11-26 21:03:25.959945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.326 [2024-11-26 21:03:25.959967] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.326 [2024-11-26 21:03:25.960010] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.326 [2024-11-26 21:03:25.960027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118312 len:8 PRP1 0x0 PRP2 0x0 00:22:45.326 [2024-11-26 21:03:25.960048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.326 [2024-11-26 21:03:25.960069] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.326 [2024-11-26 21:03:25.960086] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.326 [2024-11-26 21:03:25.960105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118320 len:8 PRP1 0x0 PRP2 0x0 00:22:45.326 [2024-11-26 21:03:25.960124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.326 [2024-11-26 21:03:25.960146] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.326 [2024-11-26 21:03:25.960163] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.326 [2024-11-26 21:03:25.960180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118328 len:8 PRP1 0x0 PRP2 0x0 00:22:45.326 [2024-11-26 21:03:25.960201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.326 [2024-11-26 21:03:25.960222] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.326 [2024-11-26 21:03:25.960240] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.326 [2024-11-26 21:03:25.960258] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118336 len:8 PRP1 0x0 PRP2 0x0 00:22:45.326 [2024-11-26 21:03:25.960277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.326 [2024-11-26 21:03:25.960299] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.326 [2024-11-26 21:03:25.960322] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.326 [2024-11-26 21:03:25.960341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118344 len:8 PRP1 0x0 PRP2 0x0 00:22:45.326 [2024-11-26 21:03:25.960361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.326 [2024-11-26 21:03:25.960381] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.326 [2024-11-26 21:03:25.960400] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.326 [2024-11-26 21:03:25.960419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118352 len:8 PRP1 0x0 PRP2 0x0 00:22:45.326 [2024-11-26 21:03:25.960439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.326 [2024-11-26 21:03:25.960461] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.326 [2024-11-26 21:03:25.960478] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.326 [2024-11-26 21:03:25.960495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118360 len:8 PRP1 0x0 PRP2 0x0 00:22:45.326 [2024-11-26 21:03:25.960517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.326 [2024-11-26 21:03:25.960537] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.326 [2024-11-26 21:03:25.960557] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.326 [2024-11-26 21:03:25.960576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118368 len:8 PRP1 0x0 PRP2 0x0 00:22:45.326 [2024-11-26 21:03:25.960596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.326 [2024-11-26 21:03:25.960619] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.326 [2024-11-26 21:03:25.960636] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.326 [2024-11-26 21:03:25.960655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118376 len:8 PRP1 0x0 PRP2 0x0 00:22:45.326 [2024-11-26 21:03:25.960701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.326 [2024-11-26 21:03:25.960726] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.326 [2024-11-26 21:03:25.960746] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.326 [2024-11-26 21:03:25.960764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118384 len:8 PRP1 0x0 PRP2 0x0 00:22:45.326 [2024-11-26 21:03:25.960785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.326 [2024-11-26 21:03:25.960808] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.326 [2024-11-26 21:03:25.960825] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.326 [2024-11-26 21:03:25.960845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118392 len:8 PRP1 0x0 PRP2 0x0 00:22:45.326 [2024-11-26 21:03:25.960867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.326 [2024-11-26 21:03:25.960890] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.326 [2024-11-26 21:03:25.960911] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.326 [2024-11-26 21:03:25.960931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118400 len:8 PRP1 0x0 PRP2 0x0 00:22:45.326 [2024-11-26 21:03:25.960954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.326 [2024-11-26 21:03:25.960996] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.326 [2024-11-26 21:03:25.961017] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.326 [2024-11-26 21:03:25.961037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118408 len:8 PRP1 0x0 PRP2 0x0 00:22:45.326 [2024-11-26 21:03:25.961057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.326 [2024-11-26 21:03:25.961080] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.326 [2024-11-26 21:03:25.961100] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.326 [2024-11-26 21:03:25.961119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118416 len:8 PRP1 0x0 PRP2 0x0 
00:22:45.326 [2024-11-26 21:03:25.961140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.326 [2024-11-26 21:03:25.961159] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.326 [2024-11-26 21:03:25.961178] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.326 [2024-11-26 21:03:25.961196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118424 len:8 PRP1 0x0 PRP2 0x0 00:22:45.326 [2024-11-26 21:03:25.961214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.326 [2024-11-26 21:03:25.961237] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.326 [2024-11-26 21:03:25.961260] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.326 [2024-11-26 21:03:25.961281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117800 len:8 PRP1 0x0 PRP2 0x0 00:22:45.326 [2024-11-26 21:03:25.961301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.326 [2024-11-26 21:03:25.961321] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.326 [2024-11-26 21:03:25.961340] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.326 [2024-11-26 21:03:25.961357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117808 len:8 PRP1 0x0 PRP2 0x0 00:22:45.326 [2024-11-26 21:03:25.961376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.326 [2024-11-26 21:03:25.961398] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.326 [2024-11-26 21:03:25.961422] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.326 [2024-11-26 21:03:25.961442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117816 len:8 PRP1 0x0 PRP2 0x0 00:22:45.326 [2024-11-26 21:03:25.961462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.326 [2024-11-26 21:03:25.961483] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.326 [2024-11-26 21:03:25.961501] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.326 [2024-11-26 21:03:25.961519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117824 len:8 PRP1 0x0 PRP2 0x0 00:22:45.326 [2024-11-26 21:03:25.961539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.326 [2024-11-26 21:03:25.961561] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.326 [2024-11-26 21:03:25.961578] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.326 [2024-11-26 21:03:25.961596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117832 len:8 PRP1 0x0 PRP2 0x0 00:22:45.326 [2024-11-26 21:03:25.961622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.326 [2024-11-26 21:03:25.961644] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.326 [2024-11-26 21:03:25.961663] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.326 [2024-11-26 21:03:25.961705] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117840 len:8 PRP1 0x0 PRP2 0x0 00:22:45.326 [2024-11-26 21:03:25.961744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.326 [2024-11-26 21:03:25.961766] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.326 [2024-11-26 21:03:25.961785] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.326 [2024-11-26 21:03:25.961805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117848 len:8 PRP1 0x0 PRP2 0x0 00:22:45.326 [2024-11-26 21:03:25.961826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.327 [2024-11-26 21:03:25.961849] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.327 [2024-11-26 21:03:25.961867] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.327 [2024-11-26 21:03:25.961885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118432 len:8 PRP1 0x0 PRP2 0x0 00:22:45.327 [2024-11-26 21:03:25.961909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.327 [2024-11-26 21:03:25.961930] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.327 [2024-11-26 21:03:25.961956] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.327 [2024-11-26 21:03:25.961993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118440 len:8 PRP1 0x0 PRP2 0x0 00:22:45.327 [2024-11-26 21:03:25.962013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.327 [2024-11-26 21:03:25.962050] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.327 [2024-11-26 21:03:25.962069] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.327 [2024-11-26 21:03:25.962086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118448 len:8 PRP1 0x0 PRP2 0x0 00:22:45.327 [2024-11-26 21:03:25.962107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.327 [2024-11-26 21:03:25.962128] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.327 [2024-11-26 21:03:25.962147] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.327 [2024-11-26 21:03:25.962165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118456 len:8 PRP1 0x0 PRP2 0x0 00:22:45.327 [2024-11-26 21:03:25.962184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.327 [2024-11-26 21:03:25.962207] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.327 [2024-11-26 21:03:25.962224] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.327 [2024-11-26 21:03:25.962241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118464 len:8 PRP1 0x0 PRP2 0x0 00:22:45.327 [2024-11-26 21:03:25.962263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.327 [2024-11-26 21:03:25.962284] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.327 [2024-11-26 21:03:25.962303] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.327 [2024-11-26 21:03:25.962326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118472 len:8 PRP1 0x0 PRP2 0x0 00:22:45.327 [2024-11-26 21:03:25.962346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.327 [2024-11-26 21:03:25.962369] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.327 [2024-11-26 21:03:25.962386] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.327 [2024-11-26 21:03:25.962405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118480 len:8 PRP1 0x0 PRP2 0x0 00:22:45.327 [2024-11-26 21:03:25.962425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.327 [2024-11-26 21:03:25.962445] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.327 [2024-11-26 21:03:25.962464] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.327 [2024-11-26 21:03:25.962481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118488 len:8 PRP1 0x0 PRP2 0x0 00:22:45.327 [2024-11-26 21:03:25.962501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.327 [2024-11-26 21:03:25.962522] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.327 [2024-11-26 21:03:25.962539] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.327 [2024-11-26 21:03:25.962558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118496 len:8 PRP1 0x0 PRP2 0x0 
00:22:45.327 [2024-11-26 21:03:25.962579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.327 [2024-11-26 21:03:25.962600] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.327 [2024-11-26 21:03:25.962620] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.327 [2024-11-26 21:03:25.962638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118504 len:8 PRP1 0x0 PRP2 0x0 00:22:45.327 [2024-11-26 21:03:25.962657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.327 [2024-11-26 21:03:25.962704] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.327 [2024-11-26 21:03:25.962723] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.327 [2024-11-26 21:03:25.962757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118512 len:8 PRP1 0x0 PRP2 0x0 00:22:45.327 [2024-11-26 21:03:25.962778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.327 [2024-11-26 21:03:25.962800] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.327 [2024-11-26 21:03:25.962821] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.327 [2024-11-26 21:03:25.962839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118520 len:8 PRP1 0x0 PRP2 0x0 00:22:45.327 [2024-11-26 21:03:25.962859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.327 [2024-11-26 21:03:25.962882] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.327 [2024-11-26 21:03:25.962900] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.327 [2024-11-26 21:03:25.962920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118528 len:8 PRP1 0x0 PRP2 0x0 00:22:45.327 [2024-11-26 21:03:25.962942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.327 [2024-11-26 21:03:25.962986] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.327 [2024-11-26 21:03:25.963005] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.327 [2024-11-26 21:03:25.963022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118536 len:8 PRP1 0x0 PRP2 0x0 00:22:45.327 [2024-11-26 21:03:25.963056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.327 [2024-11-26 21:03:25.963077] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.327 [2024-11-26 21:03:25.963094] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.327 [2024-11-26 21:03:25.963113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118544 len:8 PRP1 0x0 PRP2 0x0 00:22:45.327 [2024-11-26 21:03:25.963133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.327 [2024-11-26 21:03:25.963210] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:22:45.327 [2024-11-26 21:03:25.963238] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in 
failed state. 00:22:45.327 [2024-11-26 21:03:25.963303] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8f570 (9): Bad file descriptor 00:22:45.327 [2024-11-26 21:03:25.967495] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:22:45.327 [2024-11-26 21:03:26.121195] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:22:45.327 7865.00 IOPS, 30.72 MiB/s [2024-11-26T20:03:36.265Z] 7958.14 IOPS, 31.09 MiB/s [2024-11-26T20:03:36.265Z] 8023.25 IOPS, 31.34 MiB/s [2024-11-26T20:03:36.265Z] 8075.89 IOPS, 31.55 MiB/s [2024-11-26T20:03:36.265Z] [2024-11-26 21:03:30.558227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:77104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.327 [2024-11-26 21:03:30.558283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.327 [2024-11-26 21:03:30.558324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:77112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.327 [2024-11-26 21:03:30.558350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.327 [2024-11-26 21:03:30.558378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:77120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.327 [2024-11-26 21:03:30.558403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.327 [2024-11-26 21:03:30.558428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:77128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.327 [2024-11-26 21:03:30.558467] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.327 [2024-11-26 21:03:30.558495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:77136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.327 [2024-11-26 21:03:30.558520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.327 [2024-11-26 21:03:30.558546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:77144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.327 [2024-11-26 21:03:30.558570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.327 [2024-11-26 21:03:30.558597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:77152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.327 [2024-11-26 21:03:30.558621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.327 [2024-11-26 21:03:30.558665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:77160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.327 [2024-11-26 21:03:30.558713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.327 [2024-11-26 21:03:30.558744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:77168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.327 [2024-11-26 21:03:30.558781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.327 [2024-11-26 21:03:30.558807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 
lba:77176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.327 [2024-11-26 21:03:30.558831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.327 [2024-11-26 21:03:30.558858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:77184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.327 [2024-11-26 21:03:30.558882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.328 [2024-11-26 21:03:30.558908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:77192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.328 [2024-11-26 21:03:30.558932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.328 [2024-11-26 21:03:30.558956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:77200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.328 [2024-11-26 21:03:30.558983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.328 [2024-11-26 21:03:30.559007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:77208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.328 [2024-11-26 21:03:30.559033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.328 [2024-11-26 21:03:30.559059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:77216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.328 [2024-11-26 21:03:30.559084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.328 [2024-11-26 
21:03:30.559110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:77224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.328 [2024-11-26 21:03:30.559134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.328 [2024-11-26 21:03:30.559160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:77232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.328 [2024-11-26 21:03:30.559183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.328 [2024-11-26 21:03:30.559210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:77240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.328 [2024-11-26 21:03:30.559233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.328 [2024-11-26 21:03:30.559259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:77248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.328 [2024-11-26 21:03:30.559283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.328 [2024-11-26 21:03:30.559309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:77256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.328 [2024-11-26 21:03:30.559338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.328 [2024-11-26 21:03:30.559367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:77264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.328 [2024-11-26 21:03:30.559390] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.328 [2024-11-26 21:03:30.559417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:77272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.328 [2024-11-26 21:03:30.559440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.328 [2024-11-26 21:03:30.559465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:77280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.328 [2024-11-26 21:03:30.559489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.328 [2024-11-26 21:03:30.559514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:77288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.328 [2024-11-26 21:03:30.559538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.328 [2024-11-26 21:03:30.559564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:77296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.328 [2024-11-26 21:03:30.559588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.328 [2024-11-26 21:03:30.559613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:76608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.328 [2024-11-26 21:03:30.559639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.328 [2024-11-26 21:03:30.559664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:76616 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:45.328 [2024-11-26 21:03:30.559710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.328 [2024-11-26 21:03:30.559739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:76624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.328 [2024-11-26 21:03:30.559764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.328 [2024-11-26 21:03:30.559791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:76632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.328 [2024-11-26 21:03:30.559815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.328 [2024-11-26 21:03:30.559842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:76640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.328 [2024-11-26 21:03:30.559866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.328 [2024-11-26 21:03:30.559894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:76648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.328 [2024-11-26 21:03:30.559918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.328 [2024-11-26 21:03:30.559946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:76656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.328 [2024-11-26 21:03:30.559970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.328 [2024-11-26 21:03:30.560019] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:77304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.328 [2024-11-26 21:03:30.560043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.328 [2024-11-26 21:03:30.560070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:77312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.328 [2024-11-26 21:03:30.560093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.328 [2024-11-26 21:03:30.560120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:77320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.328 [2024-11-26 21:03:30.560142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.328 [2024-11-26 21:03:30.560168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:77328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.328 [2024-11-26 21:03:30.560191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.328 [2024-11-26 21:03:30.560216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:77336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.328 [2024-11-26 21:03:30.560240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.328 [2024-11-26 21:03:30.560264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:77344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.328 [2024-11-26 21:03:30.560288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.328 [2024-11-26 21:03:30.560313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:77352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.328 [2024-11-26 21:03:30.560337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.328 [2024-11-26 21:03:30.560362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:77360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.328 [2024-11-26 21:03:30.560386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.328 [2024-11-26 21:03:30.560412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:77368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.328 [2024-11-26 21:03:30.560437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.328 [2024-11-26 21:03:30.560464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:77376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.328 [2024-11-26 21:03:30.560488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.328 [2024-11-26 21:03:30.560514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:77384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.328 [2024-11-26 21:03:30.560537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.328 [2024-11-26 21:03:30.560564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:77392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:45.328 [2024-11-26 21:03:30.560586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.328 [2024-11-26 21:03:30.560614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:77400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.328 [2024-11-26 21:03:30.560637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.328 [2024-11-26 21:03:30.560670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:77408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.328 [2024-11-26 21:03:30.560715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.328 [2024-11-26 21:03:30.560744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:77416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.329 [2024-11-26 21:03:30.560769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.329 [2024-11-26 21:03:30.560798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:77424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.329 [2024-11-26 21:03:30.560822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.329 [2024-11-26 21:03:30.560848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:77432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.329 [2024-11-26 21:03:30.560872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.329 [2024-11-26 21:03:30.560898] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:77440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.329 [2024-11-26 21:03:30.560922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.329 [2024-11-26 21:03:30.560948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:77448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.329 [2024-11-26 21:03:30.560974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.329 [2024-11-26 21:03:30.560999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:77456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.329 [2024-11-26 21:03:30.561039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.329 [2024-11-26 21:03:30.561064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:77464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.329 [2024-11-26 21:03:30.561088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.329 [2024-11-26 21:03:30.561113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:77472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.329 [2024-11-26 21:03:30.561137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.329 [2024-11-26 21:03:30.561163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:77480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.329 [2024-11-26 21:03:30.561186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.329 [2024-11-26 21:03:30.561211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:77488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.329 [2024-11-26 21:03:30.561234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.329 [2024-11-26 21:03:30.561262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:77496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.329 [2024-11-26 21:03:30.561286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.329 [2024-11-26 21:03:30.561312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:77504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.329 [2024-11-26 21:03:30.561341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.329 [2024-11-26 21:03:30.561369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:77512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.329 [2024-11-26 21:03:30.561393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.329 [2024-11-26 21:03:30.561419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:77520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.329 [2024-11-26 21:03:30.561442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.329 [2024-11-26 21:03:30.561468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:77528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.329 
[2024-11-26 21:03:30.561491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.329 [2024-11-26 21:03:30.561517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:77536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.329 [2024-11-26 21:03:30.561542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.329 [2024-11-26 21:03:30.561567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:77544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.329 [2024-11-26 21:03:30.561591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.329 [2024-11-26 21:03:30.561615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:77552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.329 [2024-11-26 21:03:30.561641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.329 [2024-11-26 21:03:30.561667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:77560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.329 [2024-11-26 21:03:30.561715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.329 [2024-11-26 21:03:30.561743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:77568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.329 [2024-11-26 21:03:30.561768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.329 [2024-11-26 21:03:30.561793] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:49 nsid:1 lba:76664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.329 [2024-11-26 21:03:30.561818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.329 [2024-11-26 21:03:30.561844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:76672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.329 [2024-11-26 21:03:30.561868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.329 [2024-11-26 21:03:30.561894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:76680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.329 [2024-11-26 21:03:30.561918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.329 [2024-11-26 21:03:30.561946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:76688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.329 [2024-11-26 21:03:30.561969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.329 [2024-11-26 21:03:30.562019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:76696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.329 [2024-11-26 21:03:30.562043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.329 [2024-11-26 21:03:30.562069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:76704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.329 [2024-11-26 21:03:30.562092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:45.329 [2024-11-26 21:03:30.562120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:76712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.329 [2024-11-26 21:03:30.562145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.329 [2024-11-26 21:03:30.562170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:76720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.329 [2024-11-26 21:03:30.562194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.329 [2024-11-26 21:03:30.562218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:76728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.329 [2024-11-26 21:03:30.562243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.329 [2024-11-26 21:03:30.562269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:76736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.329 [2024-11-26 21:03:30.562293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.329 [2024-11-26 21:03:30.562317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:76744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.329 [2024-11-26 21:03:30.562342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.329 [2024-11-26 21:03:30.562366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:76752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.329 [2024-11-26 
21:03:30.562391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.329 [2024-11-26 21:03:30.562416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:76760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.329 [2024-11-26 21:03:30.562440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.329 [2024-11-26 21:03:30.562464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:76768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.329 [2024-11-26 21:03:30.562488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.329 [2024-11-26 21:03:30.562515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:76776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.329 [2024-11-26 21:03:30.562538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.329 [2024-11-26 21:03:30.562565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:76784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.329 [2024-11-26 21:03:30.562587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.329 [2024-11-26 21:03:30.562614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:76792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.329 [2024-11-26 21:03:30.562642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.329 [2024-11-26 21:03:30.562670] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:72 nsid:1 lba:76800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.329 [2024-11-26 21:03:30.562715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.329 [2024-11-26 21:03:30.562745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:76808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.329 [2024-11-26 21:03:30.562768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.329 [2024-11-26 21:03:30.562796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:76816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.329 [2024-11-26 21:03:30.562820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.330 [2024-11-26 21:03:30.562847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:76824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.330 [2024-11-26 21:03:30.562871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.330 [2024-11-26 21:03:30.562896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:76832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.330 [2024-11-26 21:03:30.562920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.330 [2024-11-26 21:03:30.562946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:76840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.330 [2024-11-26 21:03:30.562972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:45.330 [2024-11-26 21:03:30.563012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:76848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.330 [2024-11-26 21:03:30.563036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.330 [2024-11-26 21:03:30.563061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:76856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.330 [2024-11-26 21:03:30.563084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.330 [2024-11-26 21:03:30.563110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:76864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.330 [2024-11-26 21:03:30.563134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.330 [2024-11-26 21:03:30.563159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:76872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.330 [2024-11-26 21:03:30.563185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.330 [2024-11-26 21:03:30.563212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:76880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.330 [2024-11-26 21:03:30.563237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.330 [2024-11-26 21:03:30.563263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:76888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.330 [2024-11-26 
21:03:30.563286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.330 [2024-11-26 21:03:30.563313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:76896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.330 [2024-11-26 21:03:30.563342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.330 [2024-11-26 21:03:30.563372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:76904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.330 [2024-11-26 21:03:30.563395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.330 [2024-11-26 21:03:30.563423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:76912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.330 [2024-11-26 21:03:30.563445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.330 [2024-11-26 21:03:30.563472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:76920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.330 [2024-11-26 21:03:30.563496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.330 [2024-11-26 21:03:30.563524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:76928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.330 [2024-11-26 21:03:30.563547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.330 [2024-11-26 21:03:30.563573] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:39 nsid:1 lba:76936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.330 [2024-11-26 21:03:30.563597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.330 [2024-11-26 21:03:30.563623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:76944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.330 [2024-11-26 21:03:30.563647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.330 [2024-11-26 21:03:30.563672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:76952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.330 [2024-11-26 21:03:30.563719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.330 [2024-11-26 21:03:30.563748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:76960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.330 [2024-11-26 21:03:30.563774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.330 [2024-11-26 21:03:30.563802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:76968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.330 [2024-11-26 21:03:30.563830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.330 [2024-11-26 21:03:30.563856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:76976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.330 [2024-11-26 21:03:30.563882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:45.330 [2024-11-26 21:03:30.563908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:77576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.330 [2024-11-26 21:03:30.563934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.330 [2024-11-26 21:03:30.563960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:77584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.330 [2024-11-26 21:03:30.563986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.330 [2024-11-26 21:03:30.564031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:77592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.330 [2024-11-26 21:03:30.564056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.330 [2024-11-26 21:03:30.564082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:77600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.330 [2024-11-26 21:03:30.564108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.330 [2024-11-26 21:03:30.564133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:77608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.330 [2024-11-26 21:03:30.564157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.330 [2024-11-26 21:03:30.564182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:77616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.330 [2024-11-26 21:03:30.564207] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.330 [2024-11-26 21:03:30.564235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:76984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.330 [2024-11-26 21:03:30.564259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.330 [2024-11-26 21:03:30.564287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:76992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.330 [2024-11-26 21:03:30.564311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.330 [2024-11-26 21:03:30.564338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.330 [2024-11-26 21:03:30.564362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.330 [2024-11-26 21:03:30.564389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:77008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.330 [2024-11-26 21:03:30.564412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.330 [2024-11-26 21:03:30.564440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:77016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.330 [2024-11-26 21:03:30.564463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.330 [2024-11-26 21:03:30.564490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:40 nsid:1 lba:77024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.330 [2024-11-26 21:03:30.564513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.330 [2024-11-26 21:03:30.564541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:77032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.330 [2024-11-26 21:03:30.564564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.330 [2024-11-26 21:03:30.564590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:77040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.330 [2024-11-26 21:03:30.564614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.330 [2024-11-26 21:03:30.564640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:77048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.330 [2024-11-26 21:03:30.564684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.330 [2024-11-26 21:03:30.564720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:77056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.330 [2024-11-26 21:03:30.564746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.330 [2024-11-26 21:03:30.564772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:77064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.330 [2024-11-26 21:03:30.564797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:45.330 [2024-11-26 21:03:30.564821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:77072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.330 [2024-11-26 21:03:30.564846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.330 [2024-11-26 21:03:30.564871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:77080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.330 [2024-11-26 21:03:30.564895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.330 [2024-11-26 21:03:30.564922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:77088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.331 [2024-11-26 21:03:30.564945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.331 [2024-11-26 21:03:30.564971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:77096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.331 [2024-11-26 21:03:30.565010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.331 [2024-11-26 21:03:30.565054] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.331 [2024-11-26 21:03:30.565077] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.331 [2024-11-26 21:03:30.565098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77624 len:8 PRP1 0x0 PRP2 0x0 00:22:45.331 [2024-11-26 21:03:30.565118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:45.331 [2024-11-26 21:03:30.565207] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:22:45.331 [2024-11-26 21:03:30.565276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.331 [2024-11-26 21:03:30.565304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.331 [2024-11-26 21:03:30.565331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.331 [2024-11-26 21:03:30.565353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.331 [2024-11-26 21:03:30.565378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.331 [2024-11-26 21:03:30.565402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.331 [2024-11-26 21:03:30.565427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.331 [2024-11-26 21:03:30.565449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.331 [2024-11-26 21:03:30.565478] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
00:22:45.331 [2024-11-26 21:03:30.565560] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8f570 (9): Bad file descriptor 00:22:45.331 [2024-11-26 21:03:30.569690] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:22:45.331 [2024-11-26 21:03:30.641523] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:22:45.331 8057.70 IOPS, 31.48 MiB/s [2024-11-26T20:03:36.269Z] 8095.45 IOPS, 31.62 MiB/s [2024-11-26T20:03:36.269Z] 8115.58 IOPS, 31.70 MiB/s [2024-11-26T20:03:36.269Z] 8146.46 IOPS, 31.82 MiB/s [2024-11-26T20:03:36.269Z] 8166.14 IOPS, 31.90 MiB/s 00:22:45.331 Latency(us) 00:22:45.331 [2024-11-26T20:03:36.269Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:45.331 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:45.331 Verification LBA range: start 0x0 length 0x4000 00:22:45.331 NVMe0n1 : 15.00 8181.85 31.96 1063.51 0.00 13818.16 573.44 41748.86 00:22:45.331 [2024-11-26T20:03:36.269Z] =================================================================================================================== 00:22:45.331 [2024-11-26T20:03:36.269Z] Total : 8181.85 31.96 1063.51 0.00 13818.16 573.44 41748.86 00:22:45.331 Received shutdown signal, test time was about 15.000000 seconds 00:22:45.331 00:22:45.331 Latency(us) 00:22:45.331 [2024-11-26T20:03:36.269Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:45.331 [2024-11-26T20:03:36.269Z] =================================================================================================================== 00:22:45.331 [2024-11-26T20:03:36.269Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:45.331 21:03:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:22:45.331 21:03:36 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@65 -- # count=3 00:22:45.331 21:03:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:22:45.331 21:03:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=4045256 00:22:45.331 21:03:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:22:45.331 21:03:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 4045256 /var/tmp/bdevperf.sock 00:22:45.331 21:03:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 4045256 ']' 00:22:45.331 21:03:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:45.331 21:03:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:45.331 21:03:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:45.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:45.331 21:03:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:45.331 21:03:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:45.589 21:03:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:45.589 21:03:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:22:45.589 21:03:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:45.846 [2024-11-26 21:03:36.710889] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:45.846 21:03:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:46.104 [2024-11-26 21:03:36.975581] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:46.104 21:03:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:46.669 NVMe0n1 00:22:46.669 21:03:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:46.927 00:22:46.927 21:03:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f 
ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:47.493 00:22:47.493 21:03:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:47.493 21:03:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:22:47.751 21:03:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:48.008 21:03:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:22:51.287 21:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:51.288 21:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:22:51.288 21:03:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=4045922 00:22:51.288 21:03:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:51.288 21:03:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 4045922 00:22:52.223 { 00:22:52.223 "results": [ 00:22:52.223 { 00:22:52.223 "job": "NVMe0n1", 00:22:52.223 "core_mask": "0x1", 00:22:52.223 "workload": "verify", 00:22:52.223 "status": "finished", 00:22:52.223 "verify_range": { 00:22:52.223 "start": 0, 00:22:52.223 "length": 16384 00:22:52.223 }, 00:22:52.223 "queue_depth": 128, 00:22:52.223 "io_size": 4096, 00:22:52.223 "runtime": 1.012474, 00:22:52.223 "iops": 8067.367655860792, 00:22:52.223 "mibps": 31.51315490570622, 00:22:52.223 "io_failed": 0, 00:22:52.223 "io_timeout": 0, 00:22:52.223 "avg_latency_us": 
15758.970339899155, 00:22:52.223 "min_latency_us": 1832.5807407407408, 00:22:52.223 "max_latency_us": 13204.29037037037 00:22:52.223 } 00:22:52.223 ], 00:22:52.223 "core_count": 1 00:22:52.223 } 00:22:52.223 21:03:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:52.223 [2024-11-26 21:03:36.226410] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:22:52.223 [2024-11-26 21:03:36.226487] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4045256 ] 00:22:52.223 [2024-11-26 21:03:36.293530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:52.223 [2024-11-26 21:03:36.350071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:52.223 [2024-11-26 21:03:38.698371] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:52.223 [2024-11-26 21:03:38.698461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.223 [2024-11-26 21:03:38.698492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.223 [2024-11-26 21:03:38.698516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.223 [2024-11-26 21:03:38.698537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.223 [2024-11-26 21:03:38.698559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 
cdw10:00000000 cdw11:00000000 00:22:52.223 [2024-11-26 21:03:38.698581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.223 [2024-11-26 21:03:38.698604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.223 [2024-11-26 21:03:38.698627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.223 [2024-11-26 21:03:38.698657] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:22:52.223 [2024-11-26 21:03:38.698747] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:22:52.223 [2024-11-26 21:03:38.698792] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3570 (9): Bad file descriptor 00:22:52.223 [2024-11-26 21:03:38.790143] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:22:52.223 Running I/O for 1 seconds... 
00:22:52.223 7976.00 IOPS, 31.16 MiB/s 00:22:52.223 Latency(us) 00:22:52.223 [2024-11-26T20:03:43.161Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:52.223 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:52.223 Verification LBA range: start 0x0 length 0x4000 00:22:52.223 NVMe0n1 : 1.01 8067.37 31.51 0.00 0.00 15758.97 1832.58 13204.29 00:22:52.223 [2024-11-26T20:03:43.161Z] =================================================================================================================== 00:22:52.223 [2024-11-26T20:03:43.161Z] Total : 8067.37 31.51 0.00 0.00 15758.97 1832.58 13204.29 00:22:52.223 21:03:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:52.223 21:03:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:22:52.789 21:03:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:53.046 21:03:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:53.046 21:03:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:22:53.304 21:03:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:53.562 21:03:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:22:56.842 21:03:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:56.842 21:03:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:22:56.842 21:03:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 4045256 00:22:56.842 21:03:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 4045256 ']' 00:22:56.842 21:03:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 4045256 00:22:56.842 21:03:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:22:56.842 21:03:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:56.842 21:03:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4045256 00:22:56.842 21:03:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:56.842 21:03:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:56.842 21:03:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4045256' 00:22:56.842 killing process with pid 4045256 00:22:56.842 21:03:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 4045256 00:22:56.842 21:03:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 4045256 00:22:57.100 21:03:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:22:57.100 21:03:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:57.359 21:03:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:22:57.359 21:03:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:57.359 21:03:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:22:57.359 21:03:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:57.359 21:03:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:22:57.359 21:03:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:57.359 21:03:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:22:57.359 21:03:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:57.359 21:03:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:57.360 rmmod nvme_tcp 00:22:57.360 rmmod nvme_fabrics 00:22:57.360 rmmod nvme_keyring 00:22:57.360 21:03:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:57.360 21:03:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:22:57.360 21:03:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:22:57.360 21:03:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 4042987 ']' 00:22:57.360 21:03:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 4042987 00:22:57.360 21:03:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 4042987 ']' 00:22:57.360 21:03:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 4042987 00:22:57.360 21:03:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:22:57.360 21:03:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:57.360 21:03:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4042987 00:22:57.360 21:03:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:22:57.360 21:03:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:57.360 21:03:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4042987' 00:22:57.360 killing process with pid 4042987 00:22:57.360 21:03:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 4042987 00:22:57.360 21:03:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 4042987 00:22:57.619 21:03:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:57.619 21:03:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:57.619 21:03:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:57.619 21:03:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:22:57.619 21:03:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:22:57.619 21:03:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:57.619 21:03:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:22:57.619 21:03:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:57.619 21:03:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:57.619 21:03:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:57.619 21:03:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:57.619 21:03:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:00.156 21:03:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:00.156 00:23:00.156 real 0m35.371s 00:23:00.156 user 2m5.565s 00:23:00.156 sys 
0m5.750s 00:23:00.156 21:03:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:00.156 21:03:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:00.156 ************************************ 00:23:00.156 END TEST nvmf_failover 00:23:00.156 ************************************ 00:23:00.157 21:03:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:00.157 21:03:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:00.157 21:03:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:00.157 21:03:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.157 ************************************ 00:23:00.157 START TEST nvmf_host_discovery 00:23:00.157 ************************************ 00:23:00.157 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:00.157 * Looking for test storage... 
00:23:00.157 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:00.157 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:00.157 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:23:00.157 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:00.157 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:00.157 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:00.157 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:00.157 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:00.157 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:23:00.157 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:23:00.157 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:23:00.157 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:23:00.157 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:23:00.157 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:23:00.157 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:23:00.157 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:00.157 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:23:00.157 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:23:00.157 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:23:00.157 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:00.157 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:23:00.157 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:23:00.157 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:00.157 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:23:00.157 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:23:00.157 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:23:00.157 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:23:00.157 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:00.157 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:23:00.157 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:23:00.157 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:00.157 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:00.157 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:23:00.157 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:00.157 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:00.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:00.157 --rc genhtml_branch_coverage=1 00:23:00.157 --rc genhtml_function_coverage=1 00:23:00.157 --rc 
genhtml_legend=1 00:23:00.157 --rc geninfo_all_blocks=1 00:23:00.157 --rc geninfo_unexecuted_blocks=1 00:23:00.157 00:23:00.157 ' 00:23:00.157 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:00.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:00.157 --rc genhtml_branch_coverage=1 00:23:00.157 --rc genhtml_function_coverage=1 00:23:00.157 --rc genhtml_legend=1 00:23:00.157 --rc geninfo_all_blocks=1 00:23:00.157 --rc geninfo_unexecuted_blocks=1 00:23:00.157 00:23:00.157 ' 00:23:00.157 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:00.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:00.157 --rc genhtml_branch_coverage=1 00:23:00.157 --rc genhtml_function_coverage=1 00:23:00.157 --rc genhtml_legend=1 00:23:00.157 --rc geninfo_all_blocks=1 00:23:00.157 --rc geninfo_unexecuted_blocks=1 00:23:00.157 00:23:00.157 ' 00:23:00.157 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:00.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:00.157 --rc genhtml_branch_coverage=1 00:23:00.157 --rc genhtml_function_coverage=1 00:23:00.157 --rc genhtml_legend=1 00:23:00.157 --rc geninfo_all_blocks=1 00:23:00.157 --rc geninfo_unexecuted_blocks=1 00:23:00.157 00:23:00.157 ' 00:23:00.157 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:00.157 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:23:00.157 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:00.157 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:00.157 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:00.157 21:03:50 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:00.157 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:00.157 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:00.157 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:00.157 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:00.157 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:00.157 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:00.157 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:00.157 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:00.157 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:00.157 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:00.157 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:00.157 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:00.157 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:00.157 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:23:00.157 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:00.157 21:03:50 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:00.157 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:00.157 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.157 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.157 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.157 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:23:00.157 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.158 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:23:00.158 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:00.158 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:00.158 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:00.158 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:00.158 21:03:50 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:00.158 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:00.158 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:00.158 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:00.158 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:00.158 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:00.158 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:23:00.158 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:23:00.158 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:00.158 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:23:00.158 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:23:00.158 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:23:00.158 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:23:00.158 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:00.158 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:00.158 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:00.158 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:00.158 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:23:00.158 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:00.158 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:00.158 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:00.158 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:00.158 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:00.158 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:23:00.158 21:03:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:02.170 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:02.170 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:23:02.170 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:02.170 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:02.170 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:02.170 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:02.170 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:02.170 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:23:02.170 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:02.170 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:23:02.170 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:23:02.170 
21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:23:02.170 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:23:02.170 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:23:02.170 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:23:02.170 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:02.170 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:02.170 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:02.170 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:02.170 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:02.170 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:02.170 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:02.170 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:02.170 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:02.170 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:02.170 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:02.170 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:02.171 21:03:52 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:02.171 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:02.171 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:02.171 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:02.171 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:02.171 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:02.171 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:02.171 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:02.171 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:02.171 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:02.171 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:02.171 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:02.171 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:02.171 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:02.171 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:02.171 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:02.171 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:02.171 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:02.171 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
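The trace above shows `gather_supported_nvmf_pci_devs` sorting detected NICs into per-family arrays (`e810`, `x722`, `mlx`) by PCI device id before matching each found device (here two `0x8086 - 0x159b` ports bound to `ice`). A simplified sketch of that id-to-family mapping, using only the ids visible in the log (the real script builds bash arrays from a `pci_bus_cache` lookup; this standalone helper is an illustration, not SPDK's actual function):

```shell
# Map a PCI device id to the NIC family the log's arrays represent.
# Ids are taken from the nvmf/common.sh lines above; anything else
# is reported as unknown.
classify_nic() {
    case "$1" in
        0x1592|0x159b) echo e810 ;;                       # Intel E810 (ice)
        0x37d2)        echo x722 ;;                       # Intel X722
        0x1013|0x1015|0x1017|0x1019|0x101b|0x101d|0x1021|0xa2d6|0xa2dc)
                       echo mlx ;;                        # Mellanox ConnectX/BlueField
        *)             echo unknown ;;
    esac
}

classify_nic 0x159b   # the device id found twice in this run
```

The `0x159b` ports in this run land in `e810`, which is why the later `[[ e810 == e810 ]]` branch narrows `pci_devs` to just those two devices.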
00:23:02.171 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:02.171 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:02.171 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:02.171 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:02.171 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:02.171 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:02.171 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:02.171 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:02.171 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:02.171 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:02.171 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:02.171 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:02.171 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:02.171 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:02.171 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:02.171 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:02.171 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:02.171 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:02.171 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:02.171 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:02.171 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:02.171 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:02.171 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:02.171 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:02.171 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:02.171 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:02.171 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:02.171 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:23:02.171 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:02.171 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:02.171 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:02.171 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:02.171 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:02.171 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:02.171 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:02.171 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:02.171 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:02.171 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:02.171 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:02.171 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:02.171 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:02.171 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:02.171 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:02.171 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:02.171 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:02.171 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:02.171 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:02.171 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:02.171 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:02.171 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:02.171 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:02.171 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:02.171 21:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:02.171 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:02.171 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:02.171 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.304 ms 00:23:02.171 00:23:02.171 --- 10.0.0.2 ping statistics --- 00:23:02.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:02.171 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:23:02.171 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:02.171 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:02.171 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:23:02.171 00:23:02.171 --- 10.0.0.1 ping statistics --- 00:23:02.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:02.171 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:23:02.171 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:02.171 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:23:02.171 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:02.171 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:02.171 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:02.171 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:02.171 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:02.171 
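The `nvmf_tcp_init` sequence in the log builds an isolated loopback-free TCP topology: the target-side port (`cvl_0_0`) is moved into a fresh namespace `cvl_0_0_ns_spdk`, both sides get `10.0.0.x/24` addresses, port 4420 is opened via SPDK's `ipts` iptables wrapper, and a ping in each direction confirms connectivity. A dry-run sketch of that sequence (interface names, namespace, and addresses copied from the log; commands are echoed by default since the real ones need root, and plain `iptables` stands in for SPDK's `ipts` comment-tagging wrapper):

```shell
#!/bin/sh
# Dry-run sketch of the namespace topology set up in the log above.
# Set RUN=1 and run as root to actually execute the commands.
run() { [ "${RUN:-0}" = 1 ] && "$@" || echo "+ $*"; }

NS=cvl_0_0_ns_spdk          # target namespace (from the log)
TGT_IF=cvl_0_0              # target-side port, moved into $NS
INI_IF=cvl_0_1              # initiator-side port, stays in the root ns

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                       # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1   # target -> initiator
```

Because the target lives in its own namespace, every later `nvmf_tgt` invocation in this run is prefixed with `ip netns exec cvl_0_0_ns_spdk` (the `NVMF_TARGET_NS_CMD` array).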
21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:02.171 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:02.171 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:23:02.171 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:02.171 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:02.171 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:02.171 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=4048544 00:23:02.171 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:02.171 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 4048544 00:23:02.171 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 4048544 ']' 00:23:02.171 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:02.171 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:02.171 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:02.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:02.171 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:02.171 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:02.171 [2024-11-26 21:03:53.090935] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:23:02.171 [2024-11-26 21:03:53.091075] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:02.430 [2024-11-26 21:03:53.167595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:02.430 [2024-11-26 21:03:53.222723] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:02.430 [2024-11-26 21:03:53.222785] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:02.430 [2024-11-26 21:03:53.222813] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:02.430 [2024-11-26 21:03:53.222824] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:02.430 [2024-11-26 21:03:53.222834] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:02.430 [2024-11-26 21:03:53.223445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:02.430 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:02.430 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:23:02.430 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:02.430 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:02.430 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:02.430 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:02.430 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:02.430 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.430 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:02.430 [2024-11-26 21:03:53.360417] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:02.430 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.430 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:23:02.430 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.430 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:02.689 [2024-11-26 21:03:53.368638] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:02.689 21:03:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.689 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:23:02.689 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.689 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:02.689 null0 00:23:02.689 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.689 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:02.689 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.689 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:02.689 null1 00:23:02.689 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.689 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:23:02.689 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.689 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:02.689 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.689 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=4048683 00:23:02.689 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:02.689 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 4048683 /tmp/host.sock 00:23:02.689 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@835 -- # '[' -z 4048683 ']' 00:23:02.689 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:23:02.689 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:02.689 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:02.689 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:02.689 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:02.689 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:02.689 [2024-11-26 21:03:53.444007] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:23:02.689 [2024-11-26 21:03:53.444093] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4048683 ] 00:23:02.689 [2024-11-26 21:03:53.515489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:02.689 [2024-11-26 21:03:53.577904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:02.948 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:02.948 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:23:02.948 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:02.948 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:02.948 
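Both daemon launches above follow the same pattern: start `nvmf_tgt` in the background, then `waitforlisten <pid> <rpc_addr>` blocks (printing "Waiting for process to start up and listen on UNIX domain socket ...") until the RPC socket such as `/tmp/host.sock` exists, with a bounded retry count. A minimal sketch of just the socket-polling half of that idea (the real helper also verifies the pid is alive and probes the RPC endpoint; `wait_for_sock` and its retry cap are illustrative names, not SPDK's API):

```shell
# Poll until a UNIX-domain socket path appears, up to $2 attempts
# (default 100, ~0.1 s apart). Returns non-zero if it never shows up.
wait_for_sock() {
    sock=$1
    max=${2:-100}
    i=0
    while [ ! -S "$sock" ]; do
        i=$((i + 1))
        [ "$i" -ge "$max" ] && return 1
        sleep 0.1
    done
    return 0
}
```

In the log, the test only proceeds to `rpc_cmd -s /tmp/host.sock ...` calls after this wait succeeds, which is why the subsequent `bdev_nvme_get_controllers` queries can assume the socket is live.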
21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.948 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:02.948 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.948 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:23:02.948 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.948 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:02.948 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.948 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:23:02.948 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:23:02.948 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:02.948 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:02.948 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.948 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:02.948 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:02.948 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:02.948 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.948 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:23:02.948 21:03:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:23:02.948 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:02.948 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:02.948 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.948 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:02.948 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:02.948 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:02.948 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.948 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:23:02.948 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:23:02.948 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.948 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:02.948 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.948 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:23:02.948 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:02.948 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:02.948 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.948 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set 
+x 00:23:02.948 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:02.948 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:02.948 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.948 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:23:02.948 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:23:02.948 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:02.948 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:02.948 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.948 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:02.948 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:02.948 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:02.948 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.207 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:23:03.207 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:23:03.207 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.207 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:03.207 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.207 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:23:03.207 21:03:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:03.207 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:03.207 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.207 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:03.207 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:03.207 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:03.207 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.207 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:23:03.207 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:23:03.207 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:03.207 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:03.208 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.208 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:03.208 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:03.208 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:03.208 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.208 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:23:03.208 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 
4420 00:23:03.208 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.208 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:03.208 [2024-11-26 21:03:53.986300] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:03.208 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.208 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:23:03.208 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:03.208 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:03.208 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.208 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:03.208 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:03.208 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:03.208 21:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.208 21:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:23:03.208 21:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:23:03.208 21:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:03.208 21:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:03.208 21:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.208 21:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 
-- # sort 00:23:03.208 21:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:03.208 21:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:03.208 21:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.208 21:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:23:03.208 21:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:23:03.208 21:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:03.208 21:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:03.208 21:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:03.208 21:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:03.208 21:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:03.208 21:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:03.208 21:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:03.208 21:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:03.208 21:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:03.208 21:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.208 21:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:03.208 21:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.208 21:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:03.208 21:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:23:03.208 21:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:03.208 21:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:03.208 21:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:23:03.208 21:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.208 21:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:03.208 21:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.208 21:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:03.208 21:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:03.208 21:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:03.208 21:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:03.208 21:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 
00:23:03.208 21:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:03.208 21:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:03.208 21:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:03.208 21:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.208 21:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:03.208 21:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:03.208 21:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:03.208 21:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.466 21:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:23:03.466 21:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:23:04.032 [2024-11-26 21:03:54.767839] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:04.032 [2024-11-26 21:03:54.767874] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:04.032 [2024-11-26 21:03:54.767898] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:04.032 [2024-11-26 21:03:54.855149] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:04.290 [2024-11-26 21:03:55.036354] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:23:04.290 [2024-11-26 21:03:55.037454] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 
1] Connecting qpair 0x1e7cfe0:1 started. 00:23:04.290 [2024-11-26 21:03:55.039398] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:04.290 [2024-11-26 21:03:55.039419] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:04.290 [2024-11-26 21:03:55.087412] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1e7cfe0 was disconnected and freed. delete nvme_qpair. 00:23:04.290 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:04.290 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:04.290 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:04.290 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:04.290 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:04.290 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.290 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:04.290 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:04.290 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:04.290 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.290 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:04.290 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:04.290 21:03:55 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:04.290 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:04.290 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:04.290 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:04.290 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:23:04.290 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:04.290 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:04.290 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.290 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:04.290 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:04.290 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:04.290 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:04.549 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.549 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:23:04.549 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:04.549 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:04.549 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:04.549 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:04.549 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:04.549 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:23:04.549 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:04.549 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:04.549 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:04.549 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.549 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:04.549 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:04.549 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:04.549 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.549 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:23:04.549 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:04.549 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:23:04.549 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:04.549 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:23:04.549 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:04.549 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:04.549 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:04.549 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:04.549 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:04.549 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:04.549 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:04.549 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.549 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:04.549 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.549 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:04.549 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:23:04.549 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:04.549 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:04.549 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:04.550 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.550 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:04.550 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.550 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:04.550 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:04.550 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:04.550 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:04.550 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:04.550 
21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:04.550 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:04.550 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.550 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:04.550 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:04.550 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:04.550 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:04.550 [2024-11-26 21:03:55.359625] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1e7d360:1 started. 00:23:04.550 [2024-11-26 21:03:55.367500] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1e7d360 was disconnected and freed. delete nvme_qpair. 
00:23:04.550 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.550 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:04.550 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:04.550 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:23:04.550 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:04.550 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:04.550 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:04.550 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:04.550 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:04.550 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:04.550 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:04.550 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:23:04.550 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:04.550 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.550 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:04.550 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.550 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:04.550 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:04.550 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:04.550 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:04.550 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:04.550 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.550 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:04.550 [2024-11-26 21:03:55.442899] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:04.550 [2024-11-26 21:03:55.443716] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:04.550 [2024-11-26 21:03:55.443771] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:04.550 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.550 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:04.550 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:23:04.550 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:04.550 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:04.550 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:04.550 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:04.550 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:04.550 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:04.550 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.550 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:04.550 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:04.550 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:04.550 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.809 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:04.809 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:04.809 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:04.809 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:04.809 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:04.809 21:03:55 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:04.809 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:04.809 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:04.809 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:04.809 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:04.809 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.809 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:04.809 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:04.809 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:04.809 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.809 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:04.809 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:04.809 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:04.809 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:04.809 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:04.809 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:04.809 21:03:55 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:04.809 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:04.809 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:04.809 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:04.809 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.809 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:04.809 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:04.809 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:04.809 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.809 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:23:04.809 21:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:23:04.809 [2024-11-26 21:03:55.571439] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:23:04.809 [2024-11-26 21:03:55.632311] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:23:04.809 [2024-11-26 21:03:55.632365] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:04.809 [2024-11-26 21:03:55.632384] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 
00:23:04.809 [2024-11-26 21:03:55.632393] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:05.742 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:05.742 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:05.742 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:05.742 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:05.742 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:05.742 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.742 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:05.742 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:05.742 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:05.742 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.742 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:05.742 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:05.742 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:23:05.743 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:05.743 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:23:05.743 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:05.743 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:05.743 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:05.743 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:05.743 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:05.743 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:05.743 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:05.743 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.743 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:05.743 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.743 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:05.743 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:05.743 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:05.743 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:05.743 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:05.743 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.743 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:05.743 [2024-11-26 21:03:56.663403] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:05.743 [2024-11-26 21:03:56.663457] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:05.743 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.743 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:05.743 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:05.743 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local 
max=10 00:23:05.743 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:05.743 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:05.743 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:05.743 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:05.743 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.743 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:05.743 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:05.743 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:05.743 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:05.743 [2024-11-26 21:03:56.671009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.743 [2024-11-26 21:03:56.671047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.743 [2024-11-26 21:03:56.671076] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.743 [2024-11-26 21:03:56.671090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.743 [2024-11-26 21:03:56.671105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.743 [2024-11-26 21:03:56.671119] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.743 [2024-11-26 21:03:56.671133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.743 [2024-11-26 21:03:56.671147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.743 [2024-11-26 21:03:56.671160] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4f0e0 is same with the state(6) to be set 00:23:06.003 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.003 [2024-11-26 21:03:56.680995] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4f0e0 (9): Bad file descriptor 00:23:06.003 [2024-11-26 21:03:56.691032] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:06.003 [2024-11-26 21:03:56.691070] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:06.003 [2024-11-26 21:03:56.691080] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:06.003 [2024-11-26 21:03:56.691089] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:06.003 [2024-11-26 21:03:56.691136] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:23:06.003 [2024-11-26 21:03:56.691365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:06.003 [2024-11-26 21:03:56.691395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4f0e0 with addr=10.0.0.2, port=4420 00:23:06.003 [2024-11-26 21:03:56.691413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4f0e0 is same with the state(6) to be set 00:23:06.003 [2024-11-26 21:03:56.691437] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4f0e0 (9): Bad file descriptor 00:23:06.003 [2024-11-26 21:03:56.691472] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:06.003 [2024-11-26 21:03:56.691490] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:06.003 [2024-11-26 21:03:56.691508] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:06.003 [2024-11-26 21:03:56.691520] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:06.003 [2024-11-26 21:03:56.691531] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:06.003 [2024-11-26 21:03:56.691539] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:06.003 [2024-11-26 21:03:56.701169] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:06.003 [2024-11-26 21:03:56.701190] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:23:06.003 [2024-11-26 21:03:56.701204] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:06.003 [2024-11-26 21:03:56.701212] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:06.003 [2024-11-26 21:03:56.701251] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:06.003 [2024-11-26 21:03:56.701485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:06.003 [2024-11-26 21:03:56.701514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4f0e0 with addr=10.0.0.2, port=4420 00:23:06.003 [2024-11-26 21:03:56.701531] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4f0e0 is same with the state(6) to be set 00:23:06.003 [2024-11-26 21:03:56.701553] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4f0e0 (9): Bad file descriptor 00:23:06.003 [2024-11-26 21:03:56.701586] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:06.003 [2024-11-26 21:03:56.701603] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:06.003 [2024-11-26 21:03:56.701617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:06.003 [2024-11-26 21:03:56.701629] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:06.003 [2024-11-26 21:03:56.701639] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:06.003 [2024-11-26 21:03:56.701647] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:23:06.003 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:06.003 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:06.003 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:06.003 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:06.003 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:06.003 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:06.003 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:06.004 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:06.004 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:06.004 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.004 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:06.004 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:06.004 [2024-11-26 21:03:56.711286] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:06.004 [2024-11-26 21:03:56.711310] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:06.004 [2024-11-26 21:03:56.711320] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:23:06.004 [2024-11-26 21:03:56.711328] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:06.004 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:06.004 [2024-11-26 21:03:56.711368] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:06.004 [2024-11-26 21:03:56.711521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:06.004 [2024-11-26 21:03:56.711566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4f0e0 with addr=10.0.0.2, port=4420 00:23:06.004 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:06.004 [2024-11-26 21:03:56.711585] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4f0e0 is same with the state(6) to be set 00:23:06.004 [2024-11-26 21:03:56.711610] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4f0e0 (9): Bad file descriptor 00:23:06.004 [2024-11-26 21:03:56.711645] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:06.004 [2024-11-26 21:03:56.711663] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:06.004 [2024-11-26 21:03:56.711697] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:06.004 [2024-11-26 21:03:56.711712] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:06.004 [2024-11-26 21:03:56.711723] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:23:06.004 [2024-11-26 21:03:56.711732] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:06.004 [2024-11-26 21:03:56.721403] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:06.004 [2024-11-26 21:03:56.721427] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:06.004 [2024-11-26 21:03:56.721437] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:06.004 [2024-11-26 21:03:56.721445] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:06.004 [2024-11-26 21:03:56.721475] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:06.004 [2024-11-26 21:03:56.721639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:06.004 [2024-11-26 21:03:56.721672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4f0e0 with addr=10.0.0.2, port=4420 00:23:06.004 [2024-11-26 21:03:56.721709] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4f0e0 is same with the state(6) to be set 00:23:06.004 [2024-11-26 21:03:56.721745] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4f0e0 (9): Bad file descriptor 00:23:06.004 [2024-11-26 21:03:56.721796] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:06.004 [2024-11-26 21:03:56.721823] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:06.004 [2024-11-26 21:03:56.721848] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:23:06.004 [2024-11-26 21:03:56.721869] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:06.004 [2024-11-26 21:03:56.721886] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:06.004 [2024-11-26 21:03:56.721900] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:06.004 [2024-11-26 21:03:56.731508] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:06.004 [2024-11-26 21:03:56.731531] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:06.004 [2024-11-26 21:03:56.731540] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:06.004 [2024-11-26 21:03:56.731548] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:06.004 [2024-11-26 21:03:56.731581] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:23:06.004 [2024-11-26 21:03:56.731746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:06.004 [2024-11-26 21:03:56.731779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4f0e0 with addr=10.0.0.2, port=4420 00:23:06.004 [2024-11-26 21:03:56.731806] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4f0e0 is same with the state(6) to be set 00:23:06.004 [2024-11-26 21:03:56.731841] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4f0e0 (9): Bad file descriptor 00:23:06.004 [2024-11-26 21:03:56.731891] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:06.004 [2024-11-26 21:03:56.731918] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:06.004 [2024-11-26 21:03:56.731943] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:06.004 [2024-11-26 21:03:56.731965] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:06.004 [2024-11-26 21:03:56.732001] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:06.004 [2024-11-26 21:03:56.732016] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:06.004 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.004 [2024-11-26 21:03:56.741614] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:06.004 [2024-11-26 21:03:56.741636] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:23:06.004 [2024-11-26 21:03:56.741645] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:06.004 [2024-11-26 21:03:56.741652] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:06.004 [2024-11-26 21:03:56.741703] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:06.004 [2024-11-26 21:03:56.741894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:06.004 [2024-11-26 21:03:56.741925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4f0e0 with addr=10.0.0.2, port=4420 00:23:06.004 [2024-11-26 21:03:56.741952] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4f0e0 is same with the state(6) to be set 00:23:06.004 [2024-11-26 21:03:56.741986] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4f0e0 (9): Bad file descriptor 00:23:06.004 [2024-11-26 21:03:56.742034] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:06.004 [2024-11-26 21:03:56.742060] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:06.004 [2024-11-26 21:03:56.742085] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:06.004 [2024-11-26 21:03:56.742105] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:06.004 [2024-11-26 21:03:56.742135] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:06.004 [2024-11-26 21:03:56.742149] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:23:06.004 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:06.004 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:06.004 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:06.004 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:06.004 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:06.004 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:06.004 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:23:06.004 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:06.004 [2024-11-26 21:03:56.750050] bdev_nvme.c:7271:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:06.004 [2024-11-26 21:03:56.750082] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:06.004 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:06.004 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:06.004 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.004 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:06.004 21:03:56 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:06.004 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:06.004 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.005 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:23:06.005 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:06.005 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:23:06.005 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:06.005 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:06.005 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:06.005 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:06.005 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:06.005 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:06.005 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:06.005 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:06.005 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:06.005 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.005 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:06.005 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.005 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:06.005 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:06.005 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:06.005 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:06.005 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:23:06.005 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.005 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:06.005 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.005 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:23:06.005 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:23:06.005 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:06.005 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:06.005 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:23:06.005 21:03:56 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:06.005 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:06.005 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:06.005 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.005 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:06.005 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:06.005 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:06.005 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.005 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:23:06.005 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:06.005 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:23:06.005 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:23:06.005 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:06.005 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:06.005 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:23:06.005 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:06.005 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:06.005 
21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:06.005 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.005 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:06.005 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:06.005 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:06.005 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.005 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:23:06.005 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:06.005 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:23:06.005 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:23:06.005 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:06.005 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:06.005 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:06.005 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:06.005 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:06.005 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:06.005 21:03:56 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:06.005 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:06.005 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.005 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:06.005 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.264 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:23:06.264 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:23:06.264 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:06.264 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:06.264 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:06.264 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.264 21:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:07.197 [2024-11-26 21:03:58.018531] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:07.197 [2024-11-26 21:03:58.018569] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:07.197 [2024-11-26 21:03:58.018597] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:07.455 [2024-11-26 21:03:58.146021] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:23:07.713 [2024-11-26 21:03:58.409440] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:23:07.713 [2024-11-26 21:03:58.410343] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x1e51110:1 started. 00:23:07.713 [2024-11-26 21:03:58.412593] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:07.713 [2024-11-26 21:03:58.412631] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:07.713 21:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.713 21:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:07.713 21:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:23:07.713 21:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:07.713 21:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:07.713 21:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:07.713 21:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:07.713 21:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:07.713 21:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:07.713 21:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.713 21:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:07.713 request: 00:23:07.713 { 00:23:07.713 "name": "nvme", 00:23:07.713 "trtype": "tcp", 00:23:07.713 "traddr": "10.0.0.2", 00:23:07.713 "adrfam": "ipv4", 00:23:07.713 "trsvcid": "8009", 00:23:07.714 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:07.714 "wait_for_attach": true, 00:23:07.714 "method": "bdev_nvme_start_discovery", 00:23:07.714 "req_id": 1 00:23:07.714 } 00:23:07.714 Got JSON-RPC error response 00:23:07.714 response: 00:23:07.714 { 00:23:07.714 "code": -17, 00:23:07.714 "message": "File exists" 00:23:07.714 } 00:23:07.714 21:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:07.714 21:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:23:07.714 21:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:07.714 21:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:07.714 21:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:07.714 21:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:23:07.714 21:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:07.714 21:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:07.714 21:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.714 21:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:23:07.714 21:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:07.714 21:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:07.714 21:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.714 [2024-11-26 21:03:58.456469] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1e51110 was disconnected and freed. delete nvme_qpair. 00:23:07.714 21:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:23:07.714 21:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:23:07.714 21:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:07.714 21:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:07.714 21:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.714 21:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:07.714 21:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:07.714 21:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:07.714 21:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.714 21:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:07.714 21:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:07.714 21:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:23:07.714 21:03:58 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:07.714 21:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:07.714 21:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:07.714 21:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:07.714 21:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:07.714 21:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:07.714 21:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.714 21:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:07.714 request: 00:23:07.714 { 00:23:07.714 "name": "nvme_second", 00:23:07.714 "trtype": "tcp", 00:23:07.714 "traddr": "10.0.0.2", 00:23:07.714 "adrfam": "ipv4", 00:23:07.714 "trsvcid": "8009", 00:23:07.714 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:07.714 "wait_for_attach": true, 00:23:07.714 "method": "bdev_nvme_start_discovery", 00:23:07.714 "req_id": 1 00:23:07.714 } 00:23:07.714 Got JSON-RPC error response 00:23:07.714 response: 00:23:07.714 { 00:23:07.714 "code": -17, 00:23:07.714 "message": "File exists" 00:23:07.714 } 00:23:07.714 21:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:07.714 21:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:23:07.714 21:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:07.714 21:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:07.714 21:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:07.714 21:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:23:07.714 21:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:07.714 21:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.714 21:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:07.714 21:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:07.714 21:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:07.714 21:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:07.714 21:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.714 21:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:23:07.714 21:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:23:07.714 21:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:07.714 21:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.714 21:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:07.714 21:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:07.714 21:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:07.714 21:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # xargs 00:23:07.714 21:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.714 21:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:07.714 21:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:07.714 21:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:23:07.714 21:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:07.714 21:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:07.714 21:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:07.714 21:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:07.714 21:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:07.714 21:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:07.714 21:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.714 21:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:09.086 [2024-11-26 21:03:59.628148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.086 [2024-11-26 21:03:59.628229] 
nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4eaa0 with addr=10.0.0.2, port=8010 00:23:09.086 [2024-11-26 21:03:59.628280] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:09.086 [2024-11-26 21:03:59.628308] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:09.086 [2024-11-26 21:03:59.628328] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:10.017 [2024-11-26 21:04:00.630535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:10.017 [2024-11-26 21:04:00.630611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e876f0 with addr=10.0.0.2, port=8010 00:23:10.017 [2024-11-26 21:04:00.630657] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:10.017 [2024-11-26 21:04:00.630681] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:10.017 [2024-11-26 21:04:00.630730] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:10.948 [2024-11-26 21:04:01.632707] bdev_nvme.c:7527:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:23:10.948 request: 00:23:10.948 { 00:23:10.948 "name": "nvme_second", 00:23:10.948 "trtype": "tcp", 00:23:10.948 "traddr": "10.0.0.2", 00:23:10.948 "adrfam": "ipv4", 00:23:10.948 "trsvcid": "8010", 00:23:10.948 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:10.948 "wait_for_attach": false, 00:23:10.948 "attach_timeout_ms": 3000, 00:23:10.948 "method": "bdev_nvme_start_discovery", 00:23:10.948 "req_id": 1 00:23:10.948 } 00:23:10.948 Got JSON-RPC error response 00:23:10.948 response: 00:23:10.948 { 00:23:10.948 "code": -110, 00:23:10.948 "message": "Connection timed out" 00:23:10.948 } 00:23:10.948 21:04:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # 
[[ 1 == 0 ]] 00:23:10.948 21:04:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:23:10.948 21:04:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:10.948 21:04:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:10.948 21:04:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:10.948 21:04:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:23:10.948 21:04:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:10.948 21:04:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:10.948 21:04:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.948 21:04:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:10.948 21:04:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:10.948 21:04:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:10.948 21:04:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.948 21:04:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:23:10.948 21:04:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:23:10.948 21:04:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 4048683 00:23:10.948 21:04:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:23:10.948 21:04:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:10.948 21:04:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:23:10.948 21:04:01 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:10.948 21:04:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:23:10.948 21:04:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:10.948 21:04:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:10.948 rmmod nvme_tcp 00:23:10.948 rmmod nvme_fabrics 00:23:10.948 rmmod nvme_keyring 00:23:10.948 21:04:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:10.948 21:04:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:23:10.948 21:04:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:23:10.948 21:04:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 4048544 ']' 00:23:10.948 21:04:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 4048544 00:23:10.948 21:04:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 4048544 ']' 00:23:10.948 21:04:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 4048544 00:23:10.948 21:04:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:23:10.948 21:04:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:10.948 21:04:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4048544 00:23:10.948 21:04:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:10.948 21:04:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:10.948 21:04:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4048544' 
00:23:10.948 killing process with pid 4048544 00:23:10.948 21:04:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 4048544 00:23:10.948 21:04:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 4048544 00:23:11.206 21:04:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:11.206 21:04:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:11.206 21:04:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:11.206 21:04:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:23:11.206 21:04:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:23:11.206 21:04:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:11.206 21:04:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:23:11.206 21:04:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:11.206 21:04:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:11.206 21:04:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:11.206 21:04:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:11.206 21:04:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:13.737 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:13.737 00:23:13.737 real 0m13.517s 00:23:13.737 user 0m19.507s 00:23:13.737 sys 0m2.822s 00:23:13.737 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:13.737 21:04:04 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.737 ************************************ 00:23:13.737 END TEST nvmf_host_discovery 00:23:13.737 ************************************ 00:23:13.737 21:04:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:13.737 21:04:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:13.737 21:04:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:13.737 21:04:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.737 ************************************ 00:23:13.737 START TEST nvmf_host_multipath_status 00:23:13.737 ************************************ 00:23:13.737 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:13.737 * Looking for test storage... 
00:23:13.737 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:13.737 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:13.737 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:23:13.737 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:13.737 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:13.737 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:13.737 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:13.737 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:13.737 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:23:13.737 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:23:13.737 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:23:13.737 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:23:13.737 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:23:13.737 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:23:13.737 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:23:13.737 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:13.737 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:23:13.737 21:04:04 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:23:13.737 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:13.737 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:13.737 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:23:13.737 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:23:13.737 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:13.737 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:23:13.737 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:23:13.737 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:23:13.737 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:23:13.737 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:13.737 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:23:13.737 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:23:13.737 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:13.737 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:13.737 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:23:13.737 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:13.737 21:04:04 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:13.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:13.737 --rc genhtml_branch_coverage=1 00:23:13.737 --rc genhtml_function_coverage=1 00:23:13.737 --rc genhtml_legend=1 00:23:13.737 --rc geninfo_all_blocks=1 00:23:13.737 --rc geninfo_unexecuted_blocks=1 00:23:13.737 00:23:13.737 ' 00:23:13.737 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:13.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:13.737 --rc genhtml_branch_coverage=1 00:23:13.737 --rc genhtml_function_coverage=1 00:23:13.737 --rc genhtml_legend=1 00:23:13.737 --rc geninfo_all_blocks=1 00:23:13.737 --rc geninfo_unexecuted_blocks=1 00:23:13.737 00:23:13.737 ' 00:23:13.737 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:13.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:13.737 --rc genhtml_branch_coverage=1 00:23:13.737 --rc genhtml_function_coverage=1 00:23:13.737 --rc genhtml_legend=1 00:23:13.737 --rc geninfo_all_blocks=1 00:23:13.737 --rc geninfo_unexecuted_blocks=1 00:23:13.737 00:23:13.737 ' 00:23:13.737 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:13.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:13.737 --rc genhtml_branch_coverage=1 00:23:13.737 --rc genhtml_function_coverage=1 00:23:13.737 --rc genhtml_legend=1 00:23:13.737 --rc geninfo_all_blocks=1 00:23:13.737 --rc geninfo_unexecuted_blocks=1 00:23:13.737 00:23:13.737 ' 00:23:13.737 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:13.737 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:23:13.737 
21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:13.737 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:13.737 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:13.737 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:13.737 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:13.737 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:13.737 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:13.737 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:13.737 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:13.737 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:13.737 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:13.737 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:13.737 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:13.737 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:13.737 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:13.737 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:23:13.737 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:13.737 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:23:13.737 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:13.737 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:13.737 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:13.737 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.737 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.738 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.738 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:23:13.738 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.738 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:23:13.738 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:13.738 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:13.738 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:13.738 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:13.738 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:13.738 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:13.738 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:13.738 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:13.738 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:13.738 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:13.738 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:23:13.738 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:13.738 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:13.738 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:23:13.738 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:13.738 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:13.738 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:23:13.738 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:13.738 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:13.738 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:13.738 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:13.738 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:13.738 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:13.738 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:13.738 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:13.738 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:13.738 21:04:04 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:13.738 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:23:13.738 21:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:15.640 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:15.640 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:23:15.640 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:15.640 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:15.640 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:15.640 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:15.640 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:15.640 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:23:15.640 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:15.640 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:23:15.640 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:23:15.640 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:23:15.640 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:23:15.640 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:23:15.640 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
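The arrays declared above (`e810`, `x722`, `mlx`, `pci_devs`, `net_devs`) are filled by `gather_supported_nvmf_pci_devs` matching PCI vendor:device IDs (`intel=0x8086`, `mellanox=0x15b3`, as set in the script). A plain-bash sketch of that bucketing, with an abbreviated ID list; `classify_nic` is an illustrative stand-in, not SPDK code:

```shell
#!/usr/bin/env bash
# Bucket a NIC by PCI vendor:device ID the way the nvmf common script
# sorts devices into its e810 / x722 / mlx arrays. ID list abbreviated;
# this helper is hypothetical, for illustration only.
classify_nic() {
    local vendor=$1 device=$2
    case "$vendor:$device" in
        0x8086:0x1592|0x8086:0x159b) echo e810 ;;    # Intel E810 family
        0x8086:0x37d2)               echo x722 ;;    # Intel X722
        0x15b3:*)                    echo mlx ;;     # Mellanox ConnectX
        *)                           echo unknown ;;
    esac
}

# Both ports found later in this log (0000:0a:00.0 and .1) report
# 0x8086 - 0x159b, so they land in the e810 bucket (ice driver,
# net devices cvl_0_0 / cvl_0_1).
classify_nic 0x8086 0x159b   # -> e810
```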
00:23:15.640 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:15.640 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:15.640 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:15.640 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:15.641 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:15.641 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:15.641 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:15.641 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:15.641 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:15.641 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:15.641 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:15.641 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:15.641 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:15.641 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:15.641 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:23:15.641 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:15.641 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:15.641 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:15.641 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:15.641 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:15.641 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:15.641 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:15.641 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:15.641 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:15.641 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:15.641 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:15.641 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:15.641 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:15.641 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:15.641 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:15.641 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:15.641 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:15.641 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:15.641 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:15.641 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:15.641 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:15.641 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:15.641 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:15.641 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:15.641 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:15.641 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:15.641 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:15.641 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:15.641 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:15.641 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:15.641 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:15.641 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:15.641 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:15.641 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:15.641 21:04:06 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:15.641 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:15.641 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:15.641 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:15.641 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:15.641 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:15.641 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:15.641 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:15.641 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:15.641 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:23:15.641 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:15.641 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:15.641 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:15.641 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:15.641 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:15.641 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:15.641 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:15.641 21:04:06 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:15.641 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:15.641 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:15.641 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:15.641 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:15.641 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:15.641 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:15.641 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:15.641 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:15.641 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:15.641 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:15.641 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:15.641 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:15.641 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:15.641 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:15.641 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:15.641 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:15.642 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:15.642 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:15.642 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:15.642 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:23:15.642 00:23:15.642 --- 10.0.0.2 ping statistics --- 00:23:15.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:15.642 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:23:15.642 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:15.642 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:15.642 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:23:15.642 00:23:15.642 --- 10.0.0.1 ping statistics --- 00:23:15.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:15.642 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:23:15.642 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:15.642 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:23:15.642 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:15.642 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:15.642 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:15.642 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:15.642 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:15.642 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:15.642 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:15.642 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:23:15.642 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:15.642 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:15.642 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:15.642 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=4051722 00:23:15.642 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:15.642 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 4051722 00:23:15.642 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 4051722 ']' 00:23:15.642 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:15.642 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:15.642 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:15.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:15.642 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:15.642 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:15.642 [2024-11-26 21:04:06.513215] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:23:15.642 [2024-11-26 21:04:06.513287] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:15.901 [2024-11-26 21:04:06.588503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:15.901 [2024-11-26 21:04:06.646775] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:15.901 [2024-11-26 21:04:06.646848] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:15.901 [2024-11-26 21:04:06.646877] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:15.901 [2024-11-26 21:04:06.646888] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:15.901 [2024-11-26 21:04:06.646898] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:15.901 [2024-11-26 21:04:06.648572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:15.901 [2024-11-26 21:04:06.648579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:15.901 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:15.901 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:23:15.901 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:15.901 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:15.901 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:15.901 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:15.901 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=4051722 00:23:15.901 21:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:16.159 [2024-11-26 21:04:07.093853] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:16.417 21:04:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:23:16.675 Malloc0 00:23:16.675 21:04:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:23:16.933 21:04:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:17.191 21:04:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:17.449 [2024-11-26 21:04:08.267887] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:17.449 21:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:17.707 [2024-11-26 21:04:08.524495] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:17.707 21:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=4052007 00:23:17.707 21:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:17.707 21:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:17.707 21:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 4052007 /var/tmp/bdevperf.sock 00:23:17.707 21:04:08 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 4052007 ']' 00:23:17.707 21:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:17.707 21:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:17.708 21:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:17.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:17.708 21:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:17.708 21:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:17.966 21:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:17.966 21:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:23:17.966 21:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:18.224 21:04:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:18.789 Nvme0n1 00:23:18.789 21:04:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:19.354 Nvme0n1 00:23:19.354 21:04:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:23:19.354 21:04:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:21.883 21:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:23:21.883 21:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:23:21.883 21:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:22.142 21:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:23:23.078 21:04:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:23:23.078 21:04:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:23.078 21:04:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:23.078 21:04:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:23.337 21:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:23.337 21:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:23.337 21:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:23.337 21:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:23.595 21:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:23.595 21:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:23.595 21:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:23.595 21:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:23.854 21:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:23.854 21:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:23.854 21:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:23.854 21:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:24.112 21:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:24.112 21:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:24.112 21:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:24.112 21:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:24.371 21:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:24.371 21:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:24.371 21:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:24.371 21:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:24.937 21:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:24.937 21:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:23:24.937 21:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:24.937 21:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:25.502 21:04:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:23:26.434 21:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:23:26.434 21:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:26.434 21:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:26.434 21:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:26.692 21:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:26.692 21:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:26.692 21:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:26.692 21:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:26.949 21:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:26.949 21:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:26.949 21:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:26.949 21:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:27.207 21:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:27.207 21:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:27.207 21:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:27.207 21:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:27.466 21:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:27.466 21:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:27.466 21:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:27.466 21:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:27.724 21:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:27.724 21:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:27.724 21:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:27.724 21:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:28.004 21:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:28.004 21:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:23:28.004 21:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:28.261 21:04:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:23:28.518 21:04:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:23:29.450 21:04:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:23:29.450 21:04:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:29.708 21:04:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:29.708 21:04:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:29.964 21:04:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:29.964 21:04:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:29.964 21:04:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:29.965 21:04:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:30.222 21:04:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:30.222 21:04:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:30.222 21:04:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:30.222 21:04:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:30.479 21:04:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:30.479 21:04:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:30.479 21:04:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:30.479 21:04:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:30.736 21:04:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:30.736 21:04:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:30.736 21:04:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:30.736 21:04:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:30.994 21:04:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:30.994 21:04:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:30.994 21:04:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:30.994 21:04:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:31.251 21:04:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:31.251 21:04:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:23:31.251 21:04:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:31.509 21:04:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:31.767 21:04:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:23:33.143 21:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:23:33.143 21:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:33.143 21:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:33.143 21:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:33.143 21:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:33.143 21:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:33.143 21:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:33.143 21:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:33.403 21:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:33.403 21:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:33.403 21:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:33.403 21:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:33.684 21:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:33.684 21:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:33.684 21:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:33.684 21:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:33.960 21:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:33.960 21:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:33.960 21:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:33.960 21:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:34.224 21:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:34.224 21:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:34.224 21:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:34.224 21:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:34.481 21:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:34.481 21:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:23:34.481 21:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:34.739 21:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:34.997 21:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:23:36.369 21:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:23:36.369 21:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:36.369 21:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:36.369 21:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:36.369 21:04:27 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:36.369 21:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:36.369 21:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:36.369 21:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:36.626 21:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:36.626 21:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:36.627 21:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:36.627 21:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:36.884 21:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:36.884 21:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:36.884 21:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:36.884 21:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:37.156 
21:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:37.156 21:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:37.156 21:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:37.156 21:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:37.414 21:04:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:37.414 21:04:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:37.414 21:04:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:37.414 21:04:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:37.675 21:04:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:37.675 21:04:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:23:37.675 21:04:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:37.933 21:04:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:38.191 21:04:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:23:39.564 21:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:23:39.564 21:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:39.564 21:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:39.564 21:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:39.564 21:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:39.564 21:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:39.564 21:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:39.564 21:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:39.822 21:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:39.822 21:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:39.822 21:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:39.822 21:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:40.080 21:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:40.080 21:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:40.080 21:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:40.080 21:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:40.337 21:04:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:40.337 21:04:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:40.337 21:04:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:40.338 21:04:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:40.595 21:04:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:40.595 21:04:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:40.595 21:04:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:40.595 21:04:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:40.853 21:04:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:40.853 21:04:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:23:41.111 21:04:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:23:41.111 21:04:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:23:41.368 21:04:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:41.935 21:04:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:23:42.868 21:04:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:23:42.868 21:04:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:42.868 21:04:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:23:42.868 21:04:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:43.126 21:04:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:43.126 21:04:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:43.126 21:04:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:43.126 21:04:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:43.385 21:04:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:43.385 21:04:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:43.385 21:04:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:43.385 21:04:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:43.643 21:04:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:43.643 21:04:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:43.643 21:04:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:23:43.643 21:04:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:43.901 21:04:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:43.901 21:04:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:43.901 21:04:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:43.901 21:04:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:44.159 21:04:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:44.159 21:04:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:44.159 21:04:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:44.159 21:04:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:44.417 21:04:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:44.417 21:04:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:23:44.417 21:04:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:44.675 21:04:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:44.933 21:04:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:23:45.867 21:04:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:23:45.867 21:04:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:45.867 21:04:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:45.867 21:04:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:46.433 21:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:46.433 21:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:46.433 21:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:46.433 21:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:46.433 21:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:46.433 21:04:37 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:46.433 21:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:46.433 21:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:46.693 21:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:46.693 21:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:46.693 21:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:46.693 21:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:47.259 21:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:47.259 21:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:47.259 21:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:47.259 21:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:47.259 21:04:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:47.259 
21:04:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:47.259 21:04:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:47.259 21:04:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:47.517 21:04:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:47.517 21:04:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:23:47.517 21:04:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:48.082 21:04:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:23:48.083 21:04:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:23:49.458 21:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:23:49.458 21:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:49.458 21:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:49.458 21:04:40 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:49.458 21:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:49.458 21:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:49.459 21:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:49.459 21:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:49.717 21:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:49.717 21:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:49.717 21:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:49.717 21:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:49.975 21:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:49.976 21:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:49.976 21:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:49.976 21:04:40 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:50.234 21:04:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:50.234 21:04:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:50.234 21:04:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:50.234 21:04:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:50.492 21:04:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:50.492 21:04:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:50.492 21:04:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:50.492 21:04:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:50.750 21:04:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:50.750 21:04:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:23:50.750 21:04:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:51.317 21:04:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:51.317 21:04:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:23:52.693 21:04:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:23:52.693 21:04:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:52.693 21:04:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:52.693 21:04:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:52.693 21:04:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:52.693 21:04:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:52.693 21:04:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:52.693 21:04:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:52.952 21:04:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:52.952 21:04:43 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:52.952 21:04:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:52.952 21:04:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:53.210 21:04:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:53.210 21:04:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:53.210 21:04:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:53.210 21:04:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:53.469 21:04:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:53.469 21:04:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:53.469 21:04:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:53.469 21:04:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:53.727 21:04:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:53.727 
21:04:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:53.728 21:04:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:53.728 21:04:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:53.986 21:04:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:53.986 21:04:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 4052007 00:23:53.986 21:04:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 4052007 ']' 00:23:53.986 21:04:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 4052007 00:23:53.986 21:04:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:23:53.986 21:04:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:53.986 21:04:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4052007 00:23:54.251 21:04:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:54.251 21:04:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:54.251 21:04:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4052007' 00:23:54.251 killing process with pid 4052007 00:23:54.251 21:04:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 4052007 00:23:54.251 
21:04:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 4052007 00:23:54.251 { 00:23:54.251 "results": [ 00:23:54.251 { 00:23:54.251 "job": "Nvme0n1", 00:23:54.251 "core_mask": "0x4", 00:23:54.251 "workload": "verify", 00:23:54.251 "status": "terminated", 00:23:54.251 "verify_range": { 00:23:54.251 "start": 0, 00:23:54.251 "length": 16384 00:23:54.251 }, 00:23:54.251 "queue_depth": 128, 00:23:54.251 "io_size": 4096, 00:23:54.251 "runtime": 34.470199, 00:23:54.251 "iops": 7854.117697434819, 00:23:54.251 "mibps": 30.680147255604762, 00:23:54.251 "io_failed": 0, 00:23:54.251 "io_timeout": 0, 00:23:54.251 "avg_latency_us": 16271.316447236317, 00:23:54.251 "min_latency_us": 183.56148148148148, 00:23:54.251 "max_latency_us": 4026531.84 00:23:54.251 } 00:23:54.251 ], 00:23:54.251 "core_count": 1 00:23:54.251 } 00:23:54.252 21:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 4052007 00:23:54.252 21:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:54.252 [2024-11-26 21:04:08.591174] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:23:54.252 [2024-11-26 21:04:08.591245] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4052007 ] 00:23:54.252 [2024-11-26 21:04:08.658149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:54.252 [2024-11-26 21:04:08.719563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:54.252 Running I/O for 90 seconds... 
00:23:54.252 8367.00 IOPS, 32.68 MiB/s [2024-11-26T20:04:45.190Z] 8413.00 IOPS, 32.86 MiB/s [2024-11-26T20:04:45.190Z] 8425.33 IOPS, 32.91 MiB/s [2024-11-26T20:04:45.190Z] 8440.00 IOPS, 32.97 MiB/s [2024-11-26T20:04:45.190Z] 8418.40 IOPS, 32.88 MiB/s [2024-11-26T20:04:45.190Z] 8412.00 IOPS, 32.86 MiB/s [2024-11-26T20:04:45.190Z] 8399.43 IOPS, 32.81 MiB/s [2024-11-26T20:04:45.190Z] 8417.75 IOPS, 32.88 MiB/s [2024-11-26T20:04:45.190Z] 8408.78 IOPS, 32.85 MiB/s [2024-11-26T20:04:45.190Z] 8405.00 IOPS, 32.83 MiB/s [2024-11-26T20:04:45.190Z] 8398.82 IOPS, 32.81 MiB/s [2024-11-26T20:04:45.190Z] 8396.33 IOPS, 32.80 MiB/s [2024-11-26T20:04:45.190Z] 8403.23 IOPS, 32.83 MiB/s [2024-11-26T20:04:45.190Z] 8403.00 IOPS, 32.82 MiB/s [2024-11-26T20:04:45.190Z] 8391.73 IOPS, 32.78 MiB/s [2024-11-26T20:04:45.190Z] [2024-11-26 21:04:25.569991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.252 [2024-11-26 21:04:25.570061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:54.252 [2024-11-26 21:04:25.570126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:98456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.252 [2024-11-26 21:04:25.570148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:54.252 [2024-11-26 21:04:25.570172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:98464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.252 [2024-11-26 21:04:25.570190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:54.252 [2024-11-26 21:04:25.570212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 
lba:98472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.252 [2024-11-26 21:04:25.570228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:54.252 [2024-11-26 21:04:25.570250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:98480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.252 [2024-11-26 21:04:25.570266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:54.252 [2024-11-26 21:04:25.570288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:98488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.252 [2024-11-26 21:04:25.570319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:54.252 [2024-11-26 21:04:25.570342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:98496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.252 [2024-11-26 21:04:25.570358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:54.252 [2024-11-26 21:04:25.570395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:98504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.252 [2024-11-26 21:04:25.570411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:54.252 [2024-11-26 21:04:25.571536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:98512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.252 [2024-11-26 21:04:25.571560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:99 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:54.252 [2024-11-26 21:04:25.571613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:98520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.252 [2024-11-26 21:04:25.571632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.252 [2024-11-26 21:04:25.571671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:98528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.252 [2024-11-26 21:04:25.571697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:54.252 [2024-11-26 21:04:25.571723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:98536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.252 [2024-11-26 21:04:25.571740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:54.252 [2024-11-26 21:04:25.571762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:98544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.252 [2024-11-26 21:04:25.571778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:54.252 [2024-11-26 21:04:25.571801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:98552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.252 [2024-11-26 21:04:25.571817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:54.252 [2024-11-26 21:04:25.571839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:97560 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:54.252 [2024-11-26 21:04:25.571855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:54.252 [2024-11-26 21:04:25.571878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:97568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.252 [2024-11-26 21:04:25.571894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:54.252 [2024-11-26 21:04:25.571916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:97576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.252 [2024-11-26 21:04:25.571932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:54.252 [2024-11-26 21:04:25.571970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:97584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.252 [2024-11-26 21:04:25.571986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:54.252 [2024-11-26 21:04:25.572022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:97592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.252 [2024-11-26 21:04:25.572037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:54.252 [2024-11-26 21:04:25.572058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:97600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.252 [2024-11-26 21:04:25.572073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 
sqhd:000a p:0 m:0 dnr:0 00:23:54.252 [2024-11-26 21:04:25.572094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:97608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.252 [2024-11-26 21:04:25.572108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:54.252 [2024-11-26 21:04:25.572129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:97616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.252 [2024-11-26 21:04:25.572149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:54.252 [2024-11-26 21:04:25.572170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:97624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.252 [2024-11-26 21:04:25.572185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:54.252 [2024-11-26 21:04:25.572206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:97632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.252 [2024-11-26 21:04:25.572221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:54.252 [2024-11-26 21:04:25.572241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:97640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.253 [2024-11-26 21:04:25.572256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:54.253 [2024-11-26 21:04:25.572327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:97648 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:54.253 [2024-11-26 21:04:25.572346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:54.253 [2024-11-26 21:04:25.572371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:97656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.253 [2024-11-26 21:04:25.572387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:54.253 [2024-11-26 21:04:25.572409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:97664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.253 [2024-11-26 21:04:25.572424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:54.253 [2024-11-26 21:04:25.572446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:97672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.253 [2024-11-26 21:04:25.572461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:54.253 [2024-11-26 21:04:25.572483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:98560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.253 [2024-11-26 21:04:25.572498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:54.253 [2024-11-26 21:04:25.572519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:97680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.253 [2024-11-26 21:04:25.572534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0015 p:0 m:0 
dnr:0 00:23:54.253 [2024-11-26 21:04:25.572555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:97688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.253 [2024-11-26 21:04:25.572586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:54.253 [2024-11-26 21:04:25.572609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:97696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.253 [2024-11-26 21:04:25.572640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:54.253 [2024-11-26 21:04:25.572665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:97704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.253 [2024-11-26 21:04:25.572693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:54.253 [2024-11-26 21:04:25.572721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:97712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.253 [2024-11-26 21:04:25.572737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:54.253 [2024-11-26 21:04:25.572760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:97720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.253 [2024-11-26 21:04:25.572776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:54.253 [2024-11-26 21:04:25.572799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:97728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:54.253 [2024-11-26 21:04:25.572816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:54.253 [2024-11-26 21:04:25.572838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:97736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.253 [2024-11-26 21:04:25.572854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:54.253 [2024-11-26 21:04:25.572877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:97744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.253 [2024-11-26 21:04:25.572892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:54.253 [2024-11-26 21:04:25.572916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:97752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.253 [2024-11-26 21:04:25.572947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:54.253 [2024-11-26 21:04:25.572970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:97760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.253 [2024-11-26 21:04:25.572985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:54.253 [2024-11-26 21:04:25.573022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:97768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.253 [2024-11-26 21:04:25.573038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:54.253 
[2024-11-26 21:04:25.573060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:97776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.253 [2024-11-26 21:04:25.573075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:54.253 [2024-11-26 21:04:25.573096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:97784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.253 [2024-11-26 21:04:25.573110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:54.253 [2024-11-26 21:04:25.573131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:97792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.253 [2024-11-26 21:04:25.573146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:54.253 [2024-11-26 21:04:25.573168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:97800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.253 [2024-11-26 21:04:25.573182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:54.253 [2024-11-26 21:04:25.573208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:97808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.253 [2024-11-26 21:04:25.573223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:54.253 [2024-11-26 21:04:25.573245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:97816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.253 [2024-11-26 
21:04:25.573260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:54.253 [2024-11-26 21:04:25.573281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:97824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.253 [2024-11-26 21:04:25.573296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:54.253 [2024-11-26 21:04:25.573317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:97832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.253 [2024-11-26 21:04:25.573332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:54.253 [2024-11-26 21:04:25.573354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:97840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.253 [2024-11-26 21:04:25.573368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:54.253 [2024-11-26 21:04:25.573390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:97848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.253 [2024-11-26 21:04:25.573404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:54.253 [2024-11-26 21:04:25.573442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:97856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.253 [2024-11-26 21:04:25.573458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:54.253 [2024-11-26 
21:04:25.573481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:97864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.253 [2024-11-26 21:04:25.573496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:54.253 [2024-11-26 21:04:25.573517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:97872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.253 [2024-11-26 21:04:25.573533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:54.254 [2024-11-26 21:04:25.573555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:97880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.254 [2024-11-26 21:04:25.573570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:54.254 [2024-11-26 21:04:25.573592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:97888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.254 [2024-11-26 21:04:25.573607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:54.254 [2024-11-26 21:04:25.573630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:97896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.254 [2024-11-26 21:04:25.573645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:54.254 [2024-11-26 21:04:25.573671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:97904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.254 [2024-11-26 
21:04:25.573694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:54.254 [2024-11-26 21:04:25.573719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:97912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.254 [2024-11-26 21:04:25.573734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:54.254 [2024-11-26 21:04:25.573757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:97920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.254 [2024-11-26 21:04:25.573787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:54.254 [2024-11-26 21:04:25.573810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:97928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.254 [2024-11-26 21:04:25.573825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:54.254 [2024-11-26 21:04:25.573846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:97936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.254 [2024-11-26 21:04:25.573861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:54.254 [2024-11-26 21:04:25.573882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:97944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.254 [2024-11-26 21:04:25.573897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:54.254 [2024-11-26 
21:04:25.573919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:97952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.254 [2024-11-26 21:04:25.573934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:54.254 [2024-11-26 21:04:25.573955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:97960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.254 [2024-11-26 21:04:25.573970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:54.254 [2024-11-26 21:04:25.573992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:97968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.254 [2024-11-26 21:04:25.574007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:54.254 [2024-11-26 21:04:25.574028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:97976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.254 [2024-11-26 21:04:25.574043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:54.254 [2024-11-26 21:04:25.574065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:97984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.254 [2024-11-26 21:04:25.574079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:54.254 [2024-11-26 21:04:25.574101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:97992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.254 [2024-11-26 
21:04:25.574116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:54.254 [2024-11-26 21:04:25.574142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:98000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.254 [2024-11-26 21:04:25.574158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:54.254 [2024-11-26 21:04:25.574180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:98008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.254 [2024-11-26 21:04:25.574196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:54.254 [2024-11-26 21:04:25.574332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:98016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.254 [2024-11-26 21:04:25.574368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:54.254 [2024-11-26 21:04:25.574398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:98024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.254 [2024-11-26 21:04:25.574415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:54.254 [2024-11-26 21:04:25.574440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:98032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.254 [2024-11-26 21:04:25.574455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:54.254 [2024-11-26 
21:04:25.574481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:98040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.254 [2024-11-26 21:04:25.574496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:54.254 [2024-11-26 21:04:25.574521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:98048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.254 [2024-11-26 21:04:25.574536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:54.254 [2024-11-26 21:04:25.574561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:98056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.254 [2024-11-26 21:04:25.574577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:54.254 [2024-11-26 21:04:25.574602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:98064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.254 [2024-11-26 21:04:25.574618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:54.254 [2024-11-26 21:04:25.574642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:98072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.254 [2024-11-26 21:04:25.574672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:54.254 [2024-11-26 21:04:25.574705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:98080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.254 [2024-11-26 
21:04:25.574723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:54.254 [2024-11-26 21:04:25.574747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:98088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.254 [2024-11-26 21:04:25.574763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:54.254 [2024-11-26 21:04:25.574786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:98096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.254 [2024-11-26 21:04:25.574806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:54.254 [2024-11-26 21:04:25.574832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:98104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.254 [2024-11-26 21:04:25.574847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:54.255 [2024-11-26 21:04:25.574871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:98112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.255 [2024-11-26 21:04:25.574886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:54.255 [2024-11-26 21:04:25.574912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:98120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.255 [2024-11-26 21:04:25.574927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:54.255 [2024-11-26 
21:04:25.574952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:98128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.255 [2024-11-26 21:04:25.574966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:54.255 [2024-11-26 21:04:25.574990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:98136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.255 [2024-11-26 21:04:25.575005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:54.255 [2024-11-26 21:04:25.575030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:98144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.255 [2024-11-26 21:04:25.575045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:54.255 [2024-11-26 21:04:25.575069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:98152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.255 [2024-11-26 21:04:25.575083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:54.255 [2024-11-26 21:04:25.575107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:98160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.255 [2024-11-26 21:04:25.575122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:54.255 [2024-11-26 21:04:25.575146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:98168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.255 [2024-11-26 
21:04:25.575161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:54.255 [2024-11-26 21:04:25.575185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:98176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.255 [2024-11-26 21:04:25.575200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:54.255 [2024-11-26 21:04:25.575224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:98184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.255 [2024-11-26 21:04:25.575239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:54.255 [2024-11-26 21:04:25.575263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:98192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.255 [2024-11-26 21:04:25.575285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:54.255 [2024-11-26 21:04:25.575310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:98200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.255 [2024-11-26 21:04:25.575326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:54.255 [2024-11-26 21:04:25.575350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:98208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.255 [2024-11-26 21:04:25.575365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:54.255 [2024-11-26 
21:04:25.575389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:98216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.255 [2024-11-26 21:04:25.575404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:54.255 [2024-11-26 21:04:25.575428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:98224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.255 [2024-11-26 21:04:25.575443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:54.255 [2024-11-26 21:04:25.575467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:98232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.255 [2024-11-26 21:04:25.575482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:54.255 [2024-11-26 21:04:25.575506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:98240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.255 [2024-11-26 21:04:25.575521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:54.255 [2024-11-26 21:04:25.575546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:98248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.255 [2024-11-26 21:04:25.575561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:54.255 [2024-11-26 21:04:25.575585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:98256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.255 [2024-11-26 
21:04:25.575600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:54.255 [2024-11-26 21:04:25.575624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:98264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.255 [2024-11-26 21:04:25.575639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:54.255 [2024-11-26 21:04:25.575664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:98272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.255 [2024-11-26 21:04:25.575678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:54.255 [2024-11-26 21:04:25.575710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:98280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.255 [2024-11-26 21:04:25.575726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:54.255 [2024-11-26 21:04:25.575768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:98288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.255 [2024-11-26 21:04:25.575784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:54.255 [2024-11-26 21:04:25.575814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:98296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.255 [2024-11-26 21:04:25.575830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:54.255 [2024-11-26 
21:04:25.575855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:98304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.255 [2024-11-26 21:04:25.575871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:54.255 [2024-11-26 21:04:25.575896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:98312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.255 [2024-11-26 21:04:25.575912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:54.255 [2024-11-26 21:04:25.575937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:98568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.255 [2024-11-26 21:04:25.575953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:54.255 [2024-11-26 21:04:25.575978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:98576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.255 [2024-11-26 21:04:25.575994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:54.255 [2024-11-26 21:04:25.576020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:98320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.255 [2024-11-26 21:04:25.576036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:54.255 [2024-11-26 21:04:25.576061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:98328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.255 [2024-11-26 
21:04:25.576077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:54.256 [2024-11-26 21:04:25.576102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:98336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.256 [2024-11-26 21:04:25.576119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:54.256 [2024-11-26 21:04:25.576144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:98344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.256 [2024-11-26 21:04:25.576159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:54.256 [2024-11-26 21:04:25.576184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:98352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.256 [2024-11-26 21:04:25.576200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:54.256 [2024-11-26 21:04:25.576226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.256 [2024-11-26 21:04:25.576242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:54.256 [2024-11-26 21:04:25.576268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:98368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.256 [2024-11-26 21:04:25.576283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:54.256 [2024-11-26 
21:04:25.576313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:98376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.256 [2024-11-26 21:04:25.576330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:54.256 [2024-11-26 21:04:25.576355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:98384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.256 [2024-11-26 21:04:25.576371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:54.256 [2024-11-26 21:04:25.576396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:98392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.256 [2024-11-26 21:04:25.576412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:54.256 [2024-11-26 21:04:25.576438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:98400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.256 [2024-11-26 21:04:25.576470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:54.256 [2024-11-26 21:04:25.576497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.256 [2024-11-26 21:04:25.576514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:54.256 [2024-11-26 21:04:25.576540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.256 [2024-11-26 21:04:25.576556] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:54.256 [2024-11-26 21:04:25.576582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:98424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.256 [2024-11-26 21:04:25.576598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:54.256 [2024-11-26 21:04:25.576624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:98432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.256 [2024-11-26 21:04:25.576640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:54.256 [2024-11-26 21:04:25.576666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:98440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.256 [2024-11-26 21:04:25.576682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:54.256 7930.19 IOPS, 30.98 MiB/s [2024-11-26T20:04:45.194Z] 7463.71 IOPS, 29.16 MiB/s [2024-11-26T20:04:45.194Z] 7049.06 IOPS, 27.54 MiB/s [2024-11-26T20:04:45.194Z] 6678.05 IOPS, 26.09 MiB/s [2024-11-26T20:04:45.194Z] 6713.05 IOPS, 26.22 MiB/s [2024-11-26T20:04:45.194Z] 6786.48 IOPS, 26.51 MiB/s [2024-11-26T20:04:45.194Z] 6876.73 IOPS, 26.86 MiB/s [2024-11-26T20:04:45.194Z] 7052.96 IOPS, 27.55 MiB/s [2024-11-26T20:04:45.194Z] 7206.62 IOPS, 28.15 MiB/s [2024-11-26T20:04:45.194Z] 7354.48 IOPS, 28.73 MiB/s [2024-11-26T20:04:45.194Z] 7404.42 IOPS, 28.92 MiB/s [2024-11-26T20:04:45.194Z] 7443.15 IOPS, 29.07 MiB/s [2024-11-26T20:04:45.194Z] 7476.75 IOPS, 29.21 MiB/s [2024-11-26T20:04:45.194Z] 7545.00 IOPS, 29.47 MiB/s [2024-11-26T20:04:45.194Z] 7648.10 IOPS, 29.88 
MiB/s [2024-11-26T20:04:45.194Z] 7746.52 IOPS, 30.26 MiB/s [2024-11-26T20:04:45.194Z] [2024-11-26 21:04:42.212938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.256 [2024-11-26 21:04:42.213025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:54.256 [2024-11-26 21:04:42.213276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:16512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.256 [2024-11-26 21:04:42.213312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:54.256 [2024-11-26 21:04:42.213340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:16544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.256 [2024-11-26 21:04:42.213357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:54.256 [2024-11-26 21:04:42.213380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:16656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.256 [2024-11-26 21:04:42.213396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:54.256 [2024-11-26 21:04:42.213417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:16672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.256 [2024-11-26 21:04:42.213433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:54.256 [2024-11-26 21:04:42.213455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 
nsid:1 lba:16688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.256 [2024-11-26 21:04:42.213470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:54.256 [2024-11-26 21:04:42.213492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.256 [2024-11-26 21:04:42.213508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:54.256 [2024-11-26 21:04:42.213530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:16720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.256 [2024-11-26 21:04:42.213546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:54.256 [2024-11-26 21:04:42.213567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:16736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.256 [2024-11-26 21:04:42.213598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:54.256 [2024-11-26 21:04:42.213620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:16752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.256 [2024-11-26 21:04:42.213636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:54.256 [2024-11-26 21:04:42.213672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:16472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.256 [2024-11-26 21:04:42.213695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:74 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:54.256 [2024-11-26 21:04:42.213733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.257 [2024-11-26 21:04:42.213751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:54.257 [2024-11-26 21:04:42.213773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:16536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.257 [2024-11-26 21:04:42.213789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:54.257 [2024-11-26 21:04:42.213811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:16568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.257 [2024-11-26 21:04:42.213826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:54.257 [2024-11-26 21:04:42.215679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.257 [2024-11-26 21:04:42.215710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:54.257 [2024-11-26 21:04:42.215753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:16792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.257 [2024-11-26 21:04:42.215771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:54.257 [2024-11-26 21:04:42.215808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16808 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:23:54.257 [2024-11-26 21:04:42.215825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:54.257 [2024-11-26 21:04:42.215846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:16824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.257 [2024-11-26 21:04:42.215862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:54.257 [2024-11-26 21:04:42.215883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.257 [2024-11-26 21:04:42.215898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:54.257 [2024-11-26 21:04:42.215920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:16856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.257 [2024-11-26 21:04:42.215935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:54.257 [2024-11-26 21:04:42.215956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.257 [2024-11-26 21:04:42.215972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:54.257 [2024-11-26 21:04:42.215993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:16888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.257 [2024-11-26 21:04:42.216008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0065 p:0 
m:0 dnr:0 00:23:54.257 [2024-11-26 21:04:42.216028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.257 [2024-11-26 21:04:42.216044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:54.257 [2024-11-26 21:04:42.216064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:16920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.257 [2024-11-26 21:04:42.216079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:54.257 [2024-11-26 21:04:42.216116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:16936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.257 [2024-11-26 21:04:42.216132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:54.257 [2024-11-26 21:04:42.216153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:16952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.257 [2024-11-26 21:04:42.216168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:54.257 [2024-11-26 21:04:42.216193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:16968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.257 [2024-11-26 21:04:42.216209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:54.257 [2024-11-26 21:04:42.216230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:16984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:54.257 [2024-11-26 21:04:42.216245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:54.257 [2024-11-26 21:04:42.216265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:17000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.257 [2024-11-26 21:04:42.216280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:54.257 [2024-11-26 21:04:42.216300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:17016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.257 [2024-11-26 21:04:42.216315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:54.257 [2024-11-26 21:04:42.216336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:17032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.257 [2024-11-26 21:04:42.216351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:54.257 [2024-11-26 21:04:42.216371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:17048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.257 [2024-11-26 21:04:42.216386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:54.257 [2024-11-26 21:04:42.216406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:17064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.257 [2024-11-26 21:04:42.216421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:54.257 
[2024-11-26 21:04:42.216457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:17080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.257 [2024-11-26 21:04:42.216473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:54.257 [2024-11-26 21:04:42.216494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:17096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.257 [2024-11-26 21:04:42.216509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:54.257 [2024-11-26 21:04:42.216548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:17112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.258 [2024-11-26 21:04:42.216564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:54.258 [2024-11-26 21:04:42.216586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:17128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.258 [2024-11-26 21:04:42.216603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:54.258 [2024-11-26 21:04:42.216625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:17144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.258 [2024-11-26 21:04:42.216641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:54.258 [2024-11-26 21:04:42.216663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:17160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.258 [2024-11-26 
21:04:42.216683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:54.258 [2024-11-26 21:04:42.216717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:17176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.258 [2024-11-26 21:04:42.216734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:54.258 [2024-11-26 21:04:42.216755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.258 [2024-11-26 21:04:42.216771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:54.258 [2024-11-26 21:04:42.216793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:17208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.258 [2024-11-26 21:04:42.216809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:54.258 [2024-11-26 21:04:42.216830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:17224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.258 [2024-11-26 21:04:42.216861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:54.258 [2024-11-26 21:04:42.216883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:17240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.258 [2024-11-26 21:04:42.216915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:54.258 [2024-11-26 21:04:42.216937] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:17256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.258 [2024-11-26 21:04:42.216952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:54.258 [2024-11-26 21:04:42.216972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:17272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.258 [2024-11-26 21:04:42.216987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:54.258 [2024-11-26 21:04:42.217007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:17288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.258 [2024-11-26 21:04:42.217022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:54.258 [2024-11-26 21:04:42.217043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:17304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.258 [2024-11-26 21:04:42.217058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:54.258 [2024-11-26 21:04:42.217077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:17320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.258 [2024-11-26 21:04:42.217092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.258 [2024-11-26 21:04:42.217112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.258 [2024-11-26 21:04:42.217127] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:54.258 [2024-11-26 21:04:42.217147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:17352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.258 [2024-11-26 21:04:42.217161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:54.258 [2024-11-26 21:04:42.217186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:17368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.258 [2024-11-26 21:04:42.217202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:54.258 [2024-11-26 21:04:42.217222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:17384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.258 [2024-11-26 21:04:42.217237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:54.258 [2024-11-26 21:04:42.217273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:17400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.258 [2024-11-26 21:04:42.217290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:54.258 [2024-11-26 21:04:42.217312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:17416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.258 [2024-11-26 21:04:42.217344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:54.258 7818.84 IOPS, 30.54 MiB/s [2024-11-26T20:04:45.196Z] 
7835.82 IOPS, 30.61 MiB/s [2024-11-26T20:04:45.196Z] 7850.38 IOPS, 30.67 MiB/s [2024-11-26T20:04:45.196Z] Received shutdown signal, test time was about 34.470998 seconds 00:23:54.258 00:23:54.258 Latency(us) 00:23:54.258 [2024-11-26T20:04:45.196Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:54.258 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:54.258 Verification LBA range: start 0x0 length 0x4000 00:23:54.258 Nvme0n1 : 34.47 7854.12 30.68 0.00 0.00 16271.32 183.56 4026531.84 00:23:54.258 [2024-11-26T20:04:45.196Z] =================================================================================================================== 00:23:54.258 [2024-11-26T20:04:45.196Z] Total : 7854.12 30.68 0.00 0.00 16271.32 183.56 4026531.84 00:23:54.258 21:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:54.825 21:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:23:54.825 21:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:54.825 21:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:23:54.825 21:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:54.825 21:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:23:54.825 21:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:54.825 21:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:23:54.825 21:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 
00:23:54.825 21:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:54.825 rmmod nvme_tcp 00:23:54.825 rmmod nvme_fabrics 00:23:54.825 rmmod nvme_keyring 00:23:54.825 21:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:54.825 21:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:23:54.825 21:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:23:54.825 21:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 4051722 ']' 00:23:54.825 21:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 4051722 00:23:54.825 21:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 4051722 ']' 00:23:54.825 21:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 4051722 00:23:54.825 21:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:23:54.825 21:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:54.825 21:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4051722 00:23:54.825 21:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:54.825 21:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:54.825 21:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4051722' 00:23:54.825 killing process with pid 4051722 00:23:54.825 21:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 4051722 00:23:54.825 21:04:45 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 4051722 00:23:55.085 21:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:55.085 21:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:55.085 21:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:55.085 21:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:23:55.085 21:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:23:55.085 21:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:55.085 21:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:23:55.085 21:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:55.085 21:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:55.085 21:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:55.085 21:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:55.085 21:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:56.988 21:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:56.988 00:23:56.988 real 0m43.740s 00:23:56.988 user 2m13.909s 00:23:56.988 sys 0m10.539s 00:23:56.988 21:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:56.988 21:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:56.988 
************************************ 00:23:56.988 END TEST nvmf_host_multipath_status 00:23:56.988 ************************************ 00:23:56.988 21:04:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:56.988 21:04:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:56.988 21:04:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:56.988 21:04:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.247 ************************************ 00:23:57.247 START TEST nvmf_discovery_remove_ifc 00:23:57.247 ************************************ 00:23:57.247 21:04:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:57.247 * Looking for test storage... 
00:23:57.247 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:57.247 21:04:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:57.247 21:04:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:23:57.247 21:04:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:57.247 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:57.247 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:57.247 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:57.247 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:57.247 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:23:57.247 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:23:57.247 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:23:57.247 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:23:57.247 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:23:57.247 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:23:57.247 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:23:57.247 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:57.247 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:23:57.247 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@345 -- # : 1 00:23:57.247 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:57.247 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:57.247 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:23:57.247 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:23:57.247 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:57.247 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:23:57.247 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:23:57.247 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:23:57.247 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:23:57.247 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:57.247 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:23:57.247 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:23:57.247 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:57.247 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:57.247 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:23:57.247 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:57.247 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:23:57.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:57.247 --rc genhtml_branch_coverage=1 00:23:57.247 --rc genhtml_function_coverage=1 00:23:57.247 --rc genhtml_legend=1 00:23:57.247 --rc geninfo_all_blocks=1 00:23:57.247 --rc geninfo_unexecuted_blocks=1 00:23:57.247 00:23:57.247 ' 00:23:57.247 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:57.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:57.247 --rc genhtml_branch_coverage=1 00:23:57.247 --rc genhtml_function_coverage=1 00:23:57.247 --rc genhtml_legend=1 00:23:57.247 --rc geninfo_all_blocks=1 00:23:57.247 --rc geninfo_unexecuted_blocks=1 00:23:57.247 00:23:57.247 ' 00:23:57.247 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:57.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:57.247 --rc genhtml_branch_coverage=1 00:23:57.247 --rc genhtml_function_coverage=1 00:23:57.247 --rc genhtml_legend=1 00:23:57.247 --rc geninfo_all_blocks=1 00:23:57.247 --rc geninfo_unexecuted_blocks=1 00:23:57.247 00:23:57.247 ' 00:23:57.247 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:57.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:57.247 --rc genhtml_branch_coverage=1 00:23:57.247 --rc genhtml_function_coverage=1 00:23:57.247 --rc genhtml_legend=1 00:23:57.247 --rc geninfo_all_blocks=1 00:23:57.247 --rc geninfo_unexecuted_blocks=1 00:23:57.247 00:23:57.247 ' 00:23:57.247 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:57.247 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:23:57.247 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:23:57.247 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:57.247 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:57.247 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:57.247 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:57.247 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:57.247 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:57.247 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:57.247 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:57.247 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:57.247 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:57.247 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:57.247 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:57.247 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:57.247 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:57.247 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:57.247 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:57.247 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:23:57.248 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:57.248 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:57.248 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:57.248 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.248 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.248 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.248 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:23:57.248 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.248 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:23:57.248 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:57.248 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:57.248 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:57.248 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:23:57.248 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:57.248 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:57.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:57.248 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:57.248 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:57.248 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:57.248 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:23:57.248 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:23:57.248 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:23:57.248 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:23:57.248 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:23:57.248 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:23:57.248 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:23:57.248 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:57.248 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:57.248 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:57.248 
21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:57.248 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:57.248 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:57.248 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:57.248 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:57.248 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:57.248 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:57.248 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:23:57.248 21:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:59.778 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:59.778 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:23:59.778 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:59.778 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:59.778 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:59.778 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:59.778 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:59.778 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:23:59.778 21:04:50 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:59.778 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:23:59.778 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:23:59.778 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:23:59.778 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:23:59.778 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:23:59.778 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:23:59.778 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:59.778 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:59.778 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:59.778 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:59.778 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:59.778 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:59.778 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:59.778 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:59.778 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:59.778 21:04:50 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:59.778 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:59.778 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:59.778 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:59.778 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:59.778 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:59.778 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:59.778 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:59.778 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:59.779 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:59.779 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:59.779 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:59.779 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:59.779 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:59.779 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:59.779 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:59.779 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:59.779 21:04:50 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:59.779 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:59.779 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:59.779 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:59.779 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:59.779 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:59.779 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:59.779 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:59.779 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:59.779 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:59.779 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:59.779 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:59.779 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:59.779 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:59.779 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:59.779 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:59.779 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:59.779 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:59.779 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:59.779 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:59.779 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:59.779 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:59.779 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:59.779 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:59.779 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:59.779 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:59.779 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:59.779 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:59.779 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:59.779 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:59.779 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:59.779 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:59.779 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:23:59.779 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:59.779 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 
-- # [[ tcp == tcp ]] 00:23:59.779 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:59.779 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:59.779 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:59.779 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:59.779 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:59.779 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:59.779 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:59.779 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:59.779 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:59.779 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:59.779 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:59.779 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:59.779 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:59.779 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:59.779 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:59.779 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:23:59.779 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:59.779 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:59.779 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:59.779 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:59.779 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:59.779 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:59.779 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:59.779 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:59.779 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:59.779 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.365 ms 00:23:59.779 00:23:59.779 --- 10.0.0.2 ping statistics --- 00:23:59.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:59.779 rtt min/avg/max/mdev = 0.365/0.365/0.365/0.000 ms 00:23:59.779 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:59.779 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:59.779 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:23:59.779 00:23:59.779 --- 10.0.0.1 ping statistics --- 00:23:59.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:59.779 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:23:59.779 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:59.779 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:23:59.779 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:59.779 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:59.779 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:59.779 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:59.779 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:59.779 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:59.779 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:59.779 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:23:59.779 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:59.779 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:59.780 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:59.780 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=4058480 00:23:59.780 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:59.780 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 4058480 00:23:59.780 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 4058480 ']' 00:23:59.780 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:59.780 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:59.780 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:59.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:59.780 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:59.780 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:59.780 [2024-11-26 21:04:50.322212] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:23:59.780 [2024-11-26 21:04:50.322301] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:59.780 [2024-11-26 21:04:50.397819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:59.780 [2024-11-26 21:04:50.456581] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:59.780 [2024-11-26 21:04:50.456641] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:59.780 [2024-11-26 21:04:50.456679] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:59.780 [2024-11-26 21:04:50.456697] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:59.780 [2024-11-26 21:04:50.456707] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:59.780 [2024-11-26 21:04:50.457346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:59.780 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:59.780 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:23:59.780 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:59.780 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:59.780 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:59.780 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:59.780 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:23:59.780 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.780 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:59.780 [2024-11-26 21:04:50.617851] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:59.780 [2024-11-26 21:04:50.626075] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:59.780 null0 00:23:59.780 [2024-11-26 21:04:50.658005] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:23:59.780 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.780 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=4058504 00:23:59.780 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 4058504 /tmp/host.sock 00:23:59.780 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 4058504 ']' 00:23:59.780 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:23:59.780 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:59.780 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:59.780 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:59.780 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:23:59.780 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:59.780 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:00.039 [2024-11-26 21:04:50.728050] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:24:00.039 [2024-11-26 21:04:50.728134] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4058504 ] 00:24:00.039 [2024-11-26 21:04:50.798406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:00.039 [2024-11-26 21:04:50.860219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:00.039 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:00.039 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:24:00.039 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:00.039 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:24:00.039 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.039 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:00.039 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.039 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:24:00.039 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.039 21:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:00.297 21:04:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.297 21:04:51 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:24:00.297 21:04:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.297 21:04:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:01.231 [2024-11-26 21:04:52.126463] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:01.231 [2024-11-26 21:04:52.126502] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:01.231 [2024-11-26 21:04:52.126529] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:01.490 [2024-11-26 21:04:52.255979] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:01.749 [2024-11-26 21:04:52.435305] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:24:01.749 [2024-11-26 21:04:52.436479] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xda5fd0:1 started. 
00:24:01.749 [2024-11-26 21:04:52.438232] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:01.749 [2024-11-26 21:04:52.438290] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:01.749 [2024-11-26 21:04:52.438332] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:01.749 [2024-11-26 21:04:52.438355] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:01.749 [2024-11-26 21:04:52.438387] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:01.749 21:04:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.749 21:04:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:24:01.749 21:04:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:01.749 21:04:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:01.749 21:04:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:01.749 21:04:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.749 21:04:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:01.749 21:04:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:01.749 21:04:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:01.749 [2024-11-26 21:04:52.444554] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xda5fd0 was disconnected and freed. delete nvme_qpair. 
00:24:01.749 21:04:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.749 21:04:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:24:01.749 21:04:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:24:01.749 21:04:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:24:01.749 21:04:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:24:01.749 21:04:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:01.749 21:04:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:01.749 21:04:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:01.749 21:04:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.749 21:04:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:01.749 21:04:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:01.749 21:04:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:01.749 21:04:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.749 21:04:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:01.749 21:04:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:02.682 21:04:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:02.682 21:04:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:02.682 21:04:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:02.682 21:04:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.682 21:04:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:02.682 21:04:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:02.682 21:04:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:02.682 21:04:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.940 21:04:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:02.940 21:04:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:03.874 21:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:03.874 21:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:03.874 21:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:03.874 21:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.874 21:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:03.874 21:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:03.874 21:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 
00:24:03.874 21:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.874 21:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:03.874 21:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:04.860 21:04:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:04.860 21:04:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:04.860 21:04:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:04.860 21:04:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.860 21:04:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:04.860 21:04:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:04.860 21:04:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:04.860 21:04:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.860 21:04:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:04.860 21:04:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:05.825 21:04:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:05.825 21:04:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:05.825 21:04:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:05.825 21:04:56 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.825 21:04:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:05.825 21:04:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:05.825 21:04:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:05.825 21:04:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.083 21:04:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:06.083 21:04:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:07.015 21:04:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:07.015 21:04:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:07.015 21:04:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:07.015 21:04:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.015 21:04:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:07.015 21:04:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:07.015 21:04:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:07.015 21:04:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.015 21:04:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:07.015 21:04:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # 
sleep 1 00:24:07.015 [2024-11-26 21:04:57.879607] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:24:07.015 [2024-11-26 21:04:57.879725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:07.015 [2024-11-26 21:04:57.879767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.015 [2024-11-26 21:04:57.879789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:07.015 [2024-11-26 21:04:57.879802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.015 [2024-11-26 21:04:57.879816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:07.015 [2024-11-26 21:04:57.879829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.015 [2024-11-26 21:04:57.879843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:07.015 [2024-11-26 21:04:57.879856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.015 [2024-11-26 21:04:57.879885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:07.015 [2024-11-26 21:04:57.879898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.015 [2024-11-26 21:04:57.879911] 
nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd82860 is same with the state(6) to be set 00:24:07.015 [2024-11-26 21:04:57.889625] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd82860 (9): Bad file descriptor 00:24:07.015 [2024-11-26 21:04:57.899672] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:07.015 [2024-11-26 21:04:57.899717] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:07.015 [2024-11-26 21:04:57.899729] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:07.015 [2024-11-26 21:04:57.899749] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:07.015 [2024-11-26 21:04:57.899792] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:24:07.947 21:04:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:07.947 21:04:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:07.947 21:04:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:07.948 21:04:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.948 21:04:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:07.948 21:04:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:07.948 21:04:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:08.205 [2024-11-26 21:04:58.945733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:24:08.205 [2024-11-26 21:04:58.945808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd82860 with addr=10.0.0.2, port=4420 00:24:08.205 [2024-11-26 21:04:58.945839] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd82860 is same with the state(6) to be set 00:24:08.205 [2024-11-26 21:04:58.945899] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd82860 (9): Bad file descriptor 00:24:08.205 [2024-11-26 21:04:58.946408] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:24:08.205 [2024-11-26 21:04:58.946472] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:08.205 [2024-11-26 21:04:58.946492] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:08.205 [2024-11-26 21:04:58.946512] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:08.205 [2024-11-26 21:04:58.946528] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:08.205 [2024-11-26 21:04:58.946539] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:08.205 [2024-11-26 21:04:58.946548] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:08.205 [2024-11-26 21:04:58.946564] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:08.205 [2024-11-26 21:04:58.946574] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:08.205 21:04:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.205 21:04:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:08.205 21:04:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:09.139 [2024-11-26 21:04:59.949081] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:09.139 [2024-11-26 21:04:59.949143] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:24:09.139 [2024-11-26 21:04:59.949195] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:09.139 [2024-11-26 21:04:59.949210] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:09.139 [2024-11-26 21:04:59.949226] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:24:09.139 [2024-11-26 21:04:59.949239] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:09.139 [2024-11-26 21:04:59.949250] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:09.139 [2024-11-26 21:04:59.949258] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:09.139 [2024-11-26 21:04:59.949304] bdev_nvme.c:7235:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:24:09.139 [2024-11-26 21:04:59.949375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.139 [2024-11-26 21:04:59.949399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.139 [2024-11-26 21:04:59.949421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.139 [2024-11-26 21:04:59.949434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.139 [2024-11-26 21:04:59.949448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:24:09.139 [2024-11-26 21:04:59.949461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.139 [2024-11-26 21:04:59.949475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.139 [2024-11-26 21:04:59.949488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.139 [2024-11-26 21:04:59.949512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.139 [2024-11-26 21:04:59.949525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.139 [2024-11-26 21:04:59.949541] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:24:09.139 [2024-11-26 21:04:59.949602] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd71b50 (9): Bad file descriptor 00:24:09.139 [2024-11-26 21:04:59.950601] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:24:09.139 [2024-11-26 21:04:59.950624] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:24:09.139 21:04:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:09.139 21:04:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:09.139 21:04:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:09.139 21:04:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:24:09.139 21:04:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:09.139 21:04:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:09.139 21:04:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:09.139 21:04:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.139 21:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:24:09.139 21:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:09.139 21:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:09.139 21:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:24:09.139 21:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:09.139 21:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:09.139 21:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:09.139 21:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.139 21:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:09.139 21:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:09.139 21:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:09.139 21:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:24:09.396 21:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:09.396 21:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:10.326 21:05:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:10.326 21:05:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:10.326 21:05:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.326 21:05:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:10.326 21:05:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:10.326 21:05:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:10.326 21:05:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:10.326 21:05:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.326 21:05:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:10.326 21:05:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:11.256 [2024-11-26 21:05:02.002443] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:11.256 [2024-11-26 21:05:02.002474] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:11.256 [2024-11-26 21:05:02.002501] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:11.256 [2024-11-26 21:05:02.129949] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:24:11.256 21:05:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:11.256 21:05:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:11.256 21:05:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:11.256 21:05:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.256 21:05:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:11.256 21:05:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:11.256 21:05:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:11.256 21:05:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.256 21:05:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:11.256 21:05:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:11.513 [2024-11-26 21:05:02.313090] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:24:11.513 [2024-11-26 21:05:02.313955] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0xd5bb40:1 started. 
00:24:11.513 [2024-11-26 21:05:02.315525] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:11.513 [2024-11-26 21:05:02.315579] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:11.513 [2024-11-26 21:05:02.315620] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:11.513 [2024-11-26 21:05:02.315645] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:24:11.513 [2024-11-26 21:05:02.315659] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:11.513 [2024-11-26 21:05:02.320711] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0xd5bb40 was disconnected and freed. delete nvme_qpair. 00:24:12.443 21:05:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:12.443 21:05:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:12.443 21:05:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:12.443 21:05:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.443 21:05:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:12.443 21:05:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:12.443 21:05:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:12.443 21:05:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.443 21:05:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:24:12.443 21:05:03 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:24:12.443 21:05:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 4058504 00:24:12.443 21:05:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 4058504 ']' 00:24:12.443 21:05:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 4058504 00:24:12.443 21:05:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:24:12.443 21:05:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:12.443 21:05:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4058504 00:24:12.443 21:05:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:12.443 21:05:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:12.443 21:05:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4058504' 00:24:12.443 killing process with pid 4058504 00:24:12.443 21:05:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 4058504 00:24:12.443 21:05:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 4058504 00:24:12.701 21:05:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:24:12.701 21:05:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:12.701 21:05:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:24:12.701 21:05:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:12.701 
21:05:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:24:12.701 21:05:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:12.701 21:05:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:12.701 rmmod nvme_tcp 00:24:12.701 rmmod nvme_fabrics 00:24:12.701 rmmod nvme_keyring 00:24:12.701 21:05:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:12.701 21:05:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:24:12.701 21:05:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:24:12.701 21:05:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 4058480 ']' 00:24:12.701 21:05:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 4058480 00:24:12.701 21:05:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 4058480 ']' 00:24:12.701 21:05:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 4058480 00:24:12.701 21:05:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:24:12.701 21:05:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:12.701 21:05:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4058480 00:24:12.701 21:05:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:12.701 21:05:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:12.701 21:05:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4058480' 00:24:12.701 
killing process with pid 4058480 00:24:12.701 21:05:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 4058480 00:24:12.701 21:05:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 4058480 00:24:12.959 21:05:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:12.959 21:05:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:12.959 21:05:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:12.959 21:05:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:24:12.959 21:05:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:24:12.959 21:05:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:12.959 21:05:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:24:12.959 21:05:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:12.959 21:05:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:12.959 21:05:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:12.959 21:05:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:12.959 21:05:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:15.497 21:05:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:15.497 00:24:15.497 real 0m17.916s 00:24:15.497 user 0m26.040s 00:24:15.497 sys 0m3.079s 00:24:15.497 21:05:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:24:15.497 21:05:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:15.497 ************************************ 00:24:15.497 END TEST nvmf_discovery_remove_ifc 00:24:15.497 ************************************ 00:24:15.497 21:05:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:15.497 21:05:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:15.497 21:05:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:15.497 21:05:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.497 ************************************ 00:24:15.497 START TEST nvmf_identify_kernel_target 00:24:15.497 ************************************ 00:24:15.497 21:05:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:15.497 * Looking for test storage... 
00:24:15.497 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:15.497 21:05:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:15.497 21:05:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:24:15.497 21:05:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:15.497 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:15.497 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:15.497 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:15.497 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:15.497 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:24:15.497 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:24:15.497 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:24:15.497 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:24:15.497 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:24:15.497 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:24:15.497 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:24:15.497 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:15.497 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:24:15.497 21:05:06 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:24:15.497 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:15.497 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:15.497 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:24:15.497 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:24:15.497 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:15.497 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:24:15.497 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:24:15.497 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:24:15.497 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:24:15.497 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:15.497 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:24:15.497 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:24:15.497 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:15.497 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:15.497 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:24:15.497 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:15.497 21:05:06 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:15.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:15.497 --rc genhtml_branch_coverage=1 00:24:15.497 --rc genhtml_function_coverage=1 00:24:15.497 --rc genhtml_legend=1 00:24:15.497 --rc geninfo_all_blocks=1 00:24:15.497 --rc geninfo_unexecuted_blocks=1 00:24:15.497 00:24:15.497 ' 00:24:15.497 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:15.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:15.497 --rc genhtml_branch_coverage=1 00:24:15.497 --rc genhtml_function_coverage=1 00:24:15.497 --rc genhtml_legend=1 00:24:15.497 --rc geninfo_all_blocks=1 00:24:15.497 --rc geninfo_unexecuted_blocks=1 00:24:15.497 00:24:15.497 ' 00:24:15.497 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:15.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:15.497 --rc genhtml_branch_coverage=1 00:24:15.498 --rc genhtml_function_coverage=1 00:24:15.498 --rc genhtml_legend=1 00:24:15.498 --rc geninfo_all_blocks=1 00:24:15.498 --rc geninfo_unexecuted_blocks=1 00:24:15.498 00:24:15.498 ' 00:24:15.498 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:15.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:15.498 --rc genhtml_branch_coverage=1 00:24:15.498 --rc genhtml_function_coverage=1 00:24:15.498 --rc genhtml_legend=1 00:24:15.498 --rc geninfo_all_blocks=1 00:24:15.498 --rc geninfo_unexecuted_blocks=1 00:24:15.498 00:24:15.498 ' 00:24:15.498 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:15.498 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 
00:24:15.498 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:15.498 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:15.498 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:15.498 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:15.498 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:15.498 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:15.498 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:15.498 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:15.498 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:15.498 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:15.498 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:15.498 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:15.498 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:15.498 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:15.498 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:15.498 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:15.498 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:15.498 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:24:15.498 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:15.498 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:15.498 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:15.498 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.498 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.498 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.498 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:24:15.498 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.498 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:24:15.498 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:15.498 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:15.498 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:15.498 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:15.498 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:15.498 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:15.498 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:15.498 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:15.498 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:15.498 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:15.498 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:24:15.498 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:15.498 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:15.498 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:15.498 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:15.498 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:15.498 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:15.498 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:15.498 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:15.498 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:15.498 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:15.498 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:24:15.498 21:05:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:17.402 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:17.402 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:24:17.402 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:17.402 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:17.402 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:17.402 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:17.402 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:17.402 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:24:17.402 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:17.402 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:24:17.402 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:24:17.402 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:24:17.402 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:24:17.402 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:17.403 21:05:08 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:17.403 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:17.403 21:05:08 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:17.403 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:17.403 21:05:08 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:17.403 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:17.403 Found net devices under 0000:0a:00.1: cvl_0_1 
00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:17.403 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:17.403 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms 00:24:17.403 00:24:17.403 --- 10.0.0.2 ping statistics --- 00:24:17.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:17.403 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:24:17.403 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:17.403 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:17.403 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:24:17.403 00:24:17.404 --- 10.0.0.1 ping statistics --- 00:24:17.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:17.404 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:24:17.404 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:17.404 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:24:17.404 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:17.404 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:17.404 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:17.404 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:17.404 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:17.404 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:17.404 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:17.404 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:24:17.404 
21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:24:17.404 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:24:17.404 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:17.404 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:17.404 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:17.404 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:17.404 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:17.404 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:17.404 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:17.404 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:17.404 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:17.404 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:24:17.404 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:17.404 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:17.404 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:24:17.404 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:17.404 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:17.404 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:17.404 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:24:17.404 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:24:17.404 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:24:17.404 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:17.404 21:05:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:18.780 Waiting for block devices as requested 00:24:18.780 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:24:18.780 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:18.780 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:18.780 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:19.038 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:19.038 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:19.038 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:19.038 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:19.298 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:19.298 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:19.298 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:19.298 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:19.557 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:19.557 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:19.557 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 
00:24:19.815 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:19.815 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:19.815 21:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:24:19.815 21:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:19.815 21:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:24:19.815 21:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:24:19.815 21:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:19.815 21:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:24:19.815 21:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:24:19.815 21:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:24:19.815 21:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:19.815 No valid GPT data, bailing 00:24:19.815 21:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:20.075 21:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:24:20.075 21:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:24:20.075 21:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:24:20.075 21:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:24:20.075 21:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:20.075 21:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:20.075 21:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:20.075 21:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:24:20.075 21:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:24:20.075 21:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:24:20.075 21:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:24:20.075 21:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:24:20.075 21:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:24:20.075 21:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:24:20.075 21:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:24:20.075 21:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:20.075 21:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:24:20.075 00:24:20.075 Discovery Log Number of Records 2, Generation counter 2 00:24:20.075 =====Discovery Log Entry 0====== 00:24:20.075 trtype: tcp 00:24:20.075 adrfam: ipv4 00:24:20.075 subtype: current discovery subsystem 
00:24:20.075 treq: not specified, sq flow control disable supported 00:24:20.075 portid: 1 00:24:20.075 trsvcid: 4420 00:24:20.075 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:20.075 traddr: 10.0.0.1 00:24:20.075 eflags: none 00:24:20.075 sectype: none 00:24:20.075 =====Discovery Log Entry 1====== 00:24:20.075 trtype: tcp 00:24:20.075 adrfam: ipv4 00:24:20.075 subtype: nvme subsystem 00:24:20.075 treq: not specified, sq flow control disable supported 00:24:20.075 portid: 1 00:24:20.075 trsvcid: 4420 00:24:20.075 subnqn: nqn.2016-06.io.spdk:testnqn 00:24:20.075 traddr: 10.0.0.1 00:24:20.075 eflags: none 00:24:20.075 sectype: none 00:24:20.075 21:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:24:20.075 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:24:20.075 ===================================================== 00:24:20.075 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:20.075 ===================================================== 00:24:20.075 Controller Capabilities/Features 00:24:20.075 ================================ 00:24:20.075 Vendor ID: 0000 00:24:20.075 Subsystem Vendor ID: 0000 00:24:20.075 Serial Number: f477e86384416b5b9d3a 00:24:20.075 Model Number: Linux 00:24:20.075 Firmware Version: 6.8.9-20 00:24:20.075 Recommended Arb Burst: 0 00:24:20.075 IEEE OUI Identifier: 00 00 00 00:24:20.075 Multi-path I/O 00:24:20.075 May have multiple subsystem ports: No 00:24:20.075 May have multiple controllers: No 00:24:20.075 Associated with SR-IOV VF: No 00:24:20.075 Max Data Transfer Size: Unlimited 00:24:20.075 Max Number of Namespaces: 0 00:24:20.075 Max Number of I/O Queues: 1024 00:24:20.075 NVMe Specification Version (VS): 1.3 00:24:20.075 NVMe Specification Version (Identify): 1.3 00:24:20.075 Maximum Queue Entries: 1024 
00:24:20.075 Contiguous Queues Required: No 00:24:20.075 Arbitration Mechanisms Supported 00:24:20.075 Weighted Round Robin: Not Supported 00:24:20.075 Vendor Specific: Not Supported 00:24:20.075 Reset Timeout: 7500 ms 00:24:20.075 Doorbell Stride: 4 bytes 00:24:20.075 NVM Subsystem Reset: Not Supported 00:24:20.075 Command Sets Supported 00:24:20.075 NVM Command Set: Supported 00:24:20.075 Boot Partition: Not Supported 00:24:20.075 Memory Page Size Minimum: 4096 bytes 00:24:20.075 Memory Page Size Maximum: 4096 bytes 00:24:20.075 Persistent Memory Region: Not Supported 00:24:20.075 Optional Asynchronous Events Supported 00:24:20.075 Namespace Attribute Notices: Not Supported 00:24:20.075 Firmware Activation Notices: Not Supported 00:24:20.075 ANA Change Notices: Not Supported 00:24:20.075 PLE Aggregate Log Change Notices: Not Supported 00:24:20.075 LBA Status Info Alert Notices: Not Supported 00:24:20.075 EGE Aggregate Log Change Notices: Not Supported 00:24:20.075 Normal NVM Subsystem Shutdown event: Not Supported 00:24:20.075 Zone Descriptor Change Notices: Not Supported 00:24:20.075 Discovery Log Change Notices: Supported 00:24:20.075 Controller Attributes 00:24:20.075 128-bit Host Identifier: Not Supported 00:24:20.075 Non-Operational Permissive Mode: Not Supported 00:24:20.075 NVM Sets: Not Supported 00:24:20.075 Read Recovery Levels: Not Supported 00:24:20.075 Endurance Groups: Not Supported 00:24:20.075 Predictable Latency Mode: Not Supported 00:24:20.075 Traffic Based Keep ALive: Not Supported 00:24:20.075 Namespace Granularity: Not Supported 00:24:20.075 SQ Associations: Not Supported 00:24:20.075 UUID List: Not Supported 00:24:20.075 Multi-Domain Subsystem: Not Supported 00:24:20.075 Fixed Capacity Management: Not Supported 00:24:20.075 Variable Capacity Management: Not Supported 00:24:20.075 Delete Endurance Group: Not Supported 00:24:20.075 Delete NVM Set: Not Supported 00:24:20.075 Extended LBA Formats Supported: Not Supported 00:24:20.075 Flexible 
Data Placement Supported: Not Supported 00:24:20.075 00:24:20.075 Controller Memory Buffer Support 00:24:20.075 ================================ 00:24:20.075 Supported: No 00:24:20.075 00:24:20.075 Persistent Memory Region Support 00:24:20.075 ================================ 00:24:20.075 Supported: No 00:24:20.075 00:24:20.075 Admin Command Set Attributes 00:24:20.075 ============================ 00:24:20.075 Security Send/Receive: Not Supported 00:24:20.075 Format NVM: Not Supported 00:24:20.076 Firmware Activate/Download: Not Supported 00:24:20.076 Namespace Management: Not Supported 00:24:20.076 Device Self-Test: Not Supported 00:24:20.076 Directives: Not Supported 00:24:20.076 NVMe-MI: Not Supported 00:24:20.076 Virtualization Management: Not Supported 00:24:20.076 Doorbell Buffer Config: Not Supported 00:24:20.076 Get LBA Status Capability: Not Supported 00:24:20.076 Command & Feature Lockdown Capability: Not Supported 00:24:20.076 Abort Command Limit: 1 00:24:20.076 Async Event Request Limit: 1 00:24:20.076 Number of Firmware Slots: N/A 00:24:20.076 Firmware Slot 1 Read-Only: N/A 00:24:20.076 Firmware Activation Without Reset: N/A 00:24:20.076 Multiple Update Detection Support: N/A 00:24:20.076 Firmware Update Granularity: No Information Provided 00:24:20.076 Per-Namespace SMART Log: No 00:24:20.076 Asymmetric Namespace Access Log Page: Not Supported 00:24:20.076 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:20.076 Command Effects Log Page: Not Supported 00:24:20.076 Get Log Page Extended Data: Supported 00:24:20.076 Telemetry Log Pages: Not Supported 00:24:20.076 Persistent Event Log Pages: Not Supported 00:24:20.076 Supported Log Pages Log Page: May Support 00:24:20.076 Commands Supported & Effects Log Page: Not Supported 00:24:20.076 Feature Identifiers & Effects Log Page:May Support 00:24:20.076 NVMe-MI Commands & Effects Log Page: May Support 00:24:20.076 Data Area 4 for Telemetry Log: Not Supported 00:24:20.076 Error Log Page Entries 
Supported: 1 00:24:20.076 Keep Alive: Not Supported 00:24:20.076 00:24:20.076 NVM Command Set Attributes 00:24:20.076 ========================== 00:24:20.076 Submission Queue Entry Size 00:24:20.076 Max: 1 00:24:20.076 Min: 1 00:24:20.076 Completion Queue Entry Size 00:24:20.076 Max: 1 00:24:20.076 Min: 1 00:24:20.076 Number of Namespaces: 0 00:24:20.076 Compare Command: Not Supported 00:24:20.076 Write Uncorrectable Command: Not Supported 00:24:20.076 Dataset Management Command: Not Supported 00:24:20.076 Write Zeroes Command: Not Supported 00:24:20.076 Set Features Save Field: Not Supported 00:24:20.076 Reservations: Not Supported 00:24:20.076 Timestamp: Not Supported 00:24:20.076 Copy: Not Supported 00:24:20.076 Volatile Write Cache: Not Present 00:24:20.076 Atomic Write Unit (Normal): 1 00:24:20.076 Atomic Write Unit (PFail): 1 00:24:20.076 Atomic Compare & Write Unit: 1 00:24:20.076 Fused Compare & Write: Not Supported 00:24:20.076 Scatter-Gather List 00:24:20.076 SGL Command Set: Supported 00:24:20.076 SGL Keyed: Not Supported 00:24:20.076 SGL Bit Bucket Descriptor: Not Supported 00:24:20.076 SGL Metadata Pointer: Not Supported 00:24:20.076 Oversized SGL: Not Supported 00:24:20.076 SGL Metadata Address: Not Supported 00:24:20.076 SGL Offset: Supported 00:24:20.076 Transport SGL Data Block: Not Supported 00:24:20.076 Replay Protected Memory Block: Not Supported 00:24:20.076 00:24:20.076 Firmware Slot Information 00:24:20.076 ========================= 00:24:20.076 Active slot: 0 00:24:20.076 00:24:20.076 00:24:20.076 Error Log 00:24:20.076 ========= 00:24:20.076 00:24:20.076 Active Namespaces 00:24:20.076 ================= 00:24:20.076 Discovery Log Page 00:24:20.076 ================== 00:24:20.076 Generation Counter: 2 00:24:20.076 Number of Records: 2 00:24:20.076 Record Format: 0 00:24:20.076 00:24:20.076 Discovery Log Entry 0 00:24:20.076 ---------------------- 00:24:20.076 Transport Type: 3 (TCP) 00:24:20.076 Address Family: 1 (IPv4) 00:24:20.076 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:24:20.076 Entry Flags: 00:24:20.076 Duplicate Returned Information: 0 00:24:20.076 Explicit Persistent Connection Support for Discovery: 0 00:24:20.076 Transport Requirements: 00:24:20.076 Secure Channel: Not Specified 00:24:20.076 Port ID: 1 (0x0001) 00:24:20.076 Controller ID: 65535 (0xffff) 00:24:20.076 Admin Max SQ Size: 32 00:24:20.076 Transport Service Identifier: 4420 00:24:20.076 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:20.076 Transport Address: 10.0.0.1 00:24:20.076 Discovery Log Entry 1 00:24:20.076 ---------------------- 00:24:20.076 Transport Type: 3 (TCP) 00:24:20.076 Address Family: 1 (IPv4) 00:24:20.076 Subsystem Type: 2 (NVM Subsystem) 00:24:20.076 Entry Flags: 00:24:20.076 Duplicate Returned Information: 0 00:24:20.076 Explicit Persistent Connection Support for Discovery: 0 00:24:20.076 Transport Requirements: 00:24:20.076 Secure Channel: Not Specified 00:24:20.076 Port ID: 1 (0x0001) 00:24:20.076 Controller ID: 65535 (0xffff) 00:24:20.076 Admin Max SQ Size: 32 00:24:20.076 Transport Service Identifier: 4420 00:24:20.076 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:24:20.076 Transport Address: 10.0.0.1 00:24:20.076 21:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:20.336 get_feature(0x01) failed 00:24:20.336 get_feature(0x02) failed 00:24:20.336 get_feature(0x04) failed 00:24:20.336 ===================================================== 00:24:20.336 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:20.336 ===================================================== 00:24:20.336 Controller Capabilities/Features 00:24:20.336 ================================ 00:24:20.336 Vendor ID: 0000 00:24:20.336 Subsystem Vendor ID: 
0000 00:24:20.336 Serial Number: 5ab85bfbe174185b649a 00:24:20.336 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:24:20.336 Firmware Version: 6.8.9-20 00:24:20.336 Recommended Arb Burst: 6 00:24:20.336 IEEE OUI Identifier: 00 00 00 00:24:20.336 Multi-path I/O 00:24:20.336 May have multiple subsystem ports: Yes 00:24:20.336 May have multiple controllers: Yes 00:24:20.336 Associated with SR-IOV VF: No 00:24:20.336 Max Data Transfer Size: Unlimited 00:24:20.336 Max Number of Namespaces: 1024 00:24:20.336 Max Number of I/O Queues: 128 00:24:20.336 NVMe Specification Version (VS): 1.3 00:24:20.336 NVMe Specification Version (Identify): 1.3 00:24:20.336 Maximum Queue Entries: 1024 00:24:20.336 Contiguous Queues Required: No 00:24:20.336 Arbitration Mechanisms Supported 00:24:20.336 Weighted Round Robin: Not Supported 00:24:20.336 Vendor Specific: Not Supported 00:24:20.336 Reset Timeout: 7500 ms 00:24:20.336 Doorbell Stride: 4 bytes 00:24:20.336 NVM Subsystem Reset: Not Supported 00:24:20.336 Command Sets Supported 00:24:20.336 NVM Command Set: Supported 00:24:20.336 Boot Partition: Not Supported 00:24:20.336 Memory Page Size Minimum: 4096 bytes 00:24:20.336 Memory Page Size Maximum: 4096 bytes 00:24:20.336 Persistent Memory Region: Not Supported 00:24:20.336 Optional Asynchronous Events Supported 00:24:20.336 Namespace Attribute Notices: Supported 00:24:20.336 Firmware Activation Notices: Not Supported 00:24:20.336 ANA Change Notices: Supported 00:24:20.336 PLE Aggregate Log Change Notices: Not Supported 00:24:20.336 LBA Status Info Alert Notices: Not Supported 00:24:20.336 EGE Aggregate Log Change Notices: Not Supported 00:24:20.336 Normal NVM Subsystem Shutdown event: Not Supported 00:24:20.336 Zone Descriptor Change Notices: Not Supported 00:24:20.336 Discovery Log Change Notices: Not Supported 00:24:20.336 Controller Attributes 00:24:20.336 128-bit Host Identifier: Supported 00:24:20.336 Non-Operational Permissive Mode: Not Supported 00:24:20.336 NVM Sets: Not 
Supported 00:24:20.336 Read Recovery Levels: Not Supported 00:24:20.336 Endurance Groups: Not Supported 00:24:20.336 Predictable Latency Mode: Not Supported 00:24:20.336 Traffic Based Keep ALive: Supported 00:24:20.336 Namespace Granularity: Not Supported 00:24:20.336 SQ Associations: Not Supported 00:24:20.336 UUID List: Not Supported 00:24:20.336 Multi-Domain Subsystem: Not Supported 00:24:20.336 Fixed Capacity Management: Not Supported 00:24:20.336 Variable Capacity Management: Not Supported 00:24:20.336 Delete Endurance Group: Not Supported 00:24:20.336 Delete NVM Set: Not Supported 00:24:20.336 Extended LBA Formats Supported: Not Supported 00:24:20.336 Flexible Data Placement Supported: Not Supported 00:24:20.336 00:24:20.336 Controller Memory Buffer Support 00:24:20.336 ================================ 00:24:20.336 Supported: No 00:24:20.336 00:24:20.336 Persistent Memory Region Support 00:24:20.336 ================================ 00:24:20.336 Supported: No 00:24:20.336 00:24:20.336 Admin Command Set Attributes 00:24:20.336 ============================ 00:24:20.336 Security Send/Receive: Not Supported 00:24:20.336 Format NVM: Not Supported 00:24:20.336 Firmware Activate/Download: Not Supported 00:24:20.336 Namespace Management: Not Supported 00:24:20.336 Device Self-Test: Not Supported 00:24:20.336 Directives: Not Supported 00:24:20.336 NVMe-MI: Not Supported 00:24:20.336 Virtualization Management: Not Supported 00:24:20.336 Doorbell Buffer Config: Not Supported 00:24:20.336 Get LBA Status Capability: Not Supported 00:24:20.336 Command & Feature Lockdown Capability: Not Supported 00:24:20.336 Abort Command Limit: 4 00:24:20.336 Async Event Request Limit: 4 00:24:20.336 Number of Firmware Slots: N/A 00:24:20.336 Firmware Slot 1 Read-Only: N/A 00:24:20.336 Firmware Activation Without Reset: N/A 00:24:20.336 Multiple Update Detection Support: N/A 00:24:20.336 Firmware Update Granularity: No Information Provided 00:24:20.336 Per-Namespace SMART Log: Yes 
00:24:20.336 Asymmetric Namespace Access Log Page: Supported 00:24:20.336 ANA Transition Time : 10 sec 00:24:20.336 00:24:20.336 Asymmetric Namespace Access Capabilities 00:24:20.336 ANA Optimized State : Supported 00:24:20.336 ANA Non-Optimized State : Supported 00:24:20.336 ANA Inaccessible State : Supported 00:24:20.336 ANA Persistent Loss State : Supported 00:24:20.336 ANA Change State : Supported 00:24:20.336 ANAGRPID is not changed : No 00:24:20.336 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:24:20.336 00:24:20.336 ANA Group Identifier Maximum : 128 00:24:20.336 Number of ANA Group Identifiers : 128 00:24:20.336 Max Number of Allowed Namespaces : 1024 00:24:20.336 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:24:20.336 Command Effects Log Page: Supported 00:24:20.336 Get Log Page Extended Data: Supported 00:24:20.336 Telemetry Log Pages: Not Supported 00:24:20.336 Persistent Event Log Pages: Not Supported 00:24:20.336 Supported Log Pages Log Page: May Support 00:24:20.336 Commands Supported & Effects Log Page: Not Supported 00:24:20.336 Feature Identifiers & Effects Log Page:May Support 00:24:20.336 NVMe-MI Commands & Effects Log Page: May Support 00:24:20.336 Data Area 4 for Telemetry Log: Not Supported 00:24:20.336 Error Log Page Entries Supported: 128 00:24:20.336 Keep Alive: Supported 00:24:20.336 Keep Alive Granularity: 1000 ms 00:24:20.336 00:24:20.336 NVM Command Set Attributes 00:24:20.336 ========================== 00:24:20.336 Submission Queue Entry Size 00:24:20.336 Max: 64 00:24:20.336 Min: 64 00:24:20.336 Completion Queue Entry Size 00:24:20.336 Max: 16 00:24:20.336 Min: 16 00:24:20.336 Number of Namespaces: 1024 00:24:20.336 Compare Command: Not Supported 00:24:20.336 Write Uncorrectable Command: Not Supported 00:24:20.336 Dataset Management Command: Supported 00:24:20.336 Write Zeroes Command: Supported 00:24:20.336 Set Features Save Field: Not Supported 00:24:20.336 Reservations: Not Supported 00:24:20.336 Timestamp: Not Supported 
00:24:20.336 Copy: Not Supported 00:24:20.336 Volatile Write Cache: Present 00:24:20.336 Atomic Write Unit (Normal): 1 00:24:20.336 Atomic Write Unit (PFail): 1 00:24:20.336 Atomic Compare & Write Unit: 1 00:24:20.336 Fused Compare & Write: Not Supported 00:24:20.336 Scatter-Gather List 00:24:20.336 SGL Command Set: Supported 00:24:20.336 SGL Keyed: Not Supported 00:24:20.336 SGL Bit Bucket Descriptor: Not Supported 00:24:20.336 SGL Metadata Pointer: Not Supported 00:24:20.336 Oversized SGL: Not Supported 00:24:20.336 SGL Metadata Address: Not Supported 00:24:20.336 SGL Offset: Supported 00:24:20.336 Transport SGL Data Block: Not Supported 00:24:20.336 Replay Protected Memory Block: Not Supported 00:24:20.336 00:24:20.336 Firmware Slot Information 00:24:20.336 ========================= 00:24:20.336 Active slot: 0 00:24:20.336 00:24:20.336 Asymmetric Namespace Access 00:24:20.336 =========================== 00:24:20.336 Change Count : 0 00:24:20.336 Number of ANA Group Descriptors : 1 00:24:20.336 ANA Group Descriptor : 0 00:24:20.336 ANA Group ID : 1 00:24:20.336 Number of NSID Values : 1 00:24:20.336 Change Count : 0 00:24:20.336 ANA State : 1 00:24:20.336 Namespace Identifier : 1 00:24:20.336 00:24:20.336 Commands Supported and Effects 00:24:20.336 ============================== 00:24:20.336 Admin Commands 00:24:20.337 -------------- 00:24:20.337 Get Log Page (02h): Supported 00:24:20.337 Identify (06h): Supported 00:24:20.337 Abort (08h): Supported 00:24:20.337 Set Features (09h): Supported 00:24:20.337 Get Features (0Ah): Supported 00:24:20.337 Asynchronous Event Request (0Ch): Supported 00:24:20.337 Keep Alive (18h): Supported 00:24:20.337 I/O Commands 00:24:20.337 ------------ 00:24:20.337 Flush (00h): Supported 00:24:20.337 Write (01h): Supported LBA-Change 00:24:20.337 Read (02h): Supported 00:24:20.337 Write Zeroes (08h): Supported LBA-Change 00:24:20.337 Dataset Management (09h): Supported 00:24:20.337 00:24:20.337 Error Log 00:24:20.337 ========= 
00:24:20.337 Entry: 0 00:24:20.337 Error Count: 0x3 00:24:20.337 Submission Queue Id: 0x0 00:24:20.337 Command Id: 0x5 00:24:20.337 Phase Bit: 0 00:24:20.337 Status Code: 0x2 00:24:20.337 Status Code Type: 0x0 00:24:20.337 Do Not Retry: 1 00:24:20.337 Error Location: 0x28 00:24:20.337 LBA: 0x0 00:24:20.337 Namespace: 0x0 00:24:20.337 Vendor Log Page: 0x0 00:24:20.337 ----------- 00:24:20.337 Entry: 1 00:24:20.337 Error Count: 0x2 00:24:20.337 Submission Queue Id: 0x0 00:24:20.337 Command Id: 0x5 00:24:20.337 Phase Bit: 0 00:24:20.337 Status Code: 0x2 00:24:20.337 Status Code Type: 0x0 00:24:20.337 Do Not Retry: 1 00:24:20.337 Error Location: 0x28 00:24:20.337 LBA: 0x0 00:24:20.337 Namespace: 0x0 00:24:20.337 Vendor Log Page: 0x0 00:24:20.337 ----------- 00:24:20.337 Entry: 2 00:24:20.337 Error Count: 0x1 00:24:20.337 Submission Queue Id: 0x0 00:24:20.337 Command Id: 0x4 00:24:20.337 Phase Bit: 0 00:24:20.337 Status Code: 0x2 00:24:20.337 Status Code Type: 0x0 00:24:20.337 Do Not Retry: 1 00:24:20.337 Error Location: 0x28 00:24:20.337 LBA: 0x0 00:24:20.337 Namespace: 0x0 00:24:20.337 Vendor Log Page: 0x0 00:24:20.337 00:24:20.337 Number of Queues 00:24:20.337 ================ 00:24:20.337 Number of I/O Submission Queues: 128 00:24:20.337 Number of I/O Completion Queues: 128 00:24:20.337 00:24:20.337 ZNS Specific Controller Data 00:24:20.337 ============================ 00:24:20.337 Zone Append Size Limit: 0 00:24:20.337 00:24:20.337 00:24:20.337 Active Namespaces 00:24:20.337 ================= 00:24:20.337 get_feature(0x05) failed 00:24:20.337 Namespace ID:1 00:24:20.337 Command Set Identifier: NVM (00h) 00:24:20.337 Deallocate: Supported 00:24:20.337 Deallocated/Unwritten Error: Not Supported 00:24:20.337 Deallocated Read Value: Unknown 00:24:20.337 Deallocate in Write Zeroes: Not Supported 00:24:20.337 Deallocated Guard Field: 0xFFFF 00:24:20.337 Flush: Supported 00:24:20.337 Reservation: Not Supported 00:24:20.337 Namespace Sharing Capabilities: Multiple 
Controllers 00:24:20.337 Size (in LBAs): 1953525168 (931GiB) 00:24:20.337 Capacity (in LBAs): 1953525168 (931GiB) 00:24:20.337 Utilization (in LBAs): 1953525168 (931GiB) 00:24:20.337 UUID: 5b95f823-227f-41ea-b3da-a19bdd24dea5 00:24:20.337 Thin Provisioning: Not Supported 00:24:20.337 Per-NS Atomic Units: Yes 00:24:20.337 Atomic Boundary Size (Normal): 0 00:24:20.337 Atomic Boundary Size (PFail): 0 00:24:20.337 Atomic Boundary Offset: 0 00:24:20.337 NGUID/EUI64 Never Reused: No 00:24:20.337 ANA group ID: 1 00:24:20.337 Namespace Write Protected: No 00:24:20.337 Number of LBA Formats: 1 00:24:20.337 Current LBA Format: LBA Format #00 00:24:20.337 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:20.337 00:24:20.337 21:05:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:24:20.337 21:05:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:20.337 21:05:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:24:20.337 21:05:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:20.337 21:05:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:24:20.337 21:05:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:20.337 21:05:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:20.337 rmmod nvme_tcp 00:24:20.337 rmmod nvme_fabrics 00:24:20.337 21:05:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:20.337 21:05:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:24:20.337 21:05:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:24:20.337 21:05:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 
00:24:20.337 21:05:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:20.337 21:05:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:20.337 21:05:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:20.337 21:05:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:24:20.337 21:05:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:24:20.337 21:05:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:20.337 21:05:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:24:20.337 21:05:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:20.337 21:05:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:20.337 21:05:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:20.337 21:05:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:20.337 21:05:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:22.237 21:05:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:22.237 21:05:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:24:22.237 21:05:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:22.237 21:05:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:24:22.496 21:05:13 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:22.496 21:05:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:22.496 21:05:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:22.496 21:05:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:22.496 21:05:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:24:22.496 21:05:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:24:22.496 21:05:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:23.431 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:23.431 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:23.431 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:23.689 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:23.689 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:23.689 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:23.689 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:23.689 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:23.689 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:23.689 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:23.689 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:23.689 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:23.689 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:23.689 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:23.689 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:23.689 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 
00:24:24.624 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:24:24.624 00:24:24.624 real 0m9.613s 00:24:24.624 user 0m2.088s 00:24:24.624 sys 0m3.534s 00:24:24.624 21:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:24.624 21:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:24.624 ************************************ 00:24:24.624 END TEST nvmf_identify_kernel_target 00:24:24.624 ************************************ 00:24:24.624 21:05:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:24.624 21:05:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:24.624 21:05:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:24.624 21:05:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.882 ************************************ 00:24:24.882 START TEST nvmf_auth_host 00:24:24.882 ************************************ 00:24:24.882 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:24.882 * Looking for test storage... 
00:24:24.882 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:24.882 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:24.882 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:24:24.882 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:24.882 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:24.882 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:24.882 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:24.882 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:24.882 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:24.882 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:24.882 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:24.882 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:24.882 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:24.882 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:24.882 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:24.882 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:24.882 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:24:24.882 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:24:24.882 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:24.882 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:24.882 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:24:24.882 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:24:24.882 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:24.882 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:24:24.882 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:24.882 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:24:24.882 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:24:24.882 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:24.882 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:24:24.882 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:24.882 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:24.882 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:24.882 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:24:24.882 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:24.882 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:24.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:24.882 --rc genhtml_branch_coverage=1 00:24:24.882 --rc genhtml_function_coverage=1 00:24:24.882 --rc genhtml_legend=1 00:24:24.882 --rc geninfo_all_blocks=1 00:24:24.882 --rc geninfo_unexecuted_blocks=1 00:24:24.882 00:24:24.882 ' 00:24:24.882 21:05:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:24.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:24.882 --rc genhtml_branch_coverage=1 00:24:24.882 --rc genhtml_function_coverage=1 00:24:24.882 --rc genhtml_legend=1 00:24:24.882 --rc geninfo_all_blocks=1 00:24:24.883 --rc geninfo_unexecuted_blocks=1 00:24:24.883 00:24:24.883 ' 00:24:24.883 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:24.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:24.883 --rc genhtml_branch_coverage=1 00:24:24.883 --rc genhtml_function_coverage=1 00:24:24.883 --rc genhtml_legend=1 00:24:24.883 --rc geninfo_all_blocks=1 00:24:24.883 --rc geninfo_unexecuted_blocks=1 00:24:24.883 00:24:24.883 ' 00:24:24.883 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:24.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:24.883 --rc genhtml_branch_coverage=1 00:24:24.883 --rc genhtml_function_coverage=1 00:24:24.883 --rc genhtml_legend=1 00:24:24.883 --rc geninfo_all_blocks=1 00:24:24.883 --rc geninfo_unexecuted_blocks=1 00:24:24.883 00:24:24.883 ' 00:24:24.883 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:24.883 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:24:24.883 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:24.883 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:24.883 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:24.883 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:24.883 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:24:24.883 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:24.883 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:24.883 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:24.883 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:24.883 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:24.883 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:24.883 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:24.883 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:24.883 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:24.883 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:24.883 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:24.883 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:24.883 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:24.883 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:24.883 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:24.883 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:24.883 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.883 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.883 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.883 21:05:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:24:24.883 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.883 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:24:24.883 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:24.883 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:24.883 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:24.883 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:24.883 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:24.883 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:24.883 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:24.883 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:24.883 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:24.883 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:24.883 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:24:24.883 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:24:24.883 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:24:24.883 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:24:24.883 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:24.883 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:24.883 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:24:24.883 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:24:24.883 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:24:24.883 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:24.883 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:24.883 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:24.883 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:24.883 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:24.883 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:24.883 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:24.883 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:24.883 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:24.883 21:05:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:24.883 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:24:24.883 21:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:27.415 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:27.415 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:27.415 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:27.415 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:27.415 21:05:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:27.415 21:05:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:27.415 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:27.415 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.302 ms 00:24:27.415 00:24:27.415 --- 10.0.0.2 ping statistics --- 00:24:27.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:27.415 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:24:27.415 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:27.415 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:27.415 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:24:27.415 00:24:27.416 --- 10.0.0.1 ping statistics --- 00:24:27.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:27.416 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:24:27.416 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:27.416 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:24:27.416 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:27.416 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:27.416 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:27.416 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:27.416 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:27.416 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:27.416 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:27.416 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:24:27.416 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:27.416 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:27.416 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.416 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=4065714 00:24:27.416 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:24:27.416 21:05:17 
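The trace above (nvmf/common.sh@250–291) splits one physical NIC's two ports across a network namespace: `cvl_0_0` moves into `cvl_0_0_ns_spdk` as the target side, while `cvl_0_1` stays in the root namespace as the initiator, and a bidirectional ping confirms the link. A condensed sketch of those steps, with names and addresses taken from the log; the `run`/`DRY_RUN` wrapper is an addition here (not in the original script) so the sketch prints commands by default, since the real steps need root and matching hardware:

```shell
# Sketch of the namespace setup performed by nvmf_tcp_init in the log.
# DRY_RUN defaults to 1 (print commands); set DRY_RUN=0 to execute for real.
run() { if [ "${DRY_RUN:-1}" = 1 ]; then echo "$@"; else "$@"; fi; }

NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0     # target-side port, moved into the namespace
INI_IF=cvl_0_1     # initiator-side port, stays in the root namespace

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP port on the initiator-facing interface.
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
# Verify connectivity in both directions, as the log does.
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

Because the target interface lives in its own namespace, every target-side command in the rest of the log is prefixed with `ip netns exec cvl_0_0_ns_spdk`.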
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 4065714 00:24:27.416 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 4065714 ']' 00:24:27.416 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:27.416 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:27.416 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:27.416 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:27.416 21:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.416 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:27.416 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:24:27.416 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:27.416 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:27.416 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.416 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:27.416 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:24:27.416 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:24:27.416 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:27.416 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
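`nvmfappstart` above launches `nvmf_tgt` inside the namespace and then blocks in `waitforlisten` until the app's RPC socket at `/var/tmp/spdk.sock` comes up (the log's "Waiting for process to start up and listen on UNIX domain socket" message, with up to 100 retries). A minimal sketch of that wait loop, simplified relative to the real helper, which also checks that the pid is still alive and probes the socket with an actual RPC:

```shell
# Simplified waitforlisten: poll until a UNIX-domain socket file appears
# at the given path, or give up after N retries. The real SPDK helper
# additionally verifies the target pid and issues a test RPC.
waitforlisten() {
    sock=${1:-/var/tmp/spdk.sock}
    retries=${2:-100}
    i=0
    while [ "$i" -lt "$retries" ]; do
        [ -S "$sock" ] && return 0   # -S: path exists and is a socket
        sleep 0.5
        i=$((i + 1))
    done
    return 1
}
```

On success the test proceeds to issue RPCs against the socket; on timeout the trap registered at nvmf/common.sh@474 tears the setup down.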
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:27.416 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:27.416 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:24:27.416 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:24:27.416 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:27.416 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=8c9c1969d0b06c0ba37012941f15c88f 00:24:27.416 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:24:27.416 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.tLG 00:24:27.416 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 8c9c1969d0b06c0ba37012941f15c88f 0 00:24:27.416 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 8c9c1969d0b06c0ba37012941f15c88f 0 00:24:27.416 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:27.416 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:27.416 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=8c9c1969d0b06c0ba37012941f15c88f 00:24:27.416 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:24:27.416 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:27.416 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.tLG 00:24:27.416 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.tLG 00:24:27.416 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.tLG 00:24:27.416 21:05:18 
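Each `gen_dhchap_key <digest> <len>` call above draws `len/2` random bytes with `xxd -p -c0 /dev/urandom` and then wraps the hex string into a DHHC-1 secret via an inline python snippet (nvmf/common.sh@733, whose body the trace elides). The sketch below is a hypothetical reconstruction of that wrapping, assuming the standard NVMe DH-HMAC-CHAP secret representation: base64 of the key bytes followed by a little-endian CRC-32, prefixed with `DHHC-1:<digest>`:

```shell
# Hypothetical reconstruction of format_dhchap_key: the trace shows its
# inputs (hex key, digest index 0-3 for null/sha256/sha384/sha512) and the
# resulting /tmp/spdk.key-* files, but not the python body itself.
format_dhchap_key() {
    key_hex=$1
    digest=$2
    python3 - "$key_hex" "$digest" <<'PY'
import base64, sys, zlib
key = bytes.fromhex(sys.argv[1])
crc = zlib.crc32(key).to_bytes(4, "little")   # little-endian CRC-32 suffix
print(f"DHHC-1:{int(sys.argv[2]):02x}:{base64.b64encode(key + crc).decode()}:")
PY
}

# Equivalent of "gen_dhchap_key null 32": 16 random bytes as 32 hex
# characters, digest index 0 (null).
key=$(xxd -p -c0 -l 16 /dev/urandom)
format_dhchap_key "$key" 0
```

The resulting secrets are written to `mktemp`-generated files (`/tmp/spdk.key-null.XXX` etc.) and chmod'd 0600, matching the `keys[]`/`ckeys[]` assignments in the trace.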
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:24:27.416 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:27.416 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:27.416 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:27.416 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:24:27.416 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:24:27.416 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:27.416 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b46ffc1e1ed7c3048dedbd5d530c9ba77b2d30c90d88b1711e526605e1983f5a 00:24:27.416 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:24:27.416 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.rre 00:24:27.416 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b46ffc1e1ed7c3048dedbd5d530c9ba77b2d30c90d88b1711e526605e1983f5a 3 00:24:27.416 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b46ffc1e1ed7c3048dedbd5d530c9ba77b2d30c90d88b1711e526605e1983f5a 3 00:24:27.416 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:27.416 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:27.416 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b46ffc1e1ed7c3048dedbd5d530c9ba77b2d30c90d88b1711e526605e1983f5a 00:24:27.416 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:24:27.416 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:24:27.416 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.rre 00:24:27.416 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.rre 00:24:27.416 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.rre 00:24:27.416 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:24:27.416 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:27.416 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:27.416 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:27.416 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:24:27.416 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:24:27.416 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:27.416 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c01ec6eecd5dd54cdd26aca5c1d393a6256a005d9c445a5a 00:24:27.416 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:24:27.416 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.lLx 00:24:27.416 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c01ec6eecd5dd54cdd26aca5c1d393a6256a005d9c445a5a 0 00:24:27.416 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c01ec6eecd5dd54cdd26aca5c1d393a6256a005d9c445a5a 0 00:24:27.416 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:27.416 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:27.416 21:05:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c01ec6eecd5dd54cdd26aca5c1d393a6256a005d9c445a5a 00:24:27.416 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:24:27.416 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:27.675 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.lLx 00:24:27.675 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.lLx 00:24:27.675 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.lLx 00:24:27.675 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:24:27.675 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:27.675 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:27.675 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:27.675 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:24:27.675 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:24:27.675 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:27.675 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3baa93f72d13432972626c18a205f7c6349fdd762dd77d05 00:24:27.675 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:24:27.675 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.3tF 00:24:27.675 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3baa93f72d13432972626c18a205f7c6349fdd762dd77d05 2 00:24:27.675 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
format_key DHHC-1 3baa93f72d13432972626c18a205f7c6349fdd762dd77d05 2 00:24:27.675 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:27.675 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:27.675 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3baa93f72d13432972626c18a205f7c6349fdd762dd77d05 00:24:27.675 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:24:27.675 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:27.675 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.3tF 00:24:27.675 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.3tF 00:24:27.675 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.3tF 00:24:27.675 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=72adcb89ab0ba1e7a070f2f20bcef6bd 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.vqY 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 72adcb89ab0ba1e7a070f2f20bcef6bd 1 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 72adcb89ab0ba1e7a070f2f20bcef6bd 1 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=72adcb89ab0ba1e7a070f2f20bcef6bd 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.vqY 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.vqY 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.vqY 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@755 -- # key=3e457ed9debe343c9eed1dfee78ffaf6 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.RlZ 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3e457ed9debe343c9eed1dfee78ffaf6 1 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3e457ed9debe343c9eed1dfee78ffaf6 1 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3e457ed9debe343c9eed1dfee78ffaf6 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.RlZ 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.RlZ 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.RlZ 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:24:27.676 21:05:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e1ee5de4232483629e86f94ddaf9fdd2fa1a3109a52935f9 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Lxl 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e1ee5de4232483629e86f94ddaf9fdd2fa1a3109a52935f9 2 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e1ee5de4232483629e86f94ddaf9fdd2fa1a3109a52935f9 2 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e1ee5de4232483629e86f94ddaf9fdd2fa1a3109a52935f9 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Lxl 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Lxl 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.Lxl 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3db20696ca6e228903ec5ab8f83d67ea 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.HPg 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3db20696ca6e228903ec5ab8f83d67ea 0 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3db20696ca6e228903ec5ab8f83d67ea 0 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3db20696ca6e228903ec5ab8f83d67ea 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.HPg 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.HPg 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.HPg 00:24:27.676 21:05:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=41f0db46100e16b2ca21f5663355f1fd24769e2b3e7100a2a245713894c92381 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:24:27.676 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.nJc 00:24:27.677 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 41f0db46100e16b2ca21f5663355f1fd24769e2b3e7100a2a245713894c92381 3 00:24:27.677 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 41f0db46100e16b2ca21f5663355f1fd24769e2b3e7100a2a245713894c92381 3 00:24:27.677 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:27.677 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:27.677 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=41f0db46100e16b2ca21f5663355f1fd24769e2b3e7100a2a245713894c92381 00:24:27.677 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:24:27.677 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:24:27.935 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.nJc 00:24:27.935 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.nJc 00:24:27.935 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.nJc 00:24:27.935 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:24:27.935 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 4065714 00:24:27.935 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 4065714 ']' 00:24:27.935 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:27.935 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:27.935 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:27.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
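
The `gen_dhchap_key` traces above all follow the same shape: draw `len/2` random bytes as hex with `xxd`, then pipe through a small `python -` step to produce the `DHHC-1:<digest>:<base64>:` secret written to a `mktemp` file. A minimal standalone sketch of that flow, assuming (as in SPDK's `nvmf/common.sh` and the NVMe DH-HMAC-CHAP secret format) that the base64 payload is the key bytes followed by their CRC32; the CRC byte order shown is an assumption:

```shell
len=32                                            # requested hex length ("null" key in the log uses 32)
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)    # len/2 random bytes, printed as one hex line
file=$(mktemp -t spdk.key-null.XXX)

# Format as a DHHC-1 secret: base64(key || crc32(key)), digest id 0 for "null".
# Little-endian CRC is assumed here, matching SPDK's formatter.
python3 - "$key" 0 > "$file" <<'EOF'
import base64, sys, zlib
key = bytes.fromhex(sys.argv[1])
crc = zlib.crc32(key).to_bytes(4, "little")
print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
EOF

chmod 0600 "$file"
secret=$(cat "$file")
echo "$secret"
```

The resulting file path is what the test later hands to `rpc_cmd keyring_file_add_key`.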
00:24:27.935 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:27.935 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.195 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:28.195 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:24:28.195 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:28.195 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.tLG 00:24:28.195 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.195 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.195 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.195 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.rre ]] 00:24:28.195 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.rre 00:24:28.195 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.195 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.195 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.195 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:28.195 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.lLx 00:24:28.195 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.195 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:24:28.195 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.195 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.3tF ]] 00:24:28.195 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.3tF 00:24:28.195 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.195 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.195 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.195 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:28.195 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.vqY 00:24:28.195 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.195 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.195 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.195 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.RlZ ]] 00:24:28.195 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.RlZ 00:24:28.195 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.195 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.195 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.195 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:28.195 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.Lxl 00:24:28.195 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.195 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.195 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.195 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.HPg ]] 00:24:28.195 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.HPg 00:24:28.195 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.195 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.195 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.195 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:28.195 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.nJc 00:24:28.195 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.195 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.195 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.195 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:24:28.195 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:24:28.195 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:24:28.195 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:28.195 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:28.195 21:05:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:28.195 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:28.195 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:28.195 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:28.195 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:28.195 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:28.195 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:28.195 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:28.195 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:24:28.195 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:24:28.195 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:24:28.195 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:28.195 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:28.195 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:28.195 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:24:28.195 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:24:28.195 21:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:24:28.195 21:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:28.196 21:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:29.128 Waiting for block devices as requested 00:24:29.386 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:24:29.386 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:29.643 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:29.643 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:29.643 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:29.643 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:29.901 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:29.901 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:29.901 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:29.901 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:30.159 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:30.159 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:30.159 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:30.159 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:30.416 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:30.416 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:30.416 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:31.001 21:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:24:31.001 21:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:31.001 21:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:24:31.001 21:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:24:31.001 21:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:24:31.001 21:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:24:31.001 21:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:24:31.001 21:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:24:31.001 21:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:31.001 No valid GPT data, bailing 00:24:31.001 21:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:31.001 21:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:24:31.001 21:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:24:31.001 21:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:24:31.001 21:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:24:31.001 21:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:31.001 21:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:31.001 21:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:31.001 21:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:24:31.001 21:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:24:31.001 21:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:24:31.001 21:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:24:31.001 21:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1 00:24:31.001 21:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:24:31.001 21:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:24:31.001 21:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:24:31.001 21:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:31.001 21:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:24:31.001 00:24:31.001 Discovery Log Number of Records 2, Generation counter 2 00:24:31.001 =====Discovery Log Entry 0====== 00:24:31.001 trtype: tcp 00:24:31.001 adrfam: ipv4 00:24:31.001 subtype: current discovery subsystem 00:24:31.001 treq: not specified, sq flow control disable supported 00:24:31.001 portid: 1 00:24:31.001 trsvcid: 4420 00:24:31.001 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:31.001 traddr: 10.0.0.1 00:24:31.001 eflags: none 00:24:31.001 sectype: none 00:24:31.001 =====Discovery Log Entry 1====== 00:24:31.001 trtype: tcp 00:24:31.001 adrfam: ipv4 00:24:31.001 subtype: nvme subsystem 00:24:31.001 treq: not specified, sq flow control disable supported 00:24:31.001 portid: 1 00:24:31.001 trsvcid: 4420 00:24:31.001 subnqn: nqn.2024-02.io.spdk:cnode0 00:24:31.001 traddr: 10.0.0.1 00:24:31.001 eflags: none 00:24:31.001 sectype: none 00:24:31.001 21:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:31.001 21:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:24:31.001 21:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:31.001 21:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:31.001 21:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:31.001 21:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:31.001 21:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:31.001 21:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:31.001 21:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzAxZWM2ZWVjZDVkZDU0Y2RkMjZhY2E1YzFkMzkzYTYyNTZhMDA1ZDljNDQ1YTVhHT6KDw==: 00:24:31.001 21:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2JhYTkzZjcyZDEzNDMyOTcyNjI2YzE4YTIwNWY3YzYzNDlmZGQ3NjJkZDc3ZDA1y4ttPQ==: 00:24:31.001 21:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:31.001 21:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:31.001 21:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzAxZWM2ZWVjZDVkZDU0Y2RkMjZhY2E1YzFkMzkzYTYyNTZhMDA1ZDljNDQ1YTVhHT6KDw==: 00:24:31.001 21:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2JhYTkzZjcyZDEzNDMyOTcyNjI2YzE4YTIwNWY3YzYzNDlmZGQ3NjJkZDc3ZDA1y4ttPQ==: ]] 00:24:31.001 21:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2JhYTkzZjcyZDEzNDMyOTcyNjI2YzE4YTIwNWY3YzYzNDlmZGQ3NjJkZDc3ZDA1y4ttPQ==: 00:24:31.001 21:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:24:31.001 21:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:24:31.258 21:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:24:31.258 21:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:31.258 21:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:24:31.258 21:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:31.258 21:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:24:31.258 21:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:31.258 21:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:31.258 21:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:31.258 21:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:31.258 21:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.258 21:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.258 21:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.258 21:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:31.258 21:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:31.258 21:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:31.258 21:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:31.258 21:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:31.258 21:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:31.258 21:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:31.258 21:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:31.258 21:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:31.258 21:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:31.258 21:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:31.258 21:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:31.258 21:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.258 21:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.258 nvme0n1 00:24:31.258 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.258 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:31.258 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:31.258 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.258 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.258 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.258 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:31.258 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:31.258 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:24:31.258 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.258 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.258 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:31.258 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:31.258 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:31.258 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:24:31.258 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:31.258 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:31.258 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:31.258 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:31.258 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGM5YzE5NjlkMGIwNmMwYmEzNzAxMjk0MWYxNWM4OGZS474D: 00:24:31.258 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjQ2ZmZjMWUxZWQ3YzMwNDhkZWRiZDVkNTMwYzliYTc3YjJkMzBjOTBkODhiMTcxMWU1MjY2MDVlMTk4M2Y1YYmjEBk=: 00:24:31.258 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:31.258 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:31.258 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGM5YzE5NjlkMGIwNmMwYmEzNzAxMjk0MWYxNWM4OGZS474D: 00:24:31.258 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjQ2ZmZjMWUxZWQ3YzMwNDhkZWRiZDVkNTMwYzliYTc3YjJkMzBjOTBkODhiMTcxMWU1MjY2MDVlMTk4M2Y1YYmjEBk=: ]] 00:24:31.258 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YjQ2ZmZjMWUxZWQ3YzMwNDhkZWRiZDVkNTMwYzliYTc3YjJkMzBjOTBkODhiMTcxMWU1MjY2MDVlMTk4M2Y1YYmjEBk=: 00:24:31.258 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:24:31.258 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:31.258 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:31.258 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:31.258 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:31.258 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:31.258 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:31.258 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.258 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.258 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.258 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:31.258 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:31.258 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:31.258 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:31.258 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:31.258 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:31.258 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:24:31.258 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:31.258 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:31.258 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:31.258 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:31.258 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:31.258 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.258 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.515 nvme0n1 00:24:31.515 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.515 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:31.515 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:31.515 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.515 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.515 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.515 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:31.515 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:31.515 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.515 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.515 21:05:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.515 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:31.515 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:31.515 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:31.516 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:31.516 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:31.516 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:31.516 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzAxZWM2ZWVjZDVkZDU0Y2RkMjZhY2E1YzFkMzkzYTYyNTZhMDA1ZDljNDQ1YTVhHT6KDw==: 00:24:31.516 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2JhYTkzZjcyZDEzNDMyOTcyNjI2YzE4YTIwNWY3YzYzNDlmZGQ3NjJkZDc3ZDA1y4ttPQ==: 00:24:31.516 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:31.516 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:31.516 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzAxZWM2ZWVjZDVkZDU0Y2RkMjZhY2E1YzFkMzkzYTYyNTZhMDA1ZDljNDQ1YTVhHT6KDw==: 00:24:31.516 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2JhYTkzZjcyZDEzNDMyOTcyNjI2YzE4YTIwNWY3YzYzNDlmZGQ3NjJkZDc3ZDA1y4ttPQ==: ]] 00:24:31.516 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2JhYTkzZjcyZDEzNDMyOTcyNjI2YzE4YTIwNWY3YzYzNDlmZGQ3NjJkZDc3ZDA1y4ttPQ==: 00:24:31.516 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:24:31.516 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:31.516 
21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:31.516 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:31.516 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:31.516 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:31.516 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:31.516 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.516 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.516 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.516 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:31.516 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:31.516 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:31.516 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:31.516 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:31.516 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:31.516 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:31.516 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:31.516 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:31.516 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:31.516 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:31.516 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:31.516 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.516 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.773 nvme0n1 00:24:31.773 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.773 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:31.773 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:31.773 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.773 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.773 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.773 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:31.773 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:31.773 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.773 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.773 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.773 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:31.773 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:31.773 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:31.773 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:31.773 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:31.773 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:31.773 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzJhZGNiODlhYjBiYTFlN2EwNzBmMmYyMGJjZWY2YmRufAPw: 00:24:31.773 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2U0NTdlZDlkZWJlMzQzYzllZWQxZGZlZTc4ZmZhZjb4eL33: 00:24:31.773 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:31.773 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:31.773 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzJhZGNiODlhYjBiYTFlN2EwNzBmMmYyMGJjZWY2YmRufAPw: 00:24:31.773 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2U0NTdlZDlkZWJlMzQzYzllZWQxZGZlZTc4ZmZhZjb4eL33: ]] 00:24:31.773 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2U0NTdlZDlkZWJlMzQzYzllZWQxZGZlZTc4ZmZhZjb4eL33: 00:24:31.773 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:24:31.773 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:31.773 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:31.773 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:31.773 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:31.773 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:31.773 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:31.773 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.773 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.773 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.773 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:31.773 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:31.773 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:31.773 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:31.773 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:31.773 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:31.773 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:31.773 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:31.773 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:31.773 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:31.773 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:31.773 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:31.773 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.773 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:24:32.031 nvme0n1 00:24:32.031 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.031 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:32.031 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.031 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.031 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:32.031 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.031 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.031 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:32.031 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.031 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.031 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.031 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:32.031 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:24:32.031 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:32.031 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:32.031 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:32.031 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:32.031 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZTFlZTVkZTQyMzI0ODM2MjllODZmOTRkZGFmOWZkZDJmYTFhMzEwOWE1MjkzNWY5QgQkkA==: 00:24:32.031 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2RiMjA2OTZjYTZlMjI4OTAzZWM1YWI4ZjgzZDY3ZWG6MnMn: 00:24:32.031 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:32.031 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:32.031 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTFlZTVkZTQyMzI0ODM2MjllODZmOTRkZGFmOWZkZDJmYTFhMzEwOWE1MjkzNWY5QgQkkA==: 00:24:32.031 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2RiMjA2OTZjYTZlMjI4OTAzZWM1YWI4ZjgzZDY3ZWG6MnMn: ]] 00:24:32.031 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2RiMjA2OTZjYTZlMjI4OTAzZWM1YWI4ZjgzZDY3ZWG6MnMn: 00:24:32.031 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:24:32.031 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:32.031 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:32.031 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:32.031 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:32.031 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:32.031 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:32.031 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.031 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.031 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.031 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:32.031 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:32.031 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:32.031 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:32.031 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:32.031 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:32.031 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:32.031 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:32.031 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:32.031 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:32.031 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:32.031 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:32.031 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.031 21:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.289 nvme0n1 00:24:32.289 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.289 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:32.289 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:24:32.289 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.289 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.289 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.289 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.289 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:32.289 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.289 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.289 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.289 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:32.289 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:24:32.289 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:32.289 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:32.289 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:32.289 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:32.289 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDFmMGRiNDYxMDBlMTZiMmNhMjFmNTY2MzM1NWYxZmQyNDc2OWUyYjNlNzEwMGEyYTI0NTcxMzg5NGM5MjM4Mbkv6fs=: 00:24:32.289 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:32.289 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:32.289 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:32.289 21:05:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDFmMGRiNDYxMDBlMTZiMmNhMjFmNTY2MzM1NWYxZmQyNDc2OWUyYjNlNzEwMGEyYTI0NTcxMzg5NGM5MjM4Mbkv6fs=: 00:24:32.289 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:32.289 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:24:32.289 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:32.289 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:32.289 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:32.289 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:32.289 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:32.289 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:32.289 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.289 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.289 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.289 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:32.289 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:32.289 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:32.289 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:32.289 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:32.289 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:32.289 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:32.289 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:32.289 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:32.289 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:32.289 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:32.289 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:32.289 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.289 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.547 nvme0n1 00:24:32.547 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.547 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:32.547 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.547 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.547 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:32.547 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.547 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.547 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:32.547 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.547 
21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.547 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.547 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:32.547 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:32.547 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:24:32.547 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:32.547 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:32.547 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:32.547 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:32.547 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGM5YzE5NjlkMGIwNmMwYmEzNzAxMjk0MWYxNWM4OGZS474D: 00:24:32.547 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjQ2ZmZjMWUxZWQ3YzMwNDhkZWRiZDVkNTMwYzliYTc3YjJkMzBjOTBkODhiMTcxMWU1MjY2MDVlMTk4M2Y1YYmjEBk=: 00:24:32.547 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:32.547 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:32.547 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGM5YzE5NjlkMGIwNmMwYmEzNzAxMjk0MWYxNWM4OGZS474D: 00:24:32.547 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjQ2ZmZjMWUxZWQ3YzMwNDhkZWRiZDVkNTMwYzliYTc3YjJkMzBjOTBkODhiMTcxMWU1MjY2MDVlMTk4M2Y1YYmjEBk=: ]] 00:24:32.547 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjQ2ZmZjMWUxZWQ3YzMwNDhkZWRiZDVkNTMwYzliYTc3YjJkMzBjOTBkODhiMTcxMWU1MjY2MDVlMTk4M2Y1YYmjEBk=: 00:24:32.547 
21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:24:32.547 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:32.547 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:32.547 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:32.547 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:32.547 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:32.547 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:32.547 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.547 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.547 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.547 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:32.547 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:32.547 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:32.547 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:32.547 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:32.547 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:32.547 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:32.547 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:32.547 21:05:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:32.547 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:32.547 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:32.547 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:32.547 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.547 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.805 nvme0n1 00:24:32.805 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.805 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:32.805 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.805 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.805 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:32.805 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.805 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.805 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:32.805 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.805 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.805 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.805 21:05:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:32.805 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:24:32.805 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:32.805 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:32.805 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:32.805 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:32.805 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzAxZWM2ZWVjZDVkZDU0Y2RkMjZhY2E1YzFkMzkzYTYyNTZhMDA1ZDljNDQ1YTVhHT6KDw==: 00:24:32.805 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2JhYTkzZjcyZDEzNDMyOTcyNjI2YzE4YTIwNWY3YzYzNDlmZGQ3NjJkZDc3ZDA1y4ttPQ==: 00:24:32.805 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:32.805 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:32.805 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzAxZWM2ZWVjZDVkZDU0Y2RkMjZhY2E1YzFkMzkzYTYyNTZhMDA1ZDljNDQ1YTVhHT6KDw==: 00:24:32.805 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2JhYTkzZjcyZDEzNDMyOTcyNjI2YzE4YTIwNWY3YzYzNDlmZGQ3NjJkZDc3ZDA1y4ttPQ==: ]] 00:24:32.805 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2JhYTkzZjcyZDEzNDMyOTcyNjI2YzE4YTIwNWY3YzYzNDlmZGQ3NjJkZDc3ZDA1y4ttPQ==: 00:24:32.805 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:24:32.805 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:32.805 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:32.805 21:05:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:32.805 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:32.805 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:32.805 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:32.805 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.805 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.805 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.805 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:32.805 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:32.805 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:32.805 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:32.805 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:32.805 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:32.805 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:32.805 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:32.805 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:32.805 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:32.805 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:32.805 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:32.805 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.805 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.062 nvme0n1 00:24:33.062 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.062 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:33.062 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:33.062 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.062 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.063 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.063 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.063 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:33.063 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.063 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.063 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.063 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:33.063 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:24:33.063 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:33.063 21:05:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:33.063 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:33.063 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:33.063 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzJhZGNiODlhYjBiYTFlN2EwNzBmMmYyMGJjZWY2YmRufAPw: 00:24:33.063 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2U0NTdlZDlkZWJlMzQzYzllZWQxZGZlZTc4ZmZhZjb4eL33: 00:24:33.063 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:33.063 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:33.063 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzJhZGNiODlhYjBiYTFlN2EwNzBmMmYyMGJjZWY2YmRufAPw: 00:24:33.063 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2U0NTdlZDlkZWJlMzQzYzllZWQxZGZlZTc4ZmZhZjb4eL33: ]] 00:24:33.063 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2U0NTdlZDlkZWJlMzQzYzllZWQxZGZlZTc4ZmZhZjb4eL33: 00:24:33.063 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:24:33.063 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:33.063 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:33.063 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:33.063 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:33.063 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:33.063 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:24:33.063 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.063 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.063 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.063 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:33.063 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:33.063 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:33.063 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:33.063 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:33.063 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:33.063 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:33.063 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:33.063 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:33.063 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:33.063 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:33.063 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:33.063 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.063 21:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.320 nvme0n1 00:24:33.320 21:05:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.320 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:33.320 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.320 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:33.320 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.320 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.320 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.320 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:33.320 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.320 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.320 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.320 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:33.320 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:24:33.320 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:33.320 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:33.320 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:33.320 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:33.320 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTFlZTVkZTQyMzI0ODM2MjllODZmOTRkZGFmOWZkZDJmYTFhMzEwOWE1MjkzNWY5QgQkkA==: 00:24:33.320 21:05:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2RiMjA2OTZjYTZlMjI4OTAzZWM1YWI4ZjgzZDY3ZWG6MnMn: 00:24:33.320 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:33.320 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:33.320 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTFlZTVkZTQyMzI0ODM2MjllODZmOTRkZGFmOWZkZDJmYTFhMzEwOWE1MjkzNWY5QgQkkA==: 00:24:33.320 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2RiMjA2OTZjYTZlMjI4OTAzZWM1YWI4ZjgzZDY3ZWG6MnMn: ]] 00:24:33.320 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2RiMjA2OTZjYTZlMjI4OTAzZWM1YWI4ZjgzZDY3ZWG6MnMn: 00:24:33.320 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:24:33.320 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:33.320 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:33.320 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:33.320 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:33.320 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:33.320 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:33.320 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.320 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.320 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.320 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:24:33.320 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:33.320 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:33.320 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:33.320 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:33.320 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:33.320 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:33.320 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:33.320 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:33.320 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:33.320 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:33.320 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:33.320 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.320 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.578 nvme0n1 00:24:33.578 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.578 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:33.578 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:33.578 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:24:33.578 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.578 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.578 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.578 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:33.578 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.578 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.578 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.578 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:33.578 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:24:33.578 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:33.578 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:33.578 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:33.578 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:33.578 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDFmMGRiNDYxMDBlMTZiMmNhMjFmNTY2MzM1NWYxZmQyNDc2OWUyYjNlNzEwMGEyYTI0NTcxMzg5NGM5MjM4Mbkv6fs=: 00:24:33.578 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:33.578 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:33.578 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:33.578 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NDFmMGRiNDYxMDBlMTZiMmNhMjFmNTY2MzM1NWYxZmQyNDc2OWUyYjNlNzEwMGEyYTI0NTcxMzg5NGM5MjM4Mbkv6fs=: 00:24:33.578 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:33.578 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:24:33.578 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:33.578 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:33.578 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:33.578 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:33.578 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:33.578 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:33.578 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.578 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.578 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.578 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:33.578 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:33.578 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:33.578 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:33.578 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:33.578 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:33.578 21:05:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:33.578 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:33.578 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:33.578 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:33.578 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:33.578 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:33.578 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.578 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.836 nvme0n1 00:24:33.836 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.836 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:33.836 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:33.836 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.836 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.836 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.836 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.836 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:33.836 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.836 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:33.836 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.836 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:33.836 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:33.836 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:24:33.836 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:33.836 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:33.836 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:33.836 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:33.836 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGM5YzE5NjlkMGIwNmMwYmEzNzAxMjk0MWYxNWM4OGZS474D: 00:24:33.836 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjQ2ZmZjMWUxZWQ3YzMwNDhkZWRiZDVkNTMwYzliYTc3YjJkMzBjOTBkODhiMTcxMWU1MjY2MDVlMTk4M2Y1YYmjEBk=: 00:24:33.836 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:33.836 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:33.836 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGM5YzE5NjlkMGIwNmMwYmEzNzAxMjk0MWYxNWM4OGZS474D: 00:24:33.836 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjQ2ZmZjMWUxZWQ3YzMwNDhkZWRiZDVkNTMwYzliYTc3YjJkMzBjOTBkODhiMTcxMWU1MjY2MDVlMTk4M2Y1YYmjEBk=: ]] 00:24:33.836 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjQ2ZmZjMWUxZWQ3YzMwNDhkZWRiZDVkNTMwYzliYTc3YjJkMzBjOTBkODhiMTcxMWU1MjY2MDVlMTk4M2Y1YYmjEBk=: 00:24:33.836 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:24:33.836 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:33.836 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:33.836 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:33.836 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:33.836 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:33.836 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:33.836 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.836 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.836 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.836 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:33.836 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:33.836 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:33.836 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:33.836 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:33.836 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:33.836 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:33.836 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:33.836 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:24:33.836 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:33.836 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:33.836 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:33.836 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.836 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.093 nvme0n1 00:24:34.093 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.093 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.093 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.093 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:34.093 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.093 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.093 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:34.093 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:34.093 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.093 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.093 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.093 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:24:34.093 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:24:34.093 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:34.093 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:34.093 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:34.093 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:34.093 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzAxZWM2ZWVjZDVkZDU0Y2RkMjZhY2E1YzFkMzkzYTYyNTZhMDA1ZDljNDQ1YTVhHT6KDw==: 00:24:34.093 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2JhYTkzZjcyZDEzNDMyOTcyNjI2YzE4YTIwNWY3YzYzNDlmZGQ3NjJkZDc3ZDA1y4ttPQ==: 00:24:34.093 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:34.093 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:34.093 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzAxZWM2ZWVjZDVkZDU0Y2RkMjZhY2E1YzFkMzkzYTYyNTZhMDA1ZDljNDQ1YTVhHT6KDw==: 00:24:34.093 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2JhYTkzZjcyZDEzNDMyOTcyNjI2YzE4YTIwNWY3YzYzNDlmZGQ3NjJkZDc3ZDA1y4ttPQ==: ]] 00:24:34.093 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2JhYTkzZjcyZDEzNDMyOTcyNjI2YzE4YTIwNWY3YzYzNDlmZGQ3NjJkZDc3ZDA1y4ttPQ==: 00:24:34.093 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:24:34.093 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:34.093 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:34.093 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:34.093 
21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:34.093 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:34.093 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:34.093 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.093 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.093 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.093 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:34.093 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:34.093 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:34.093 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:34.093 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:34.093 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:34.093 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:34.093 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:34.093 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:34.093 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:34.093 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:34.093 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:34.093 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.093 21:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.351 nvme0n1 00:24:34.351 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.351 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.351 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.351 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.351 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:34.608 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.608 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:34.608 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:34.608 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.608 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.608 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.608 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:34.608 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:24:34.608 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:34.608 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:34.608 21:05:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:34.608 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:34.608 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzJhZGNiODlhYjBiYTFlN2EwNzBmMmYyMGJjZWY2YmRufAPw: 00:24:34.608 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2U0NTdlZDlkZWJlMzQzYzllZWQxZGZlZTc4ZmZhZjb4eL33: 00:24:34.608 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:34.608 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:34.608 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzJhZGNiODlhYjBiYTFlN2EwNzBmMmYyMGJjZWY2YmRufAPw: 00:24:34.608 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2U0NTdlZDlkZWJlMzQzYzllZWQxZGZlZTc4ZmZhZjb4eL33: ]] 00:24:34.608 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2U0NTdlZDlkZWJlMzQzYzllZWQxZGZlZTc4ZmZhZjb4eL33: 00:24:34.608 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:24:34.608 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:34.608 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:34.608 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:34.608 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:34.608 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:34.608 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:34.608 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable
00:24:34.608 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:34.608 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:34.608 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:34.608 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:34.608 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:34.608 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:34.608 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:34.608 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:34.608 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:34.608 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:34.608 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:34.608 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:34.608 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:34.608 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:24:34.608 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:34.608 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:34.866 nvme0n1
00:24:34.866 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:34.866 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:34.866 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:34.866 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:34.866 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:34.866 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:34.866 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:34.866 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:34.866 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:34.866 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:34.866 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:34.866 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:34.866 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3
00:24:34.866 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:34.866 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:34.866 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:24:34.866 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:24:34.867 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTFlZTVkZTQyMzI0ODM2MjllODZmOTRkZGFmOWZkZDJmYTFhMzEwOWE1MjkzNWY5QgQkkA==:
00:24:34.867 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2RiMjA2OTZjYTZlMjI4OTAzZWM1YWI4ZjgzZDY3ZWG6MnMn:
00:24:34.867 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:34.867 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:24:34.867 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTFlZTVkZTQyMzI0ODM2MjllODZmOTRkZGFmOWZkZDJmYTFhMzEwOWE1MjkzNWY5QgQkkA==:
00:24:34.867 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2RiMjA2OTZjYTZlMjI4OTAzZWM1YWI4ZjgzZDY3ZWG6MnMn: ]]
00:24:34.867 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2RiMjA2OTZjYTZlMjI4OTAzZWM1YWI4ZjgzZDY3ZWG6MnMn:
00:24:34.867 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3
00:24:34.867 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:34.867 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:34.867 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:24:34.867 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:24:34.867 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:34.867 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:24:34.867 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:34.867 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:34.867 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:34.867 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:34.867 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:34.867 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:34.867 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:34.867 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:34.867 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:34.867 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:34.867 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:34.867 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:34.867 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:34.867 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:34.867 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:24:34.867 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:34.867 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:35.125 nvme0n1
00:24:35.125 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:35.125 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:35.125 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:35.125 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:35.125 21:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:35.125 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:35.125 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:35.125 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:35.125 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:35.125 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:35.125 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:35.125 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:35.125 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4
00:24:35.125 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:35.125 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:35.125 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:24:35.125 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:24:35.125 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDFmMGRiNDYxMDBlMTZiMmNhMjFmNTY2MzM1NWYxZmQyNDc2OWUyYjNlNzEwMGEyYTI0NTcxMzg5NGM5MjM4Mbkv6fs=:
00:24:35.125 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:24:35.125 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:35.125 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:24:35.125 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDFmMGRiNDYxMDBlMTZiMmNhMjFmNTY2MzM1NWYxZmQyNDc2OWUyYjNlNzEwMGEyYTI0NTcxMzg5NGM5MjM4Mbkv6fs=:
00:24:35.125 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:24:35.125 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4
00:24:35.125 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:35.125 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:35.125 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:24:35.125 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:24:35.125 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:35.125 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:24:35.125 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:35.125 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:35.125 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:35.125 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:35.125 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:35.125 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:35.125 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:35.125 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:35.125 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:35.125 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:35.125 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:35.125 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:35.125 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:35.125 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:35.125 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:24:35.125 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:35.125 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:35.690 nvme0n1
00:24:35.690 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:35.690 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:35.690 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:35.690 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:35.690 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:35.690 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:35.690 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:35.690 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:35.690 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:35.690 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:35.690 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:35.690 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:24:35.690 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:35.690 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0
00:24:35.691 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:35.691 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:35.691 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:24:35.691 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:24:35.691 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGM5YzE5NjlkMGIwNmMwYmEzNzAxMjk0MWYxNWM4OGZS474D:
00:24:35.691 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjQ2ZmZjMWUxZWQ3YzMwNDhkZWRiZDVkNTMwYzliYTc3YjJkMzBjOTBkODhiMTcxMWU1MjY2MDVlMTk4M2Y1YYmjEBk=:
00:24:35.691 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:35.691 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:24:35.691 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGM5YzE5NjlkMGIwNmMwYmEzNzAxMjk0MWYxNWM4OGZS474D:
00:24:35.691 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjQ2ZmZjMWUxZWQ3YzMwNDhkZWRiZDVkNTMwYzliYTc3YjJkMzBjOTBkODhiMTcxMWU1MjY2MDVlMTk4M2Y1YYmjEBk=: ]]
00:24:35.691 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjQ2ZmZjMWUxZWQ3YzMwNDhkZWRiZDVkNTMwYzliYTc3YjJkMzBjOTBkODhiMTcxMWU1MjY2MDVlMTk4M2Y1YYmjEBk=:
00:24:35.691 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0
00:24:35.691 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:35.691 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:35.691 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:24:35.691 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:24:35.691 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:35.691 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:24:35.691 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:35.691 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:35.691 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:35.691 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:35.691 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:35.691 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:35.691 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:35.691 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:35.691 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:35.691 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:35.691 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:35.691 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:35.691 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:35.691 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:35.691 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:24:35.691 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:35.691 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:36.256 nvme0n1
00:24:36.256 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:36.256 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:36.256 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:36.256 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:36.256 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:36.256 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:36.256 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:36.256 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:36.256 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:36.256 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:36.256 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:36.256 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:36.256 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1
00:24:36.256 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:36.256 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:36.256 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:24:36.256 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:24:36.256 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzAxZWM2ZWVjZDVkZDU0Y2RkMjZhY2E1YzFkMzkzYTYyNTZhMDA1ZDljNDQ1YTVhHT6KDw==:
00:24:36.256 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2JhYTkzZjcyZDEzNDMyOTcyNjI2YzE4YTIwNWY3YzYzNDlmZGQ3NjJkZDc3ZDA1y4ttPQ==:
00:24:36.256 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:36.256 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:24:36.256 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzAxZWM2ZWVjZDVkZDU0Y2RkMjZhY2E1YzFkMzkzYTYyNTZhMDA1ZDljNDQ1YTVhHT6KDw==:
00:24:36.256 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2JhYTkzZjcyZDEzNDMyOTcyNjI2YzE4YTIwNWY3YzYzNDlmZGQ3NjJkZDc3ZDA1y4ttPQ==: ]]
00:24:36.256 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2JhYTkzZjcyZDEzNDMyOTcyNjI2YzE4YTIwNWY3YzYzNDlmZGQ3NjJkZDc3ZDA1y4ttPQ==:
00:24:36.256 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1
00:24:36.256 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:36.256 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:36.256 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:24:36.256 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:24:36.256 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:36.256 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:24:36.256 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:36.256 21:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:36.256 21:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:36.256 21:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:36.256 21:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:36.256 21:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:36.256 21:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:36.256 21:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:36.256 21:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:36.256 21:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:36.256 21:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:36.256 21:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:36.256 21:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:36.256 21:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:36.256 21:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:24:36.256 21:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:36.256 21:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:36.822 nvme0n1
00:24:36.822 21:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:36.822 21:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:36.822 21:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:36.822 21:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:36.822 21:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:36.822 21:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:36.822 21:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:36.822 21:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:36.822 21:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:36.822 21:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:36.822 21:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:36.822 21:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:36.822 21:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2
00:24:36.822 21:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:36.822 21:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:36.822 21:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:24:36.822 21:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:24:36.822 21:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzJhZGNiODlhYjBiYTFlN2EwNzBmMmYyMGJjZWY2YmRufAPw:
00:24:36.822 21:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2U0NTdlZDlkZWJlMzQzYzllZWQxZGZlZTc4ZmZhZjb4eL33:
00:24:36.822 21:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:36.822 21:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:24:36.822 21:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzJhZGNiODlhYjBiYTFlN2EwNzBmMmYyMGJjZWY2YmRufAPw:
00:24:36.822 21:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2U0NTdlZDlkZWJlMzQzYzllZWQxZGZlZTc4ZmZhZjb4eL33: ]]
00:24:36.822 21:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2U0NTdlZDlkZWJlMzQzYzllZWQxZGZlZTc4ZmZhZjb4eL33:
00:24:36.822 21:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2
00:24:36.822 21:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:36.822 21:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:36.822 21:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:24:36.822 21:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:24:36.822 21:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:36.822 21:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:24:36.822 21:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:36.822 21:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:36.822 21:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:36.822 21:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:36.822 21:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:36.822 21:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:36.822 21:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:36.822 21:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:36.822 21:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:36.822 21:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:36.822 21:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:36.822 21:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:36.822 21:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:36.822 21:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:36.822 21:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:24:36.822 21:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:36.822 21:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:37.426 nvme0n1
00:24:37.426 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:37.426 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:37.426 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:37.426 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:37.426 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:37.426 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:37.426 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:37.427 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:37.427 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:37.427 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:37.427 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:37.427 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:37.427 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3
00:24:37.427 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:37.427 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:37.427 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:24:37.427 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:24:37.427 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTFlZTVkZTQyMzI0ODM2MjllODZmOTRkZGFmOWZkZDJmYTFhMzEwOWE1MjkzNWY5QgQkkA==:
00:24:37.427 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2RiMjA2OTZjYTZlMjI4OTAzZWM1YWI4ZjgzZDY3ZWG6MnMn:
00:24:37.427 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:37.427 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:24:37.427 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTFlZTVkZTQyMzI0ODM2MjllODZmOTRkZGFmOWZkZDJmYTFhMzEwOWE1MjkzNWY5QgQkkA==:
00:24:37.427 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2RiMjA2OTZjYTZlMjI4OTAzZWM1YWI4ZjgzZDY3ZWG6MnMn: ]]
00:24:37.427 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2RiMjA2OTZjYTZlMjI4OTAzZWM1YWI4ZjgzZDY3ZWG6MnMn:
00:24:37.427 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3
00:24:37.427 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:37.427 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:37.427 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:24:37.427 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:24:37.427 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:37.427 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:24:37.427 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:37.427 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:37.427 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:37.427 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:37.427 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:37.427 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:37.427 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:37.427 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:37.427 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:37.427 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:37.427 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:37.427 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:37.427 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:37.427 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:37.427 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:24:37.427 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:37.427 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:38.004 nvme0n1
00:24:38.004 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:38.004 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:38.004 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:38.004 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:38.004 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:38.004 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:38.004 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:38.004 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:38.004 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:38.004 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:38.004 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:38.004 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:38.004 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4
00:24:38.004 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:38.004 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:38.004 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:24:38.004 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:24:38.004 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDFmMGRiNDYxMDBlMTZiMmNhMjFmNTY2MzM1NWYxZmQyNDc2OWUyYjNlNzEwMGEyYTI0NTcxMzg5NGM5MjM4Mbkv6fs=:
00:24:38.004 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:24:38.004 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:38.004 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:24:38.004 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDFmMGRiNDYxMDBlMTZiMmNhMjFmNTY2MzM1NWYxZmQyNDc2OWUyYjNlNzEwMGEyYTI0NTcxMzg5NGM5MjM4Mbkv6fs=:
00:24:38.004 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:24:38.004 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4
00:24:38.004 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:38.004 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:38.004 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:24:38.004 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:24:38.004 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:38.004 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:24:38.004 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:38.004 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:38.004 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:38.004 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:38.004 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:38.005 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:38.005 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:38.005 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:38.005 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:38.005 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:38.005 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:38.005 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:38.005 21:05:28
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:38.005 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:38.005 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:38.005 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.005 21:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.570 nvme0n1 00:24:38.570 21:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.570 21:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:38.570 21:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:38.570 21:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.570 21:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.570 21:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.570 21:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.570 21:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:38.570 21:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.570 21:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.570 21:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.570 21:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:38.570 21:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:38.570 21:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:24:38.570 21:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:38.570 21:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:38.570 21:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:38.570 21:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:38.570 21:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGM5YzE5NjlkMGIwNmMwYmEzNzAxMjk0MWYxNWM4OGZS474D: 00:24:38.570 21:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjQ2ZmZjMWUxZWQ3YzMwNDhkZWRiZDVkNTMwYzliYTc3YjJkMzBjOTBkODhiMTcxMWU1MjY2MDVlMTk4M2Y1YYmjEBk=: 00:24:38.570 21:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:38.570 21:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:38.570 21:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGM5YzE5NjlkMGIwNmMwYmEzNzAxMjk0MWYxNWM4OGZS474D: 00:24:38.570 21:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjQ2ZmZjMWUxZWQ3YzMwNDhkZWRiZDVkNTMwYzliYTc3YjJkMzBjOTBkODhiMTcxMWU1MjY2MDVlMTk4M2Y1YYmjEBk=: ]] 00:24:38.570 21:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjQ2ZmZjMWUxZWQ3YzMwNDhkZWRiZDVkNTMwYzliYTc3YjJkMzBjOTBkODhiMTcxMWU1MjY2MDVlMTk4M2Y1YYmjEBk=: 00:24:38.570 21:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:24:38.570 21:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:38.570 21:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:38.570 21:05:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:38.570 21:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:38.570 21:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:38.570 21:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:38.570 21:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.570 21:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.570 21:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.570 21:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:38.570 21:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:38.570 21:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:38.570 21:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:38.570 21:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:38.570 21:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:38.570 21:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:38.570 21:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:38.570 21:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:38.570 21:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:38.570 21:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:38.570 21:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:38.570 21:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.570 21:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.505 nvme0n1 00:24:39.505 21:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.505 21:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:39.505 21:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.505 21:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.505 21:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:39.505 21:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.505 21:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:39.505 21:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:39.505 21:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.505 21:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.505 21:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.505 21:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:39.505 21:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:24:39.505 21:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:39.505 21:05:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:39.505 21:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:39.505 21:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:39.505 21:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzAxZWM2ZWVjZDVkZDU0Y2RkMjZhY2E1YzFkMzkzYTYyNTZhMDA1ZDljNDQ1YTVhHT6KDw==: 00:24:39.505 21:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2JhYTkzZjcyZDEzNDMyOTcyNjI2YzE4YTIwNWY3YzYzNDlmZGQ3NjJkZDc3ZDA1y4ttPQ==: 00:24:39.505 21:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:39.505 21:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:39.505 21:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzAxZWM2ZWVjZDVkZDU0Y2RkMjZhY2E1YzFkMzkzYTYyNTZhMDA1ZDljNDQ1YTVhHT6KDw==: 00:24:39.505 21:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2JhYTkzZjcyZDEzNDMyOTcyNjI2YzE4YTIwNWY3YzYzNDlmZGQ3NjJkZDc3ZDA1y4ttPQ==: ]] 00:24:39.505 21:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2JhYTkzZjcyZDEzNDMyOTcyNjI2YzE4YTIwNWY3YzYzNDlmZGQ3NjJkZDc3ZDA1y4ttPQ==: 00:24:39.505 21:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:24:39.505 21:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:39.505 21:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:39.505 21:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:39.505 21:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:39.505 21:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:39.505 21:05:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:39.505 21:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.505 21:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.505 21:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.505 21:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:39.505 21:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:39.505 21:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:39.505 21:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:39.505 21:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:39.505 21:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:39.505 21:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:39.505 21:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:39.505 21:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:39.505 21:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:39.505 21:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:39.505 21:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:39.505 21:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.505 21:05:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.439 nvme0n1 00:24:40.439 21:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.439 21:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:40.439 21:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.439 21:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:40.439 21:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.697 21:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.697 21:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:40.697 21:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:40.697 21:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.697 21:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.697 21:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.697 21:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:40.697 21:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:24:40.697 21:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:40.697 21:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:40.697 21:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:40.697 21:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:40.697 21:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:NzJhZGNiODlhYjBiYTFlN2EwNzBmMmYyMGJjZWY2YmRufAPw: 00:24:40.697 21:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2U0NTdlZDlkZWJlMzQzYzllZWQxZGZlZTc4ZmZhZjb4eL33: 00:24:40.697 21:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:40.698 21:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:40.698 21:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzJhZGNiODlhYjBiYTFlN2EwNzBmMmYyMGJjZWY2YmRufAPw: 00:24:40.698 21:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2U0NTdlZDlkZWJlMzQzYzllZWQxZGZlZTc4ZmZhZjb4eL33: ]] 00:24:40.698 21:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2U0NTdlZDlkZWJlMzQzYzllZWQxZGZlZTc4ZmZhZjb4eL33: 00:24:40.698 21:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:24:40.698 21:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:40.698 21:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:40.698 21:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:40.698 21:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:40.698 21:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:40.698 21:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:40.698 21:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.698 21:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.698 21:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.698 21:05:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:40.698 21:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:40.698 21:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:40.698 21:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:40.698 21:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:40.698 21:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:40.698 21:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:40.698 21:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:40.698 21:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:40.698 21:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:40.698 21:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:40.698 21:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:40.698 21:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.698 21:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.632 nvme0n1 00:24:41.632 21:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.632 21:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:41.632 21:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.632 21:05:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.632 21:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:41.632 21:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.632 21:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:41.632 21:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:41.632 21:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.632 21:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.632 21:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.632 21:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:41.632 21:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:24:41.632 21:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:41.632 21:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:41.632 21:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:41.632 21:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:41.632 21:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTFlZTVkZTQyMzI0ODM2MjllODZmOTRkZGFmOWZkZDJmYTFhMzEwOWE1MjkzNWY5QgQkkA==: 00:24:41.632 21:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2RiMjA2OTZjYTZlMjI4OTAzZWM1YWI4ZjgzZDY3ZWG6MnMn: 00:24:41.632 21:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:41.632 21:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:41.632 21:05:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTFlZTVkZTQyMzI0ODM2MjllODZmOTRkZGFmOWZkZDJmYTFhMzEwOWE1MjkzNWY5QgQkkA==: 00:24:41.632 21:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2RiMjA2OTZjYTZlMjI4OTAzZWM1YWI4ZjgzZDY3ZWG6MnMn: ]] 00:24:41.632 21:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2RiMjA2OTZjYTZlMjI4OTAzZWM1YWI4ZjgzZDY3ZWG6MnMn: 00:24:41.632 21:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:24:41.632 21:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:41.632 21:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:41.632 21:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:41.632 21:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:41.632 21:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:41.632 21:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:41.632 21:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.632 21:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.632 21:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.632 21:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:41.632 21:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:41.632 21:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:41.632 21:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:41.632 21:05:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:41.632 21:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:41.632 21:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:41.632 21:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:41.632 21:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:41.632 21:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:41.632 21:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:41.632 21:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:41.632 21:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.632 21:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.565 nvme0n1 00:24:42.565 21:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.565 21:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:42.566 21:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.566 21:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.566 21:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:42.566 21:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.566 21:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:42.566 21:05:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:42.566 21:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.566 21:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.566 21:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.566 21:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:42.566 21:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:24:42.566 21:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:42.566 21:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:42.566 21:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:42.566 21:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:42.566 21:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDFmMGRiNDYxMDBlMTZiMmNhMjFmNTY2MzM1NWYxZmQyNDc2OWUyYjNlNzEwMGEyYTI0NTcxMzg5NGM5MjM4Mbkv6fs=: 00:24:42.566 21:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:42.566 21:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:42.566 21:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:42.566 21:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDFmMGRiNDYxMDBlMTZiMmNhMjFmNTY2MzM1NWYxZmQyNDc2OWUyYjNlNzEwMGEyYTI0NTcxMzg5NGM5MjM4Mbkv6fs=: 00:24:42.566 21:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:42.566 21:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:24:42.566 21:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local 
digest dhgroup keyid ckey 00:24:42.566 21:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:42.566 21:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:42.566 21:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:42.566 21:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:42.566 21:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:42.566 21:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.566 21:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.566 21:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.566 21:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:42.566 21:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:42.566 21:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:42.566 21:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:42.566 21:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:42.566 21:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:42.566 21:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:42.566 21:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:42.566 21:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:42.566 21:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:42.566 21:05:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:42.566 21:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:42.566 21:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.566 21:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.499 nvme0n1 00:24:43.499 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.499 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:43.499 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.499 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.499 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:43.499 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.757 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:43.757 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:43.757 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.757 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.757 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.757 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:43.757 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:43.757 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:43.757 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:24:43.757 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:43.757 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:43.757 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:43.757 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:43.757 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGM5YzE5NjlkMGIwNmMwYmEzNzAxMjk0MWYxNWM4OGZS474D: 00:24:43.757 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjQ2ZmZjMWUxZWQ3YzMwNDhkZWRiZDVkNTMwYzliYTc3YjJkMzBjOTBkODhiMTcxMWU1MjY2MDVlMTk4M2Y1YYmjEBk=: 00:24:43.757 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:43.757 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:43.757 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGM5YzE5NjlkMGIwNmMwYmEzNzAxMjk0MWYxNWM4OGZS474D: 00:24:43.757 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjQ2ZmZjMWUxZWQ3YzMwNDhkZWRiZDVkNTMwYzliYTc3YjJkMzBjOTBkODhiMTcxMWU1MjY2MDVlMTk4M2Y1YYmjEBk=: ]] 00:24:43.757 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjQ2ZmZjMWUxZWQ3YzMwNDhkZWRiZDVkNTMwYzliYTc3YjJkMzBjOTBkODhiMTcxMWU1MjY2MDVlMTk4M2Y1YYmjEBk=: 00:24:43.757 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:24:43.757 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:43.758 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:43.758 21:05:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:43.758 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:43.758 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:43.758 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:43.758 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.758 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.758 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.758 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:43.758 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:43.758 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:43.758 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:43.758 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:43.758 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:43.758 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:43.758 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:43.758 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:43.758 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:43.758 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:43.758 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:43.758 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.758 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.758 nvme0n1 00:24:43.758 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.758 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:43.758 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.758 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.758 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:43.758 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.758 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:43.758 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:43.758 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.758 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:44.017 21:05:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzAxZWM2ZWVjZDVkZDU0Y2RkMjZhY2E1YzFkMzkzYTYyNTZhMDA1ZDljNDQ1YTVhHT6KDw==: 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2JhYTkzZjcyZDEzNDMyOTcyNjI2YzE4YTIwNWY3YzYzNDlmZGQ3NjJkZDc3ZDA1y4ttPQ==: 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzAxZWM2ZWVjZDVkZDU0Y2RkMjZhY2E1YzFkMzkzYTYyNTZhMDA1ZDljNDQ1YTVhHT6KDw==: 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2JhYTkzZjcyZDEzNDMyOTcyNjI2YzE4YTIwNWY3YzYzNDlmZGQ3NjJkZDc3ZDA1y4ttPQ==: ]] 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2JhYTkzZjcyZDEzNDMyOTcyNjI2YzE4YTIwNWY3YzYzNDlmZGQ3NjJkZDc3ZDA1y4ttPQ==: 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:44.017 21:05:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.017 21:05:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.017 nvme0n1 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:NzJhZGNiODlhYjBiYTFlN2EwNzBmMmYyMGJjZWY2YmRufAPw: 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2U0NTdlZDlkZWJlMzQzYzllZWQxZGZlZTc4ZmZhZjb4eL33: 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzJhZGNiODlhYjBiYTFlN2EwNzBmMmYyMGJjZWY2YmRufAPw: 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2U0NTdlZDlkZWJlMzQzYzllZWQxZGZlZTc4ZmZhZjb4eL33: ]] 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2U0NTdlZDlkZWJlMzQzYzllZWQxZGZlZTc4ZmZhZjb4eL33: 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.017 21:05:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.017 21:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.290 nvme0n1 00:24:44.290 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.290 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:44.290 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.290 21:05:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:44.290 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.290 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.290 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:44.290 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:44.290 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.290 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.290 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.290 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:44.290 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:24:44.290 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:44.290 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:44.290 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:44.290 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:44.290 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTFlZTVkZTQyMzI0ODM2MjllODZmOTRkZGFmOWZkZDJmYTFhMzEwOWE1MjkzNWY5QgQkkA==: 00:24:44.290 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2RiMjA2OTZjYTZlMjI4OTAzZWM1YWI4ZjgzZDY3ZWG6MnMn: 00:24:44.290 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:44.290 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:44.290 21:05:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTFlZTVkZTQyMzI0ODM2MjllODZmOTRkZGFmOWZkZDJmYTFhMzEwOWE1MjkzNWY5QgQkkA==: 00:24:44.290 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2RiMjA2OTZjYTZlMjI4OTAzZWM1YWI4ZjgzZDY3ZWG6MnMn: ]] 00:24:44.290 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2RiMjA2OTZjYTZlMjI4OTAzZWM1YWI4ZjgzZDY3ZWG6MnMn: 00:24:44.290 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:24:44.290 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:44.290 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:44.290 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:44.290 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:44.290 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:44.290 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:44.290 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.290 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.290 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.290 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:44.290 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:44.290 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:44.290 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:44.290 21:05:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:44.290 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:44.290 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:44.290 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:44.290 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:44.290 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:44.290 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:44.290 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:44.290 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.290 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.549 nvme0n1 00:24:44.549 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.549 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:44.549 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:44.549 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.549 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.549 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.549 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:44.549 21:05:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:44.549 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.549 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.549 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.549 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:44.549 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:24:44.549 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:44.549 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:44.549 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:44.549 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:44.549 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDFmMGRiNDYxMDBlMTZiMmNhMjFmNTY2MzM1NWYxZmQyNDc2OWUyYjNlNzEwMGEyYTI0NTcxMzg5NGM5MjM4Mbkv6fs=: 00:24:44.549 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:44.549 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:44.549 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:44.549 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDFmMGRiNDYxMDBlMTZiMmNhMjFmNTY2MzM1NWYxZmQyNDc2OWUyYjNlNzEwMGEyYTI0NTcxMzg5NGM5MjM4Mbkv6fs=: 00:24:44.549 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:44.549 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:24:44.549 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local 
digest dhgroup keyid ckey 00:24:44.549 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:44.549 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:44.549 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:44.549 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:44.549 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:44.549 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.549 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.549 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.549 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:44.549 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:44.549 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:44.549 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:44.549 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:44.549 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:44.549 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:44.549 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:44.549 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:44.549 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:44.549 21:05:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:44.549 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:44.549 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.549 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.808 nvme0n1 00:24:44.808 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.808 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:44.808 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:44.808 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.808 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.808 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.808 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:44.808 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:44.808 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.808 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.808 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.808 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:44.808 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:44.808 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:24:44.808 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:44.808 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:44.808 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:44.808 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:44.808 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGM5YzE5NjlkMGIwNmMwYmEzNzAxMjk0MWYxNWM4OGZS474D: 00:24:44.808 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjQ2ZmZjMWUxZWQ3YzMwNDhkZWRiZDVkNTMwYzliYTc3YjJkMzBjOTBkODhiMTcxMWU1MjY2MDVlMTk4M2Y1YYmjEBk=: 00:24:44.808 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:44.808 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:44.808 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGM5YzE5NjlkMGIwNmMwYmEzNzAxMjk0MWYxNWM4OGZS474D: 00:24:44.808 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjQ2ZmZjMWUxZWQ3YzMwNDhkZWRiZDVkNTMwYzliYTc3YjJkMzBjOTBkODhiMTcxMWU1MjY2MDVlMTk4M2Y1YYmjEBk=: ]] 00:24:44.808 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjQ2ZmZjMWUxZWQ3YzMwNDhkZWRiZDVkNTMwYzliYTc3YjJkMzBjOTBkODhiMTcxMWU1MjY2MDVlMTk4M2Y1YYmjEBk=: 00:24:44.808 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:24:44.808 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:44.808 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:44.808 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:44.808 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@57 -- # keyid=0 00:24:44.808 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:44.808 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:44.808 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.808 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.808 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.808 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:44.808 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:44.808 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:44.808 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:44.808 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:44.808 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:44.808 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:44.809 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:44.809 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:44.809 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:44.809 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:44.809 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:44.809 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.809 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.067 nvme0n1 00:24:45.067 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.067 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:45.067 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.067 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.067 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:45.067 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.067 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:45.067 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:45.067 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.067 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.067 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.067 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:45.067 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:24:45.067 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:45.067 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:45.067 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe3072 00:24:45.067 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:45.067 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzAxZWM2ZWVjZDVkZDU0Y2RkMjZhY2E1YzFkMzkzYTYyNTZhMDA1ZDljNDQ1YTVhHT6KDw==: 00:24:45.067 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2JhYTkzZjcyZDEzNDMyOTcyNjI2YzE4YTIwNWY3YzYzNDlmZGQ3NjJkZDc3ZDA1y4ttPQ==: 00:24:45.067 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:45.067 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:45.067 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzAxZWM2ZWVjZDVkZDU0Y2RkMjZhY2E1YzFkMzkzYTYyNTZhMDA1ZDljNDQ1YTVhHT6KDw==: 00:24:45.068 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2JhYTkzZjcyZDEzNDMyOTcyNjI2YzE4YTIwNWY3YzYzNDlmZGQ3NjJkZDc3ZDA1y4ttPQ==: ]] 00:24:45.068 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2JhYTkzZjcyZDEzNDMyOTcyNjI2YzE4YTIwNWY3YzYzNDlmZGQ3NjJkZDc3ZDA1y4ttPQ==: 00:24:45.068 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:24:45.068 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:45.068 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:45.068 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:45.068 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:45.068 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:45.068 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:45.068 21:05:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.068 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.068 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.068 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:45.068 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:45.068 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:45.068 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:45.068 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:45.068 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:45.068 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:45.068 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:45.068 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:45.068 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:45.068 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:45.068 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:45.068 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.068 21:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.326 nvme0n1 00:24:45.326 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.326 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:45.326 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.326 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.326 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:45.326 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.326 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:45.326 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:45.326 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.326 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.326 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.326 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:45.326 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:24:45.326 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:45.327 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:45.327 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:45.327 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:45.327 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzJhZGNiODlhYjBiYTFlN2EwNzBmMmYyMGJjZWY2YmRufAPw: 00:24:45.327 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:M2U0NTdlZDlkZWJlMzQzYzllZWQxZGZlZTc4ZmZhZjb4eL33: 00:24:45.327 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:45.327 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:45.327 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzJhZGNiODlhYjBiYTFlN2EwNzBmMmYyMGJjZWY2YmRufAPw: 00:24:45.327 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2U0NTdlZDlkZWJlMzQzYzllZWQxZGZlZTc4ZmZhZjb4eL33: ]] 00:24:45.327 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2U0NTdlZDlkZWJlMzQzYzllZWQxZGZlZTc4ZmZhZjb4eL33: 00:24:45.327 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:24:45.327 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:45.327 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:45.327 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:45.327 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:45.327 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:45.327 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:45.327 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.327 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.327 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.327 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:45.327 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # local ip 00:24:45.327 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:45.327 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:45.327 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:45.327 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:45.327 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:45.327 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:45.327 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:45.327 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:45.327 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:45.327 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:45.327 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.327 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.585 nvme0n1 00:24:45.585 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.585 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:45.585 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.585 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.585 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq 
-r '.[].name' 00:24:45.585 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.585 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:45.585 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:45.585 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.585 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.585 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.585 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:45.585 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:24:45.585 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:45.585 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:45.585 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:45.585 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:45.585 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTFlZTVkZTQyMzI0ODM2MjllODZmOTRkZGFmOWZkZDJmYTFhMzEwOWE1MjkzNWY5QgQkkA==: 00:24:45.585 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2RiMjA2OTZjYTZlMjI4OTAzZWM1YWI4ZjgzZDY3ZWG6MnMn: 00:24:45.585 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:45.585 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:45.585 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTFlZTVkZTQyMzI0ODM2MjllODZmOTRkZGFmOWZkZDJmYTFhMzEwOWE1MjkzNWY5QgQkkA==: 00:24:45.585 21:05:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2RiMjA2OTZjYTZlMjI4OTAzZWM1YWI4ZjgzZDY3ZWG6MnMn: ]] 00:24:45.585 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2RiMjA2OTZjYTZlMjI4OTAzZWM1YWI4ZjgzZDY3ZWG6MnMn: 00:24:45.585 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:24:45.585 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:45.585 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:45.585 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:45.585 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:45.585 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:45.585 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:45.585 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.585 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.585 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.585 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:45.585 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:45.585 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:45.585 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:45.585 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:45.585 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 
-- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:45.585 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:45.585 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:45.585 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:45.585 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:45.585 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:45.585 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:45.585 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.586 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.844 nvme0n1 00:24:45.844 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.844 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:45.844 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.844 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.844 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:45.844 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.844 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:45.844 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:45.844 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:24:45.844 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.844 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.844 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:45.844 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:24:45.844 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:45.844 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:45.844 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:45.844 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:45.844 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDFmMGRiNDYxMDBlMTZiMmNhMjFmNTY2MzM1NWYxZmQyNDc2OWUyYjNlNzEwMGEyYTI0NTcxMzg5NGM5MjM4Mbkv6fs=: 00:24:45.844 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:45.844 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:45.844 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:45.844 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDFmMGRiNDYxMDBlMTZiMmNhMjFmNTY2MzM1NWYxZmQyNDc2OWUyYjNlNzEwMGEyYTI0NTcxMzg5NGM5MjM4Mbkv6fs=: 00:24:45.844 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:45.844 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:24:45.844 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:45.844 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:45.844 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:24:45.844 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:45.844 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:45.844 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:45.844 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.844 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.844 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.844 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:45.844 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:45.844 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:45.844 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:45.844 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:45.844 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:45.844 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:45.844 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:45.844 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:45.845 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:45.845 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:45.845 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:45.845 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.845 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.103 nvme0n1 00:24:46.103 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.103 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:46.103 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:46.103 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.103 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.103 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.103 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:46.103 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:46.103 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.103 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.103 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.103 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:46.103 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:46.103 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:24:46.103 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:46.103 21:05:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:46.103 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:46.103 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:46.103 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGM5YzE5NjlkMGIwNmMwYmEzNzAxMjk0MWYxNWM4OGZS474D: 00:24:46.103 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjQ2ZmZjMWUxZWQ3YzMwNDhkZWRiZDVkNTMwYzliYTc3YjJkMzBjOTBkODhiMTcxMWU1MjY2MDVlMTk4M2Y1YYmjEBk=: 00:24:46.103 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:46.103 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:46.103 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGM5YzE5NjlkMGIwNmMwYmEzNzAxMjk0MWYxNWM4OGZS474D: 00:24:46.103 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjQ2ZmZjMWUxZWQ3YzMwNDhkZWRiZDVkNTMwYzliYTc3YjJkMzBjOTBkODhiMTcxMWU1MjY2MDVlMTk4M2Y1YYmjEBk=: ]] 00:24:46.103 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjQ2ZmZjMWUxZWQ3YzMwNDhkZWRiZDVkNTMwYzliYTc3YjJkMzBjOTBkODhiMTcxMWU1MjY2MDVlMTk4M2Y1YYmjEBk=: 00:24:46.103 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:24:46.103 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:46.103 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:46.103 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:46.103 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:46.103 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:46.103 21:05:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:46.103 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.103 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.103 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.103 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:46.103 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:46.103 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:46.103 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:46.104 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:46.104 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:46.104 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:46.104 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:46.104 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:46.104 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:46.104 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:46.104 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:46.104 21:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.104 21:05:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.361 nvme0n1 00:24:46.361 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.361 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:46.361 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:46.361 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.362 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.362 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.362 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:46.362 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:46.362 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.362 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.362 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.362 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:46.362 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:24:46.362 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:46.362 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:46.362 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:46.362 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:46.362 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YzAxZWM2ZWVjZDVkZDU0Y2RkMjZhY2E1YzFkMzkzYTYyNTZhMDA1ZDljNDQ1YTVhHT6KDw==: 00:24:46.362 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2JhYTkzZjcyZDEzNDMyOTcyNjI2YzE4YTIwNWY3YzYzNDlmZGQ3NjJkZDc3ZDA1y4ttPQ==: 00:24:46.362 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:46.362 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:46.362 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzAxZWM2ZWVjZDVkZDU0Y2RkMjZhY2E1YzFkMzkzYTYyNTZhMDA1ZDljNDQ1YTVhHT6KDw==: 00:24:46.362 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2JhYTkzZjcyZDEzNDMyOTcyNjI2YzE4YTIwNWY3YzYzNDlmZGQ3NjJkZDc3ZDA1y4ttPQ==: ]] 00:24:46.362 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2JhYTkzZjcyZDEzNDMyOTcyNjI2YzE4YTIwNWY3YzYzNDlmZGQ3NjJkZDc3ZDA1y4ttPQ==: 00:24:46.362 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:24:46.362 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:46.362 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:46.362 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:46.362 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:46.362 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:46.362 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:46.362 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.362 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.362 
21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.362 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:46.362 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:46.362 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:46.362 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:46.362 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:46.362 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:46.362 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:46.362 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:46.362 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:46.362 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:46.362 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:46.620 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:46.620 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.620 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.878 nvme0n1 00:24:46.878 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.878 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:46.878 21:05:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:46.878 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.878 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.878 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.878 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:46.878 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:46.878 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.878 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.878 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.878 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:46.878 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:24:46.878 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:46.878 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:46.878 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:46.878 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:46.878 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzJhZGNiODlhYjBiYTFlN2EwNzBmMmYyMGJjZWY2YmRufAPw: 00:24:46.878 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2U0NTdlZDlkZWJlMzQzYzllZWQxZGZlZTc4ZmZhZjb4eL33: 00:24:46.878 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:46.878 21:05:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:46.878 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzJhZGNiODlhYjBiYTFlN2EwNzBmMmYyMGJjZWY2YmRufAPw: 00:24:46.878 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2U0NTdlZDlkZWJlMzQzYzllZWQxZGZlZTc4ZmZhZjb4eL33: ]] 00:24:46.878 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2U0NTdlZDlkZWJlMzQzYzllZWQxZGZlZTc4ZmZhZjb4eL33: 00:24:46.878 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:24:46.878 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:46.878 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:46.878 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:46.878 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:46.878 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:46.878 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:46.878 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.878 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.878 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.878 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:46.878 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:46.878 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:46.878 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:24:46.878 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:46.878 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:46.878 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:46.878 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:46.878 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:46.879 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:46.879 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:46.879 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:46.879 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.879 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.137 nvme0n1 00:24:47.137 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.137 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:47.137 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.137 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.137 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:47.137 21:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.137 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:47.137 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:47.137 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.137 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.137 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.137 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:47.137 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:24:47.137 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:47.137 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:47.137 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:47.137 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:47.137 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTFlZTVkZTQyMzI0ODM2MjllODZmOTRkZGFmOWZkZDJmYTFhMzEwOWE1MjkzNWY5QgQkkA==: 00:24:47.137 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2RiMjA2OTZjYTZlMjI4OTAzZWM1YWI4ZjgzZDY3ZWG6MnMn: 00:24:47.137 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:47.137 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:47.137 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTFlZTVkZTQyMzI0ODM2MjllODZmOTRkZGFmOWZkZDJmYTFhMzEwOWE1MjkzNWY5QgQkkA==: 00:24:47.137 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2RiMjA2OTZjYTZlMjI4OTAzZWM1YWI4ZjgzZDY3ZWG6MnMn: ]] 00:24:47.137 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:00:M2RiMjA2OTZjYTZlMjI4OTAzZWM1YWI4ZjgzZDY3ZWG6MnMn: 00:24:47.137 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:24:47.137 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:47.137 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:47.137 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:47.137 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:47.137 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:47.137 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:47.137 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.137 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.137 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.137 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:47.137 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:47.137 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:47.137 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:47.137 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:47.137 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:47.137 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:47.137 21:05:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:47.137 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:47.137 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:47.137 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:47.137 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:47.137 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.137 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.704 nvme0n1 00:24:47.704 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.704 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:47.704 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.704 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.704 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:47.704 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.704 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:47.704 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:47.704 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.704 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.704 21:05:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.704 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:47.704 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:24:47.704 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:47.704 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:47.704 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:47.704 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:47.704 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDFmMGRiNDYxMDBlMTZiMmNhMjFmNTY2MzM1NWYxZmQyNDc2OWUyYjNlNzEwMGEyYTI0NTcxMzg5NGM5MjM4Mbkv6fs=: 00:24:47.704 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:47.704 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:47.704 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:47.704 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDFmMGRiNDYxMDBlMTZiMmNhMjFmNTY2MzM1NWYxZmQyNDc2OWUyYjNlNzEwMGEyYTI0NTcxMzg5NGM5MjM4Mbkv6fs=: 00:24:47.704 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:47.704 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:24:47.704 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:47.704 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:47.704 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:47.704 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:47.704 21:05:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:47.704 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:47.704 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.704 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.704 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.704 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:47.704 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:47.704 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:47.704 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:47.704 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:47.704 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:47.704 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:47.704 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:47.704 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:47.704 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:47.704 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:47.704 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:47.704 
21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.704 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.963 nvme0n1 00:24:47.963 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.963 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:47.963 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.963 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.963 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:47.963 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.964 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:47.964 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:47.964 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.964 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.964 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.964 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:47.964 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:47.964 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:24:47.964 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:47.964 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:47.964 21:05:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:47.964 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:47.964 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGM5YzE5NjlkMGIwNmMwYmEzNzAxMjk0MWYxNWM4OGZS474D: 00:24:47.964 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjQ2ZmZjMWUxZWQ3YzMwNDhkZWRiZDVkNTMwYzliYTc3YjJkMzBjOTBkODhiMTcxMWU1MjY2MDVlMTk4M2Y1YYmjEBk=: 00:24:47.964 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:47.964 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:47.964 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGM5YzE5NjlkMGIwNmMwYmEzNzAxMjk0MWYxNWM4OGZS474D: 00:24:47.964 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjQ2ZmZjMWUxZWQ3YzMwNDhkZWRiZDVkNTMwYzliYTc3YjJkMzBjOTBkODhiMTcxMWU1MjY2MDVlMTk4M2Y1YYmjEBk=: ]] 00:24:47.964 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjQ2ZmZjMWUxZWQ3YzMwNDhkZWRiZDVkNTMwYzliYTc3YjJkMzBjOTBkODhiMTcxMWU1MjY2MDVlMTk4M2Y1YYmjEBk=: 00:24:47.964 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:24:47.964 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:47.964 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:47.964 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:47.964 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:47.964 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:47.964 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:24:47.964 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.964 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.964 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.964 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:47.964 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:47.964 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:47.964 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:47.964 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:47.964 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:47.964 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:47.964 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:47.964 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:47.964 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:47.964 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:47.964 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:47.964 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.964 21:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.529 nvme0n1 
00:24:48.529 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.529 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:48.529 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.529 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:48.529 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.529 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.529 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:48.529 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:48.529 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.529 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.529 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.529 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:48.529 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:24:48.529 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:48.529 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:48.529 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:48.529 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:48.529 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzAxZWM2ZWVjZDVkZDU0Y2RkMjZhY2E1YzFkMzkzYTYyNTZhMDA1ZDljNDQ1YTVhHT6KDw==: 00:24:48.529 21:05:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2JhYTkzZjcyZDEzNDMyOTcyNjI2YzE4YTIwNWY3YzYzNDlmZGQ3NjJkZDc3ZDA1y4ttPQ==: 00:24:48.529 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:48.529 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:48.529 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzAxZWM2ZWVjZDVkZDU0Y2RkMjZhY2E1YzFkMzkzYTYyNTZhMDA1ZDljNDQ1YTVhHT6KDw==: 00:24:48.529 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2JhYTkzZjcyZDEzNDMyOTcyNjI2YzE4YTIwNWY3YzYzNDlmZGQ3NjJkZDc3ZDA1y4ttPQ==: ]] 00:24:48.529 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2JhYTkzZjcyZDEzNDMyOTcyNjI2YzE4YTIwNWY3YzYzNDlmZGQ3NjJkZDc3ZDA1y4ttPQ==: 00:24:48.529 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:24:48.529 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:48.529 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:48.529 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:48.529 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:48.529 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:48.529 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:48.529 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.529 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.529 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.529 
21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:48.529 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:48.529 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:48.529 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:48.529 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:48.529 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:48.529 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:48.529 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:48.529 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:48.529 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:48.529 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:48.529 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:48.529 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.529 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.097 nvme0n1 00:24:49.097 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.097 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:49.097 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.097 21:05:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.097 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:49.097 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.097 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:49.097 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:49.097 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.097 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.097 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.097 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:49.097 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:24:49.097 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:49.097 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:49.097 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:49.097 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:49.097 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzJhZGNiODlhYjBiYTFlN2EwNzBmMmYyMGJjZWY2YmRufAPw: 00:24:49.098 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2U0NTdlZDlkZWJlMzQzYzllZWQxZGZlZTc4ZmZhZjb4eL33: 00:24:49.098 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:49.098 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:49.098 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:01:NzJhZGNiODlhYjBiYTFlN2EwNzBmMmYyMGJjZWY2YmRufAPw: 00:24:49.098 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2U0NTdlZDlkZWJlMzQzYzllZWQxZGZlZTc4ZmZhZjb4eL33: ]] 00:24:49.098 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2U0NTdlZDlkZWJlMzQzYzllZWQxZGZlZTc4ZmZhZjb4eL33: 00:24:49.098 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:24:49.098 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:49.098 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:49.098 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:49.098 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:49.098 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:49.098 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:49.098 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.098 21:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.098 21:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.098 21:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:49.098 21:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:49.098 21:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:49.098 21:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:49.098 21:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:49.098 21:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:49.098 21:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:49.098 21:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:49.098 21:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:49.098 21:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:49.098 21:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:49.098 21:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:49.098 21:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.098 21:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.663 nvme0n1 00:24:49.663 21:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.663 21:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:49.663 21:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:49.663 21:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.663 21:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.663 21:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.921 21:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:49.922 21:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:24:49.922 21:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.922 21:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.922 21:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.922 21:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:49.922 21:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:24:49.922 21:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:49.922 21:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:49.922 21:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:49.922 21:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:49.922 21:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTFlZTVkZTQyMzI0ODM2MjllODZmOTRkZGFmOWZkZDJmYTFhMzEwOWE1MjkzNWY5QgQkkA==: 00:24:49.922 21:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2RiMjA2OTZjYTZlMjI4OTAzZWM1YWI4ZjgzZDY3ZWG6MnMn: 00:24:49.922 21:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:49.922 21:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:49.922 21:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTFlZTVkZTQyMzI0ODM2MjllODZmOTRkZGFmOWZkZDJmYTFhMzEwOWE1MjkzNWY5QgQkkA==: 00:24:49.922 21:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2RiMjA2OTZjYTZlMjI4OTAzZWM1YWI4ZjgzZDY3ZWG6MnMn: ]] 00:24:49.922 21:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2RiMjA2OTZjYTZlMjI4OTAzZWM1YWI4ZjgzZDY3ZWG6MnMn: 00:24:49.922 21:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:24:49.922 21:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:49.922 21:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:49.922 21:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:49.922 21:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:49.922 21:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:49.922 21:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:49.922 21:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.922 21:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.922 21:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.922 21:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:49.922 21:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:49.922 21:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:49.922 21:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:49.922 21:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:49.922 21:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:49.922 21:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:49.922 21:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:49.922 21:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:24:49.922 21:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:49.922 21:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:49.922 21:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:49.922 21:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.922 21:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.488 nvme0n1 00:24:50.488 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.488 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:50.488 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:50.488 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.488 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.488 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.488 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:50.489 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:50.489 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.489 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.489 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.489 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:24:50.489 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:24:50.489 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:50.489 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:50.489 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:50.489 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:50.489 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDFmMGRiNDYxMDBlMTZiMmNhMjFmNTY2MzM1NWYxZmQyNDc2OWUyYjNlNzEwMGEyYTI0NTcxMzg5NGM5MjM4Mbkv6fs=: 00:24:50.489 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:50.489 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:50.489 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:50.489 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDFmMGRiNDYxMDBlMTZiMmNhMjFmNTY2MzM1NWYxZmQyNDc2OWUyYjNlNzEwMGEyYTI0NTcxMzg5NGM5MjM4Mbkv6fs=: 00:24:50.489 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:50.489 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:24:50.489 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:50.489 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:50.489 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:50.489 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:50.489 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:50.489 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:50.489 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.489 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.489 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.489 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:50.489 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:50.489 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:50.489 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:50.489 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:50.489 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:50.489 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:50.489 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:50.489 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:50.489 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:50.489 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:50.489 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:50.489 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.489 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:51.055 nvme0n1 00:24:51.055 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.055 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:51.056 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.056 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.056 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:51.056 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.056 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:51.056 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:51.056 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.056 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.056 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.056 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:51.056 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:51.056 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:24:51.056 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:51.056 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:51.056 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:51.056 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:51.056 21:05:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGM5YzE5NjlkMGIwNmMwYmEzNzAxMjk0MWYxNWM4OGZS474D: 00:24:51.056 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjQ2ZmZjMWUxZWQ3YzMwNDhkZWRiZDVkNTMwYzliYTc3YjJkMzBjOTBkODhiMTcxMWU1MjY2MDVlMTk4M2Y1YYmjEBk=: 00:24:51.056 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:51.056 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:51.056 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGM5YzE5NjlkMGIwNmMwYmEzNzAxMjk0MWYxNWM4OGZS474D: 00:24:51.056 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjQ2ZmZjMWUxZWQ3YzMwNDhkZWRiZDVkNTMwYzliYTc3YjJkMzBjOTBkODhiMTcxMWU1MjY2MDVlMTk4M2Y1YYmjEBk=: ]] 00:24:51.056 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjQ2ZmZjMWUxZWQ3YzMwNDhkZWRiZDVkNTMwYzliYTc3YjJkMzBjOTBkODhiMTcxMWU1MjY2MDVlMTk4M2Y1YYmjEBk=: 00:24:51.056 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:24:51.056 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:51.056 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:51.056 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:51.056 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:51.056 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:51.056 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:51.056 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.056 21:05:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.056 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.056 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:51.056 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:51.056 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:51.056 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:51.056 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:51.056 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:51.056 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:51.056 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:51.056 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:51.056 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:51.056 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:51.056 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:51.056 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.056 21:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.991 nvme0n1 00:24:51.991 21:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.991 21:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:51.991 21:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:51.991 21:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.991 21:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.991 21:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.991 21:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:51.991 21:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:51.991 21:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.991 21:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.991 21:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.991 21:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:51.991 21:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:24:51.991 21:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:51.991 21:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:51.991 21:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:51.991 21:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:51.991 21:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzAxZWM2ZWVjZDVkZDU0Y2RkMjZhY2E1YzFkMzkzYTYyNTZhMDA1ZDljNDQ1YTVhHT6KDw==: 00:24:51.991 21:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2JhYTkzZjcyZDEzNDMyOTcyNjI2YzE4YTIwNWY3YzYzNDlmZGQ3NjJkZDc3ZDA1y4ttPQ==: 00:24:51.991 21:05:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:51.991 21:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:51.991 21:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzAxZWM2ZWVjZDVkZDU0Y2RkMjZhY2E1YzFkMzkzYTYyNTZhMDA1ZDljNDQ1YTVhHT6KDw==: 00:24:51.991 21:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2JhYTkzZjcyZDEzNDMyOTcyNjI2YzE4YTIwNWY3YzYzNDlmZGQ3NjJkZDc3ZDA1y4ttPQ==: ]] 00:24:51.991 21:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2JhYTkzZjcyZDEzNDMyOTcyNjI2YzE4YTIwNWY3YzYzNDlmZGQ3NjJkZDc3ZDA1y4ttPQ==: 00:24:51.991 21:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:24:51.991 21:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:51.991 21:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:51.991 21:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:51.991 21:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:51.991 21:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:51.991 21:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:51.991 21:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.991 21:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.991 21:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.991 21:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:51.991 21:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 
00:24:51.991 21:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:51.991 21:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:51.991 21:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:51.991 21:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:51.991 21:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:51.991 21:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:51.991 21:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:51.992 21:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:51.992 21:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:51.992 21:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:51.992 21:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.992 21:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.925 nvme0n1 00:24:52.925 21:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.925 21:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:52.925 21:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.925 21:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.925 21:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:52.925 
21:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.925 21:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:52.925 21:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:52.925 21:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.925 21:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.925 21:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.925 21:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:52.926 21:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:24:52.926 21:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:52.926 21:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:52.926 21:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:52.926 21:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:52.926 21:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzJhZGNiODlhYjBiYTFlN2EwNzBmMmYyMGJjZWY2YmRufAPw: 00:24:52.926 21:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2U0NTdlZDlkZWJlMzQzYzllZWQxZGZlZTc4ZmZhZjb4eL33: 00:24:52.926 21:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:52.926 21:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:52.926 21:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzJhZGNiODlhYjBiYTFlN2EwNzBmMmYyMGJjZWY2YmRufAPw: 00:24:52.926 21:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:M2U0NTdlZDlkZWJlMzQzYzllZWQxZGZlZTc4ZmZhZjb4eL33: ]] 00:24:52.926 21:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2U0NTdlZDlkZWJlMzQzYzllZWQxZGZlZTc4ZmZhZjb4eL33: 00:24:52.926 21:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:24:52.926 21:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:52.926 21:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:52.926 21:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:52.926 21:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:52.926 21:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:52.926 21:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:52.926 21:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.926 21:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.926 21:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.926 21:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:52.926 21:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:52.926 21:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:52.926 21:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:52.926 21:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:52.926 21:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:52.926 21:05:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:52.926 21:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:52.926 21:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:52.926 21:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:52.926 21:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:52.926 21:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:52.926 21:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.926 21:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.301 nvme0n1 00:24:54.302 21:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.302 21:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:54.302 21:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.302 21:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:54.302 21:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.302 21:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.302 21:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:54.302 21:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:54.302 21:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.302 21:05:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.302 21:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.302 21:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:54.302 21:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:24:54.302 21:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:54.302 21:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:54.302 21:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:54.302 21:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:54.302 21:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTFlZTVkZTQyMzI0ODM2MjllODZmOTRkZGFmOWZkZDJmYTFhMzEwOWE1MjkzNWY5QgQkkA==: 00:24:54.302 21:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2RiMjA2OTZjYTZlMjI4OTAzZWM1YWI4ZjgzZDY3ZWG6MnMn: 00:24:54.302 21:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:54.302 21:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:54.302 21:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTFlZTVkZTQyMzI0ODM2MjllODZmOTRkZGFmOWZkZDJmYTFhMzEwOWE1MjkzNWY5QgQkkA==: 00:24:54.302 21:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2RiMjA2OTZjYTZlMjI4OTAzZWM1YWI4ZjgzZDY3ZWG6MnMn: ]] 00:24:54.302 21:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2RiMjA2OTZjYTZlMjI4OTAzZWM1YWI4ZjgzZDY3ZWG6MnMn: 00:24:54.302 21:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:24:54.302 21:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:24:54.302 21:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:54.302 21:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:54.302 21:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:54.302 21:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:54.302 21:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:54.302 21:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.302 21:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.302 21:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.302 21:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:54.302 21:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:54.302 21:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:54.302 21:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:54.302 21:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:54.302 21:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:54.302 21:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:54.302 21:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:54.302 21:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:54.302 21:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:54.302 21:05:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:54.302 21:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:54.302 21:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.302 21:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.237 nvme0n1 00:24:55.237 21:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.237 21:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:55.237 21:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.237 21:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.237 21:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:55.237 21:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.237 21:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:55.237 21:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:55.237 21:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.237 21:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.237 21:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.237 21:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:55.237 21:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:24:55.237 21:05:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:55.237 21:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:55.237 21:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:55.237 21:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:55.237 21:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDFmMGRiNDYxMDBlMTZiMmNhMjFmNTY2MzM1NWYxZmQyNDc2OWUyYjNlNzEwMGEyYTI0NTcxMzg5NGM5MjM4Mbkv6fs=: 00:24:55.237 21:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:55.237 21:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:55.237 21:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:55.237 21:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDFmMGRiNDYxMDBlMTZiMmNhMjFmNTY2MzM1NWYxZmQyNDc2OWUyYjNlNzEwMGEyYTI0NTcxMzg5NGM5MjM4Mbkv6fs=: 00:24:55.237 21:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:55.237 21:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:24:55.237 21:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:55.237 21:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:55.237 21:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:55.237 21:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:55.237 21:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:55.237 21:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:55.237 21:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.237 21:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.237 21:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.237 21:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:55.237 21:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:55.237 21:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:55.237 21:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:55.237 21:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:55.237 21:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:55.237 21:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:55.237 21:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:55.237 21:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:55.237 21:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:55.237 21:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:55.237 21:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:55.237 21:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.237 21:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.173 nvme0n1 00:24:56.173 21:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.173 
21:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:56.173 21:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:56.173 21:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.173 21:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.173 21:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.173 21:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:56.173 21:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:56.173 21:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.173 21:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.173 21:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.173 21:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:56.173 21:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:56.173 21:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:56.173 21:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:24:56.173 21:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:56.173 21:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:56.173 21:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:56.173 21:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:56.173 21:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OGM5YzE5NjlkMGIwNmMwYmEzNzAxMjk0MWYxNWM4OGZS474D: 00:24:56.173 21:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjQ2ZmZjMWUxZWQ3YzMwNDhkZWRiZDVkNTMwYzliYTc3YjJkMzBjOTBkODhiMTcxMWU1MjY2MDVlMTk4M2Y1YYmjEBk=: 00:24:56.173 21:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:56.173 21:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:56.173 21:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGM5YzE5NjlkMGIwNmMwYmEzNzAxMjk0MWYxNWM4OGZS474D: 00:24:56.173 21:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjQ2ZmZjMWUxZWQ3YzMwNDhkZWRiZDVkNTMwYzliYTc3YjJkMzBjOTBkODhiMTcxMWU1MjY2MDVlMTk4M2Y1YYmjEBk=: ]] 00:24:56.173 21:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjQ2ZmZjMWUxZWQ3YzMwNDhkZWRiZDVkNTMwYzliYTc3YjJkMzBjOTBkODhiMTcxMWU1MjY2MDVlMTk4M2Y1YYmjEBk=: 00:24:56.173 21:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:24:56.173 21:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:56.173 21:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:56.173 21:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:56.173 21:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:56.173 21:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:56.173 21:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:56.173 21:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.173 21:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:24:56.173 21:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.173 21:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:56.173 21:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:56.173 21:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:56.173 21:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:56.173 21:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:56.173 21:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:56.173 21:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:56.173 21:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:56.173 21:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:56.173 21:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:56.173 21:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:56.173 21:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:56.173 21:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.173 21:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.174 nvme0n1 00:24:56.174 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.174 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:56.174 21:05:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.174 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.174 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:56.174 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.174 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:56.174 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:56.174 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.174 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.174 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.174 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:56.174 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:24:56.174 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:56.174 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:56.174 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:56.174 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:56.174 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzAxZWM2ZWVjZDVkZDU0Y2RkMjZhY2E1YzFkMzkzYTYyNTZhMDA1ZDljNDQ1YTVhHT6KDw==: 00:24:56.174 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2JhYTkzZjcyZDEzNDMyOTcyNjI2YzE4YTIwNWY3YzYzNDlmZGQ3NjJkZDc3ZDA1y4ttPQ==: 00:24:56.174 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 
00:24:56.174 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:56.174 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzAxZWM2ZWVjZDVkZDU0Y2RkMjZhY2E1YzFkMzkzYTYyNTZhMDA1ZDljNDQ1YTVhHT6KDw==: 00:24:56.174 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2JhYTkzZjcyZDEzNDMyOTcyNjI2YzE4YTIwNWY3YzYzNDlmZGQ3NjJkZDc3ZDA1y4ttPQ==: ]] 00:24:56.174 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2JhYTkzZjcyZDEzNDMyOTcyNjI2YzE4YTIwNWY3YzYzNDlmZGQ3NjJkZDc3ZDA1y4ttPQ==: 00:24:56.174 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:24:56.174 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:56.174 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:56.174 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:56.174 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:56.174 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:56.174 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:56.174 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.174 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.174 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.174 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:56.174 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:56.174 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # ip_candidates=() 00:24:56.174 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:56.174 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:56.174 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:56.174 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:56.174 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:56.174 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:56.174 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:56.174 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:56.174 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:56.174 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.174 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.432 nvme0n1 00:24:56.432 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.432 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:56.432 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.432 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.432 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:56.432 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:24:56.432 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:56.432 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:56.432 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.432 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.432 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.432 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:56.432 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:24:56.432 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:56.432 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:56.432 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:56.432 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:56.432 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzJhZGNiODlhYjBiYTFlN2EwNzBmMmYyMGJjZWY2YmRufAPw: 00:24:56.432 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2U0NTdlZDlkZWJlMzQzYzllZWQxZGZlZTc4ZmZhZjb4eL33: 00:24:56.432 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:56.432 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:56.432 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzJhZGNiODlhYjBiYTFlN2EwNzBmMmYyMGJjZWY2YmRufAPw: 00:24:56.432 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2U0NTdlZDlkZWJlMzQzYzllZWQxZGZlZTc4ZmZhZjb4eL33: ]] 00:24:56.432 21:05:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2U0NTdlZDlkZWJlMzQzYzllZWQxZGZlZTc4ZmZhZjb4eL33: 00:24:56.432 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:24:56.432 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:56.432 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:56.432 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:56.432 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:56.432 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:56.432 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:56.432 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.432 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.432 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.432 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:56.432 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:56.432 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:56.432 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:56.432 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:56.432 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:56.432 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:24:56.432 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:56.432 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:56.432 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:56.432 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:56.432 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:56.432 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.432 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.690 nvme0n1 00:24:56.690 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.690 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:56.690 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.690 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.690 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:56.690 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.690 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:56.690 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:56.690 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.690 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.690 21:05:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.690 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:56.690 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:24:56.690 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:56.690 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:56.690 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:56.690 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:56.690 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTFlZTVkZTQyMzI0ODM2MjllODZmOTRkZGFmOWZkZDJmYTFhMzEwOWE1MjkzNWY5QgQkkA==: 00:24:56.691 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2RiMjA2OTZjYTZlMjI4OTAzZWM1YWI4ZjgzZDY3ZWG6MnMn: 00:24:56.691 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:56.691 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:56.691 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTFlZTVkZTQyMzI0ODM2MjllODZmOTRkZGFmOWZkZDJmYTFhMzEwOWE1MjkzNWY5QgQkkA==: 00:24:56.691 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2RiMjA2OTZjYTZlMjI4OTAzZWM1YWI4ZjgzZDY3ZWG6MnMn: ]] 00:24:56.691 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2RiMjA2OTZjYTZlMjI4OTAzZWM1YWI4ZjgzZDY3ZWG6MnMn: 00:24:56.691 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:24:56.691 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:56.691 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
digest=sha512 00:24:56.691 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:56.691 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:56.691 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:56.691 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:56.691 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.691 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.691 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.691 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:56.691 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:56.691 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:56.691 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:56.691 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:56.691 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:56.691 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:56.691 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:56.691 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:56.691 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:56.691 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:56.691 21:05:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:56.691 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.691 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.949 nvme0n1 00:24:56.949 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.949 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:56.949 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.949 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.949 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:56.949 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.949 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:56.949 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:56.949 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.949 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.949 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.949 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:56.949 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:24:56.949 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 
00:24:56.949 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:56.949 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:56.949 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:56.949 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDFmMGRiNDYxMDBlMTZiMmNhMjFmNTY2MzM1NWYxZmQyNDc2OWUyYjNlNzEwMGEyYTI0NTcxMzg5NGM5MjM4Mbkv6fs=: 00:24:56.949 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:56.949 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:56.949 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:56.949 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDFmMGRiNDYxMDBlMTZiMmNhMjFmNTY2MzM1NWYxZmQyNDc2OWUyYjNlNzEwMGEyYTI0NTcxMzg5NGM5MjM4Mbkv6fs=: 00:24:56.949 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:56.949 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:24:56.949 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:56.949 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:56.949 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:56.949 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:56.949 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:56.949 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:56.949 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.949 21:05:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.949 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.949 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:56.949 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:56.949 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:56.949 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:56.949 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:56.949 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:56.949 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:56.949 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:56.949 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:56.949 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:56.949 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:56.949 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:56.950 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.950 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.208 nvme0n1 00:24:57.208 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.208 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:24:57.208 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.208 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.208 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:57.208 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.208 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:57.208 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:57.208 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.208 21:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.208 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.208 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:57.208 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:57.208 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:24:57.208 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:57.208 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:57.208 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:57.208 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:57.208 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGM5YzE5NjlkMGIwNmMwYmEzNzAxMjk0MWYxNWM4OGZS474D: 00:24:57.208 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YjQ2ZmZjMWUxZWQ3YzMwNDhkZWRiZDVkNTMwYzliYTc3YjJkMzBjOTBkODhiMTcxMWU1MjY2MDVlMTk4M2Y1YYmjEBk=: 00:24:57.208 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:57.208 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:57.208 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGM5YzE5NjlkMGIwNmMwYmEzNzAxMjk0MWYxNWM4OGZS474D: 00:24:57.208 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjQ2ZmZjMWUxZWQ3YzMwNDhkZWRiZDVkNTMwYzliYTc3YjJkMzBjOTBkODhiMTcxMWU1MjY2MDVlMTk4M2Y1YYmjEBk=: ]] 00:24:57.208 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjQ2ZmZjMWUxZWQ3YzMwNDhkZWRiZDVkNTMwYzliYTc3YjJkMzBjOTBkODhiMTcxMWU1MjY2MDVlMTk4M2Y1YYmjEBk=: 00:24:57.208 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:24:57.208 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:57.208 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:57.208 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:57.208 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:57.208 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:57.208 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:57.208 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.208 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.208 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.208 21:05:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:57.208 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:57.208 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:57.209 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:57.209 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:57.209 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:57.209 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:57.209 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:57.209 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:57.209 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:57.209 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:57.209 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:57.209 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.209 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.467 nvme0n1 00:24:57.467 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.467 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:57.467 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.467 21:05:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:57.467 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.467 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.467 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:57.467 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:57.467 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.467 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.467 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.467 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:57.467 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:24:57.467 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:57.467 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:57.467 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:57.467 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:57.467 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzAxZWM2ZWVjZDVkZDU0Y2RkMjZhY2E1YzFkMzkzYTYyNTZhMDA1ZDljNDQ1YTVhHT6KDw==: 00:24:57.467 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2JhYTkzZjcyZDEzNDMyOTcyNjI2YzE4YTIwNWY3YzYzNDlmZGQ3NjJkZDc3ZDA1y4ttPQ==: 00:24:57.467 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:57.467 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:57.467 
21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzAxZWM2ZWVjZDVkZDU0Y2RkMjZhY2E1YzFkMzkzYTYyNTZhMDA1ZDljNDQ1YTVhHT6KDw==: 00:24:57.467 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2JhYTkzZjcyZDEzNDMyOTcyNjI2YzE4YTIwNWY3YzYzNDlmZGQ3NjJkZDc3ZDA1y4ttPQ==: ]] 00:24:57.467 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2JhYTkzZjcyZDEzNDMyOTcyNjI2YzE4YTIwNWY3YzYzNDlmZGQ3NjJkZDc3ZDA1y4ttPQ==: 00:24:57.467 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:24:57.467 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:57.467 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:57.467 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:57.467 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:57.467 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:57.467 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:57.467 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.467 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.467 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.467 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:57.467 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:57.467 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:57.467 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
local -A ip_candidates 00:24:57.467 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:57.467 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:57.467 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:57.467 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:57.467 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:57.467 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:57.467 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:57.467 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:57.467 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.467 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.725 nvme0n1 00:24:57.725 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.725 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:57.725 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:57.725 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.725 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.725 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.725 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:24:57.725 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:57.725 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.725 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.725 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.725 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:57.725 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:24:57.725 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:57.725 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:57.725 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:57.725 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:57.725 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzJhZGNiODlhYjBiYTFlN2EwNzBmMmYyMGJjZWY2YmRufAPw: 00:24:57.725 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2U0NTdlZDlkZWJlMzQzYzllZWQxZGZlZTc4ZmZhZjb4eL33: 00:24:57.725 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:57.725 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:57.725 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzJhZGNiODlhYjBiYTFlN2EwNzBmMmYyMGJjZWY2YmRufAPw: 00:24:57.725 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2U0NTdlZDlkZWJlMzQzYzllZWQxZGZlZTc4ZmZhZjb4eL33: ]] 00:24:57.725 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2U0NTdlZDlkZWJlMzQzYzllZWQxZGZlZTc4ZmZhZjb4eL33: 
00:24:57.725 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:24:57.725 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:57.725 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:57.725 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:57.725 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:57.725 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:57.725 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:57.726 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.726 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.726 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.726 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:57.726 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:57.726 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:57.726 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:57.726 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:57.726 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:57.726 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:57.726 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:57.726 21:05:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:57.726 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:57.726 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:57.726 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:57.726 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.726 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.984 nvme0n1 00:24:57.984 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.984 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:57.984 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.984 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:57.984 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.984 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.984 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:57.984 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:57.984 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.984 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.984 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.984 21:05:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:57.984 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:24:57.984 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:57.984 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:57.984 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:57.984 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:57.984 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTFlZTVkZTQyMzI0ODM2MjllODZmOTRkZGFmOWZkZDJmYTFhMzEwOWE1MjkzNWY5QgQkkA==: 00:24:57.984 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2RiMjA2OTZjYTZlMjI4OTAzZWM1YWI4ZjgzZDY3ZWG6MnMn: 00:24:57.984 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:57.984 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:57.984 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTFlZTVkZTQyMzI0ODM2MjllODZmOTRkZGFmOWZkZDJmYTFhMzEwOWE1MjkzNWY5QgQkkA==: 00:24:57.984 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2RiMjA2OTZjYTZlMjI4OTAzZWM1YWI4ZjgzZDY3ZWG6MnMn: ]] 00:24:57.984 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2RiMjA2OTZjYTZlMjI4OTAzZWM1YWI4ZjgzZDY3ZWG6MnMn: 00:24:57.984 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:24:57.984 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:57.984 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:57.984 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 
00:24:57.984 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:57.984 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:57.984 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:57.984 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.984 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.984 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.984 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:57.984 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:57.984 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:57.984 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:57.984 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:57.984 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:57.984 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:57.984 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:57.984 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:57.984 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:57.984 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:57.984 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:57.984 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.984 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.243 nvme0n1 00:24:58.243 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.243 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:58.243 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:58.243 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.243 21:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.243 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.243 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:58.243 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:58.243 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.243 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.243 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.243 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:58.243 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:24:58.243 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:58.243 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:58.243 21:05:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:58.243 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:58.243 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDFmMGRiNDYxMDBlMTZiMmNhMjFmNTY2MzM1NWYxZmQyNDc2OWUyYjNlNzEwMGEyYTI0NTcxMzg5NGM5MjM4Mbkv6fs=: 00:24:58.243 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:58.243 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:58.243 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:58.243 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDFmMGRiNDYxMDBlMTZiMmNhMjFmNTY2MzM1NWYxZmQyNDc2OWUyYjNlNzEwMGEyYTI0NTcxMzg5NGM5MjM4Mbkv6fs=: 00:24:58.243 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:58.243 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:24:58.243 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:58.243 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:58.243 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:58.243 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:58.243 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:58.243 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:58.243 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.243 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.243 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.243 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:58.243 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:58.243 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:58.243 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:58.243 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:58.243 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:58.243 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:58.243 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:58.243 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:58.243 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:58.243 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:58.243 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:58.243 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.243 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.501 nvme0n1 00:24:58.501 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.501 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:58.501 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:24:58.501 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.501 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:58.501 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.501 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:58.501 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:58.501 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.501 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.501 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.501 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:58.501 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:58.501 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:24:58.501 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:58.501 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:58.501 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:58.501 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:58.501 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGM5YzE5NjlkMGIwNmMwYmEzNzAxMjk0MWYxNWM4OGZS474D: 00:24:58.501 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjQ2ZmZjMWUxZWQ3YzMwNDhkZWRiZDVkNTMwYzliYTc3YjJkMzBjOTBkODhiMTcxMWU1MjY2MDVlMTk4M2Y1YYmjEBk=: 00:24:58.501 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:24:58.501 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:58.501 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGM5YzE5NjlkMGIwNmMwYmEzNzAxMjk0MWYxNWM4OGZS474D: 00:24:58.501 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjQ2ZmZjMWUxZWQ3YzMwNDhkZWRiZDVkNTMwYzliYTc3YjJkMzBjOTBkODhiMTcxMWU1MjY2MDVlMTk4M2Y1YYmjEBk=: ]] 00:24:58.501 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjQ2ZmZjMWUxZWQ3YzMwNDhkZWRiZDVkNTMwYzliYTc3YjJkMzBjOTBkODhiMTcxMWU1MjY2MDVlMTk4M2Y1YYmjEBk=: 00:24:58.501 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:24:58.501 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:58.501 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:58.501 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:58.501 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:58.501 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:58.501 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:58.501 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.501 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.501 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.501 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:58.501 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:58.501 21:05:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:58.501 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:58.501 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:58.501 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:58.501 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:58.501 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:58.501 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:58.501 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:58.501 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:58.501 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:58.501 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.501 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.759 nvme0n1 00:24:58.759 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.759 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:58.759 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.759 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.759 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:58.759 21:05:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.759 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:58.759 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:58.759 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.759 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.759 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.759 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:58.759 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:24:58.759 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:58.759 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:58.759 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:58.759 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:58.759 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzAxZWM2ZWVjZDVkZDU0Y2RkMjZhY2E1YzFkMzkzYTYyNTZhMDA1ZDljNDQ1YTVhHT6KDw==: 00:24:58.759 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2JhYTkzZjcyZDEzNDMyOTcyNjI2YzE4YTIwNWY3YzYzNDlmZGQ3NjJkZDc3ZDA1y4ttPQ==: 00:24:58.759 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:58.759 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:58.759 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzAxZWM2ZWVjZDVkZDU0Y2RkMjZhY2E1YzFkMzkzYTYyNTZhMDA1ZDljNDQ1YTVhHT6KDw==: 00:24:58.759 21:05:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2JhYTkzZjcyZDEzNDMyOTcyNjI2YzE4YTIwNWY3YzYzNDlmZGQ3NjJkZDc3ZDA1y4ttPQ==: ]]
00:24:58.759 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2JhYTkzZjcyZDEzNDMyOTcyNjI2YzE4YTIwNWY3YzYzNDlmZGQ3NjJkZDc3ZDA1y4ttPQ==:
00:24:58.759 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1
00:24:58.759 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:58.759 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:24:58.759 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:24:58.759 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:24:58.759 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:58.759 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:24:58.759 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:58.759 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:58.759 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:58.759 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:58.759 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:58.759 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:58.759 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:58.759 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:58.759 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:58.759 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:58.759 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:58.759 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:58.759 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:58.759 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:59.017 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:24:59.017 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:59.017 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:59.275 nvme0n1
00:24:59.275 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:59.275 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:59.275 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:59.275 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:59.275 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:59.275 21:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:59.275 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:59.275 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:59.275 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:59.275 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:59.275 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:59.275 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:59.275 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2
00:24:59.275 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:59.275 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:24:59.275 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:24:59.275 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:24:59.275 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzJhZGNiODlhYjBiYTFlN2EwNzBmMmYyMGJjZWY2YmRufAPw:
00:24:59.275 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2U0NTdlZDlkZWJlMzQzYzllZWQxZGZlZTc4ZmZhZjb4eL33:
00:24:59.275 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:24:59.275 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:24:59.275 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzJhZGNiODlhYjBiYTFlN2EwNzBmMmYyMGJjZWY2YmRufAPw:
00:24:59.275 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2U0NTdlZDlkZWJlMzQzYzllZWQxZGZlZTc4ZmZhZjb4eL33: ]]
00:24:59.275 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2U0NTdlZDlkZWJlMzQzYzllZWQxZGZlZTc4ZmZhZjb4eL33:
00:24:59.275 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2
00:24:59.275 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:59.275 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:24:59.275 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:24:59.275 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:24:59.275 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:59.276 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:24:59.276 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:59.276 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:59.276 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:59.276 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:59.276 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:59.276 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:59.276 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:59.276 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:59.276 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:59.276 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:59.276 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:59.276 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:59.276 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:59.276 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:59.276 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:24:59.276 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:59.276 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:59.534 nvme0n1
00:24:59.534 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:59.534 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:59.534 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:59.534 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:59.534 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:59.534 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:59.534 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:59.534 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:59.534 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:59.534 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:59.534 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:59.534 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:59.534 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3
00:24:59.534 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:59.534 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:24:59.534 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:24:59.534 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:24:59.534 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTFlZTVkZTQyMzI0ODM2MjllODZmOTRkZGFmOWZkZDJmYTFhMzEwOWE1MjkzNWY5QgQkkA==:
00:24:59.534 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2RiMjA2OTZjYTZlMjI4OTAzZWM1YWI4ZjgzZDY3ZWG6MnMn:
00:24:59.534 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:24:59.534 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:24:59.534 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTFlZTVkZTQyMzI0ODM2MjllODZmOTRkZGFmOWZkZDJmYTFhMzEwOWE1MjkzNWY5QgQkkA==:
00:24:59.534 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2RiMjA2OTZjYTZlMjI4OTAzZWM1YWI4ZjgzZDY3ZWG6MnMn: ]]
00:24:59.534 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2RiMjA2OTZjYTZlMjI4OTAzZWM1YWI4ZjgzZDY3ZWG6MnMn:
00:24:59.534 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3
00:24:59.534 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:59.534 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:24:59.534 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:24:59.534 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:24:59.534 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:59.534 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:24:59.534 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:59.534 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:59.534 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:59.534 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:59.534 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:59.534 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:59.534 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:59.534 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:59.534 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:59.534 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:59.534 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:59.534 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:59.534 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:59.534 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:59.534 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:24:59.534 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:59.534 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:59.793 nvme0n1
00:24:59.793 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:59.793 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:59.793 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:59.793 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:59.793 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:59.793 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:00.051 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:00.051 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:00.051 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:00.051 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:00.051 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:00.051 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:00.051 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4
00:25:00.051 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:00.051 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:00.051 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:25:00.051 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:25:00.051 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDFmMGRiNDYxMDBlMTZiMmNhMjFmNTY2MzM1NWYxZmQyNDc2OWUyYjNlNzEwMGEyYTI0NTcxMzg5NGM5MjM4Mbkv6fs=:
00:25:00.051 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:25:00.051 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:00.051 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:25:00.051 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDFmMGRiNDYxMDBlMTZiMmNhMjFmNTY2MzM1NWYxZmQyNDc2OWUyYjNlNzEwMGEyYTI0NTcxMzg5NGM5MjM4Mbkv6fs=:
00:25:00.051 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:25:00.051 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4
00:25:00.051 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:00.051 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:00.051 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:25:00.051 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:25:00.051 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:00.051 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:25:00.051 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:00.051 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:00.051 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:00.051 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:00.051 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:00.051 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:00.051 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:00.051 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:00.051 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:00.051 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:00.051 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:00.051 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:00.051 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:00.051 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:00.051 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:25:00.051 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:00.051 21:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:00.310 nvme0n1
00:25:00.310 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:00.310 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:00.310 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:00.310 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:00.310 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:00.310 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:00.310 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:00.310 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:00.310 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:00.310 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:00.310 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:00.310 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:25:00.310 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:00.310 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0
00:25:00.310 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:00.310 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:00.310 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:25:00.310 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:25:00.310 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGM5YzE5NjlkMGIwNmMwYmEzNzAxMjk0MWYxNWM4OGZS474D:
00:25:00.310 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjQ2ZmZjMWUxZWQ3YzMwNDhkZWRiZDVkNTMwYzliYTc3YjJkMzBjOTBkODhiMTcxMWU1MjY2MDVlMTk4M2Y1YYmjEBk=:
00:25:00.310 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:00.310 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:25:00.310 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGM5YzE5NjlkMGIwNmMwYmEzNzAxMjk0MWYxNWM4OGZS474D:
00:25:00.310 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjQ2ZmZjMWUxZWQ3YzMwNDhkZWRiZDVkNTMwYzliYTc3YjJkMzBjOTBkODhiMTcxMWU1MjY2MDVlMTk4M2Y1YYmjEBk=: ]]
00:25:00.310 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjQ2ZmZjMWUxZWQ3YzMwNDhkZWRiZDVkNTMwYzliYTc3YjJkMzBjOTBkODhiMTcxMWU1MjY2MDVlMTk4M2Y1YYmjEBk=:
00:25:00.310 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0
00:25:00.310 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:00.310 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:00.310 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:25:00.310 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:25:00.310 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:00.310 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:25:00.310 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:00.310 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:00.310 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:00.310 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:00.310 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:00.310 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:00.310 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:00.310 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:00.310 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:00.310 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:00.310 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:00.310 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:00.310 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:00.310 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:00.310 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:25:00.310 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:00.310 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:00.876 nvme0n1
00:25:00.876 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:00.876 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:00.876 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:00.876 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:00.876 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:00.876 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:00.876 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:00.877 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:00.877 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:00.877 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:00.877 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:00.877 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:00.877 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1
00:25:00.877 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:00.877 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:00.877 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:25:00.877 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:25:00.877 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzAxZWM2ZWVjZDVkZDU0Y2RkMjZhY2E1YzFkMzkzYTYyNTZhMDA1ZDljNDQ1YTVhHT6KDw==:
00:25:00.877 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2JhYTkzZjcyZDEzNDMyOTcyNjI2YzE4YTIwNWY3YzYzNDlmZGQ3NjJkZDc3ZDA1y4ttPQ==:
00:25:00.877 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:00.877 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:25:00.877 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzAxZWM2ZWVjZDVkZDU0Y2RkMjZhY2E1YzFkMzkzYTYyNTZhMDA1ZDljNDQ1YTVhHT6KDw==:
00:25:00.877 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2JhYTkzZjcyZDEzNDMyOTcyNjI2YzE4YTIwNWY3YzYzNDlmZGQ3NjJkZDc3ZDA1y4ttPQ==: ]]
00:25:00.877 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2JhYTkzZjcyZDEzNDMyOTcyNjI2YzE4YTIwNWY3YzYzNDlmZGQ3NjJkZDc3ZDA1y4ttPQ==:
00:25:00.877 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1
00:25:00.877 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:00.877 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:00.877 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:25:00.877 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:25:00.877 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:00.877 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:25:00.877 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:00.877 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:00.877 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:00.877 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:00.877 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:00.877 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:00.877 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:00.877 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:00.877 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:00.877 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:00.877 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:00.877 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:00.877 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:00.877 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:00.877 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:25:00.877 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:00.877 21:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:01.443 nvme0n1
00:25:01.443 21:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:01.443 21:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:01.443 21:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:01.443 21:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:01.443 21:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:01.443 21:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:01.443 21:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:01.443 21:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:01.443 21:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:01.443 21:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:01.443 21:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:01.443 21:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:01.443 21:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2
00:25:01.443 21:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:01.443 21:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:01.443 21:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:25:01.443 21:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:25:01.443 21:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzJhZGNiODlhYjBiYTFlN2EwNzBmMmYyMGJjZWY2YmRufAPw:
00:25:01.443 21:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2U0NTdlZDlkZWJlMzQzYzllZWQxZGZlZTc4ZmZhZjb4eL33:
00:25:01.443 21:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:01.443 21:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:25:01.443 21:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzJhZGNiODlhYjBiYTFlN2EwNzBmMmYyMGJjZWY2YmRufAPw:
00:25:01.701 21:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2U0NTdlZDlkZWJlMzQzYzllZWQxZGZlZTc4ZmZhZjb4eL33: ]]
00:25:01.701 21:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2U0NTdlZDlkZWJlMzQzYzllZWQxZGZlZTc4ZmZhZjb4eL33:
00:25:01.701 21:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2
00:25:01.701 21:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:01.701 21:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:01.701 21:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:25:01.701 21:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:25:01.701 21:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:01.701 21:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:25:01.701 21:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:01.701 21:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:01.701 21:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:01.701 21:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:01.701 21:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:01.701 21:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:01.701 21:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:01.701 21:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:01.701 21:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:01.701 21:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:01.701 21:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:01.701 21:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:01.701 21:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:01.701 21:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:01.701 21:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:25:01.701 21:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:01.701 21:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:02.266 nvme0n1
00:25:02.266 21:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:02.266 21:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:02.266 21:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:02.266 21:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:02.266 21:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:02.266 21:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:02.266 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:02.266 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:02.267 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:02.267 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:02.267 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:02.267 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:02.267 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3
00:25:02.267 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:02.267 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:02.267 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:25:02.267 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:25:02.267 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTFlZTVkZTQyMzI0ODM2MjllODZmOTRkZGFmOWZkZDJmYTFhMzEwOWE1MjkzNWY5QgQkkA==:
00:25:02.267 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2RiMjA2OTZjYTZlMjI4OTAzZWM1YWI4ZjgzZDY3ZWG6MnMn:
00:25:02.267 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:02.267 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:25:02.267 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTFlZTVkZTQyMzI0ODM2MjllODZmOTRkZGFmOWZkZDJmYTFhMzEwOWE1MjkzNWY5QgQkkA==:
00:25:02.267 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2RiMjA2OTZjYTZlMjI4OTAzZWM1YWI4ZjgzZDY3ZWG6MnMn: ]]
00:25:02.267 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2RiMjA2OTZjYTZlMjI4OTAzZWM1YWI4ZjgzZDY3ZWG6MnMn:
00:25:02.267 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3
00:25:02.267 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:02.267 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:02.267 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:25:02.267 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:25:02.267 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:02.267 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:25:02.267 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:02.267 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:02.267 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:02.267 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:02.267 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:02.267 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:02.267 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:02.267 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:02.267 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:02.267 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:02.267 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:02.267 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:02.267 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:02.267 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:02.267 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:25:02.267 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:02.267 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:02.833 nvme0n1 00:25:02.833 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.833 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:02.833 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.833 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:02.833 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.833 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.833 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:02.833 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:02.833 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.833 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.833 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.833 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:02.833 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:25:02.833 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:02.833 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:02.833 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:02.833 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:02.833 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NDFmMGRiNDYxMDBlMTZiMmNhMjFmNTY2MzM1NWYxZmQyNDc2OWUyYjNlNzEwMGEyYTI0NTcxMzg5NGM5MjM4Mbkv6fs=: 00:25:02.833 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:02.833 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:02.833 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:02.833 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDFmMGRiNDYxMDBlMTZiMmNhMjFmNTY2MzM1NWYxZmQyNDc2OWUyYjNlNzEwMGEyYTI0NTcxMzg5NGM5MjM4Mbkv6fs=: 00:25:02.833 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:02.833 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:25:02.833 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:02.833 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:02.833 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:02.833 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:02.833 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:02.833 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:02.833 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.833 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.833 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.833 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:02.833 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:02.833 
21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:02.834 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:02.834 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:02.834 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:02.834 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:02.834 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:02.834 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:02.834 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:02.834 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:02.834 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:02.834 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.834 21:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.400 nvme0n1 00:25:03.400 21:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.400 21:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:03.400 21:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:03.400 21:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.400 21:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.400 21:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.401 21:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:03.401 21:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:03.401 21:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.401 21:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.401 21:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.401 21:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:03.401 21:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:03.401 21:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:25:03.401 21:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:03.401 21:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:03.401 21:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:03.401 21:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:03.401 21:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGM5YzE5NjlkMGIwNmMwYmEzNzAxMjk0MWYxNWM4OGZS474D: 00:25:03.401 21:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjQ2ZmZjMWUxZWQ3YzMwNDhkZWRiZDVkNTMwYzliYTc3YjJkMzBjOTBkODhiMTcxMWU1MjY2MDVlMTk4M2Y1YYmjEBk=: 00:25:03.401 21:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:03.401 21:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:03.401 21:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OGM5YzE5NjlkMGIwNmMwYmEzNzAxMjk0MWYxNWM4OGZS474D: 00:25:03.401 21:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjQ2ZmZjMWUxZWQ3YzMwNDhkZWRiZDVkNTMwYzliYTc3YjJkMzBjOTBkODhiMTcxMWU1MjY2MDVlMTk4M2Y1YYmjEBk=: ]] 00:25:03.401 21:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjQ2ZmZjMWUxZWQ3YzMwNDhkZWRiZDVkNTMwYzliYTc3YjJkMzBjOTBkODhiMTcxMWU1MjY2MDVlMTk4M2Y1YYmjEBk=: 00:25:03.401 21:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:25:03.401 21:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:03.401 21:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:03.401 21:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:03.401 21:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:03.401 21:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:03.401 21:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:03.401 21:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.401 21:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.401 21:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.401 21:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:03.401 21:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:03.401 21:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:03.401 21:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:03.401 21:05:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:03.401 21:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:03.401 21:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:03.401 21:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:03.401 21:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:03.401 21:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:03.401 21:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:03.401 21:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:03.401 21:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.401 21:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.775 nvme0n1 00:25:04.775 21:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.775 21:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:04.775 21:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:04.775 21:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.775 21:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.775 21:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.775 21:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:04.775 21:05:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:04.775 21:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.775 21:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.775 21:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.775 21:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:04.775 21:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:25:04.775 21:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:04.775 21:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:04.775 21:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:04.775 21:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:04.775 21:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzAxZWM2ZWVjZDVkZDU0Y2RkMjZhY2E1YzFkMzkzYTYyNTZhMDA1ZDljNDQ1YTVhHT6KDw==: 00:25:04.775 21:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2JhYTkzZjcyZDEzNDMyOTcyNjI2YzE4YTIwNWY3YzYzNDlmZGQ3NjJkZDc3ZDA1y4ttPQ==: 00:25:04.775 21:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:04.775 21:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:04.775 21:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzAxZWM2ZWVjZDVkZDU0Y2RkMjZhY2E1YzFkMzkzYTYyNTZhMDA1ZDljNDQ1YTVhHT6KDw==: 00:25:04.775 21:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2JhYTkzZjcyZDEzNDMyOTcyNjI2YzE4YTIwNWY3YzYzNDlmZGQ3NjJkZDc3ZDA1y4ttPQ==: ]] 00:25:04.775 21:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:M2JhYTkzZjcyZDEzNDMyOTcyNjI2YzE4YTIwNWY3YzYzNDlmZGQ3NjJkZDc3ZDA1y4ttPQ==: 00:25:04.775 21:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:25:04.775 21:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:04.775 21:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:04.775 21:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:04.775 21:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:04.775 21:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:04.775 21:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:04.775 21:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.775 21:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.775 21:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.775 21:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:04.775 21:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:04.775 21:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:04.775 21:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:04.775 21:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:04.775 21:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:04.775 21:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:04.775 21:05:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:04.775 21:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:04.775 21:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:04.775 21:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:04.775 21:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:04.775 21:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.775 21:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.708 nvme0n1 00:25:05.708 21:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.708 21:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:05.708 21:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:05.708 21:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.708 21:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.708 21:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.708 21:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:05.708 21:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:05.708 21:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.708 21:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.708 21:05:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.708 21:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:05.708 21:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:25:05.708 21:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:05.708 21:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:05.708 21:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:05.708 21:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:05.708 21:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzJhZGNiODlhYjBiYTFlN2EwNzBmMmYyMGJjZWY2YmRufAPw: 00:25:05.708 21:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2U0NTdlZDlkZWJlMzQzYzllZWQxZGZlZTc4ZmZhZjb4eL33: 00:25:05.708 21:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:05.708 21:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:05.708 21:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzJhZGNiODlhYjBiYTFlN2EwNzBmMmYyMGJjZWY2YmRufAPw: 00:25:05.708 21:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2U0NTdlZDlkZWJlMzQzYzllZWQxZGZlZTc4ZmZhZjb4eL33: ]] 00:25:05.708 21:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2U0NTdlZDlkZWJlMzQzYzllZWQxZGZlZTc4ZmZhZjb4eL33: 00:25:05.708 21:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:25:05.708 21:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:05.708 21:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:05.708 21:05:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:05.708 21:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:05.708 21:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:05.708 21:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:05.708 21:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.708 21:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.708 21:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.709 21:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:05.709 21:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:05.709 21:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:05.709 21:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:05.709 21:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:05.709 21:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:05.709 21:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:05.709 21:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:05.709 21:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:05.709 21:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:05.709 21:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:05.709 21:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:05.709 21:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.709 21:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.644 nvme0n1 00:25:06.644 21:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.644 21:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:06.644 21:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.644 21:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.644 21:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:06.644 21:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.644 21:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:06.644 21:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:06.644 21:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.644 21:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.644 21:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.644 21:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:06.644 21:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:25:06.644 21:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:06.644 21:05:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:06.644 21:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:06.644 21:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:06.644 21:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTFlZTVkZTQyMzI0ODM2MjllODZmOTRkZGFmOWZkZDJmYTFhMzEwOWE1MjkzNWY5QgQkkA==: 00:25:06.644 21:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2RiMjA2OTZjYTZlMjI4OTAzZWM1YWI4ZjgzZDY3ZWG6MnMn: 00:25:06.644 21:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:06.644 21:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:06.644 21:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTFlZTVkZTQyMzI0ODM2MjllODZmOTRkZGFmOWZkZDJmYTFhMzEwOWE1MjkzNWY5QgQkkA==: 00:25:06.644 21:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2RiMjA2OTZjYTZlMjI4OTAzZWM1YWI4ZjgzZDY3ZWG6MnMn: ]] 00:25:06.644 21:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2RiMjA2OTZjYTZlMjI4OTAzZWM1YWI4ZjgzZDY3ZWG6MnMn: 00:25:06.644 21:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:25:06.644 21:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:06.644 21:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:06.644 21:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:06.644 21:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:06.644 21:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:06.644 21:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:06.644 21:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.644 21:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.644 21:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.644 21:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:06.644 21:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:06.644 21:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:06.644 21:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:06.644 21:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:06.644 21:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:06.644 21:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:06.644 21:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:06.644 21:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:06.644 21:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:06.644 21:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:06.644 21:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:06.644 21:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.644 21:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:25:07.648 nvme0n1 00:25:07.648 21:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.648 21:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:07.648 21:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:07.648 21:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.648 21:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.648 21:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.648 21:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:07.648 21:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:07.648 21:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.648 21:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.648 21:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.648 21:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:07.648 21:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:25:07.648 21:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:07.648 21:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:07.648 21:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:07.648 21:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:07.648 21:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NDFmMGRiNDYxMDBlMTZiMmNhMjFmNTY2MzM1NWYxZmQyNDc2OWUyYjNlNzEwMGEyYTI0NTcxMzg5NGM5MjM4Mbkv6fs=: 00:25:07.648 21:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:07.648 21:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:07.648 21:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:07.648 21:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDFmMGRiNDYxMDBlMTZiMmNhMjFmNTY2MzM1NWYxZmQyNDc2OWUyYjNlNzEwMGEyYTI0NTcxMzg5NGM5MjM4Mbkv6fs=: 00:25:07.648 21:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:07.648 21:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:25:07.648 21:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:07.648 21:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:07.648 21:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:07.648 21:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:07.648 21:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:07.648 21:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:07.648 21:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.648 21:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.648 21:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.648 21:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:07.648 21:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:07.648 
21:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:07.648 21:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:07.648 21:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:07.648 21:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:07.648 21:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:07.648 21:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:07.648 21:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:07.648 21:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:07.648 21:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:07.648 21:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:07.648 21:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.648 21:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.582 nvme0n1 00:25:08.582 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.582 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:08.582 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.582 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.582 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:08.840 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.840 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:08.840 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:08.840 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.840 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.840 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.840 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:08.840 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:08.840 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:08.840 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:08.840 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:08.840 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzAxZWM2ZWVjZDVkZDU0Y2RkMjZhY2E1YzFkMzkzYTYyNTZhMDA1ZDljNDQ1YTVhHT6KDw==: 00:25:08.840 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2JhYTkzZjcyZDEzNDMyOTcyNjI2YzE4YTIwNWY3YzYzNDlmZGQ3NjJkZDc3ZDA1y4ttPQ==: 00:25:08.840 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:08.840 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:08.840 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzAxZWM2ZWVjZDVkZDU0Y2RkMjZhY2E1YzFkMzkzYTYyNTZhMDA1ZDljNDQ1YTVhHT6KDw==: 00:25:08.840 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2JhYTkzZjcyZDEzNDMyOTcyNjI2YzE4YTIwNWY3YzYzNDlmZGQ3NjJkZDc3ZDA1y4ttPQ==: ]] 00:25:08.840 
21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2JhYTkzZjcyZDEzNDMyOTcyNjI2YzE4YTIwNWY3YzYzNDlmZGQ3NjJkZDc3ZDA1y4ttPQ==: 00:25:08.840 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:08.840 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.840 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.840 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.840 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:25:08.840 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:08.840 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:08.840 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:08.840 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:08.840 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:08.840 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:08.840 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:08.840 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:08.840 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:08.840 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:08.840 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 00:25:08.841 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:08.841 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:08.841 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:08.841 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:08.841 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:08.841 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:08.841 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:08.841 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.841 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.841 request: 00:25:08.841 { 00:25:08.841 "name": "nvme0", 00:25:08.841 "trtype": "tcp", 00:25:08.841 "traddr": "10.0.0.1", 00:25:08.841 "adrfam": "ipv4", 00:25:08.841 "trsvcid": "4420", 00:25:08.841 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:08.841 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:08.841 "prchk_reftag": false, 00:25:08.841 "prchk_guard": false, 00:25:08.841 "hdgst": false, 00:25:08.841 "ddgst": false, 00:25:08.841 "allow_unrecognized_csi": false, 00:25:08.841 "method": "bdev_nvme_attach_controller", 00:25:08.841 "req_id": 1 00:25:08.841 } 00:25:08.841 Got JSON-RPC error response 00:25:08.841 response: 00:25:08.841 { 00:25:08.841 "code": -5, 00:25:08.841 "message": "Input/output 
error" 00:25:08.841 } 00:25:08.841 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:08.841 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:08.841 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:08.841 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:08.841 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:08.841 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:25:08.841 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.841 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.841 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:25:08.841 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.841 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:25:08.841 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:25:08.841 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:08.841 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:08.841 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:08.841 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:08.841 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:08.841 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:08.841 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:25:08.841 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:08.841 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:08.841 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:08.841 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:08.841 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:08.841 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:08.841 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:08.841 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:08.841 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:08.841 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:08.841 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:08.841 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.841 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.100 request: 00:25:09.100 { 00:25:09.100 "name": "nvme0", 00:25:09.100 "trtype": "tcp", 00:25:09.100 "traddr": "10.0.0.1", 
00:25:09.100 "adrfam": "ipv4", 00:25:09.100 "trsvcid": "4420", 00:25:09.100 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:09.100 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:09.100 "prchk_reftag": false, 00:25:09.100 "prchk_guard": false, 00:25:09.100 "hdgst": false, 00:25:09.100 "ddgst": false, 00:25:09.100 "dhchap_key": "key2", 00:25:09.100 "allow_unrecognized_csi": false, 00:25:09.100 "method": "bdev_nvme_attach_controller", 00:25:09.100 "req_id": 1 00:25:09.100 } 00:25:09.100 Got JSON-RPC error response 00:25:09.100 response: 00:25:09.100 { 00:25:09.100 "code": -5, 00:25:09.100 "message": "Input/output error" 00:25:09.100 } 00:25:09.100 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:09.100 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:09.100 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:09.100 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:09.100 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:09.100 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:25:09.100 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.100 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:25:09.100 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.100 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.100 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:25:09.100 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:25:09.100 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:09.101 21:05:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:09.101 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:09.101 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:09.101 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:09.101 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:09.101 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:09.101 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:09.101 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:09.101 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:09.101 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:09.101 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:09.101 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:09.101 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:09.101 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:09.101 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:09.101 21:05:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:09.101 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:09.101 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.101 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.101 request: 00:25:09.101 { 00:25:09.101 "name": "nvme0", 00:25:09.101 "trtype": "tcp", 00:25:09.101 "traddr": "10.0.0.1", 00:25:09.101 "adrfam": "ipv4", 00:25:09.101 "trsvcid": "4420", 00:25:09.101 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:09.101 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:09.101 "prchk_reftag": false, 00:25:09.101 "prchk_guard": false, 00:25:09.101 "hdgst": false, 00:25:09.101 "ddgst": false, 00:25:09.101 "dhchap_key": "key1", 00:25:09.101 "dhchap_ctrlr_key": "ckey2", 00:25:09.101 "allow_unrecognized_csi": false, 00:25:09.101 "method": "bdev_nvme_attach_controller", 00:25:09.101 "req_id": 1 00:25:09.101 } 00:25:09.101 Got JSON-RPC error response 00:25:09.101 response: 00:25:09.101 { 00:25:09.101 "code": -5, 00:25:09.101 "message": "Input/output error" 00:25:09.101 } 00:25:09.101 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:09.101 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:09.101 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:09.101 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:09.101 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:09.101 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@128 -- # get_main_ns_ip 00:25:09.101 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:09.101 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:09.101 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:09.101 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:09.101 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:09.101 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:09.101 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:09.101 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:09.101 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:09.101 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:09.101 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:25:09.101 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.101 21:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.359 nvme0n1 00:25:09.359 21:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.359 21:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:09.359 21:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:09.359 21:06:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:09.360 21:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:09.360 21:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:09.360 21:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzJhZGNiODlhYjBiYTFlN2EwNzBmMmYyMGJjZWY2YmRufAPw: 00:25:09.360 21:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2U0NTdlZDlkZWJlMzQzYzllZWQxZGZlZTc4ZmZhZjb4eL33: 00:25:09.360 21:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:09.360 21:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:09.360 21:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzJhZGNiODlhYjBiYTFlN2EwNzBmMmYyMGJjZWY2YmRufAPw: 00:25:09.360 21:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2U0NTdlZDlkZWJlMzQzYzllZWQxZGZlZTc4ZmZhZjb4eL33: ]] 00:25:09.360 21:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2U0NTdlZDlkZWJlMzQzYzllZWQxZGZlZTc4ZmZhZjb4eL33: 00:25:09.360 21:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:09.360 21:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.360 21:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.360 21:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.360 21:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:25:09.360 21:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.360 21:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:25:09.360 21:06:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.360 21:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.360 21:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:09.360 21:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:09.360 21:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:09.360 21:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:09.360 21:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:09.360 21:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:09.360 21:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:09.360 21:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:09.360 21:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:09.360 21:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.360 21:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.360 request: 00:25:09.360 { 00:25:09.360 "name": "nvme0", 00:25:09.360 "dhchap_key": "key1", 00:25:09.360 "dhchap_ctrlr_key": "ckey2", 00:25:09.360 "method": "bdev_nvme_set_keys", 00:25:09.360 "req_id": 1 00:25:09.360 } 00:25:09.360 Got JSON-RPC error response 00:25:09.360 response: 00:25:09.360 { 00:25:09.360 "code": -13, 00:25:09.360 "message": "Permission denied" 00:25:09.360 } 00:25:09.360 
21:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:09.360 21:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:09.360 21:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:09.360 21:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:09.360 21:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:09.360 21:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:09.360 21:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.360 21:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.360 21:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:09.360 21:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.360 21:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:25:09.360 21:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:25:10.735 21:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:10.735 21:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:10.735 21:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.735 21:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.735 21:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.735 21:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:25:10.735 21:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:25:11.668 21:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:11.668 21:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:11.668 21:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.668 21:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.668 21:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.668 21:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:25:11.668 21:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:11.668 21:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:11.668 21:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:11.668 21:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:11.668 21:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:11.668 21:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzAxZWM2ZWVjZDVkZDU0Y2RkMjZhY2E1YzFkMzkzYTYyNTZhMDA1ZDljNDQ1YTVhHT6KDw==: 00:25:11.668 21:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2JhYTkzZjcyZDEzNDMyOTcyNjI2YzE4YTIwNWY3YzYzNDlmZGQ3NjJkZDc3ZDA1y4ttPQ==: 00:25:11.668 21:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:11.668 21:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:11.669 21:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzAxZWM2ZWVjZDVkZDU0Y2RkMjZhY2E1YzFkMzkzYTYyNTZhMDA1ZDljNDQ1YTVhHT6KDw==: 00:25:11.669 21:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2JhYTkzZjcyZDEzNDMyOTcyNjI2YzE4YTIwNWY3YzYzNDlmZGQ3NjJkZDc3ZDA1y4ttPQ==: ]] 00:25:11.669 21:06:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2JhYTkzZjcyZDEzNDMyOTcyNjI2YzE4YTIwNWY3YzYzNDlmZGQ3NjJkZDc3ZDA1y4ttPQ==: 00:25:11.669 21:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:25:11.669 21:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:11.669 21:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:11.669 21:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:11.669 21:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:11.669 21:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:11.669 21:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:11.669 21:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:11.669 21:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:11.669 21:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:11.669 21:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:11.669 21:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:25:11.669 21:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.669 21:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.669 nvme0n1 00:25:11.669 21:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.669 21:06:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:11.669 21:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:11.669 21:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:11.669 21:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:11.669 21:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:11.669 21:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzJhZGNiODlhYjBiYTFlN2EwNzBmMmYyMGJjZWY2YmRufAPw: 00:25:11.669 21:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2U0NTdlZDlkZWJlMzQzYzllZWQxZGZlZTc4ZmZhZjb4eL33: 00:25:11.669 21:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:11.669 21:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:11.669 21:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzJhZGNiODlhYjBiYTFlN2EwNzBmMmYyMGJjZWY2YmRufAPw: 00:25:11.669 21:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2U0NTdlZDlkZWJlMzQzYzllZWQxZGZlZTc4ZmZhZjb4eL33: ]] 00:25:11.669 21:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2U0NTdlZDlkZWJlMzQzYzllZWQxZGZlZTc4ZmZhZjb4eL33: 00:25:11.669 21:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:11.669 21:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:11.669 21:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:11.669 21:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:11.669 
21:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:11.669 21:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:11.669 21:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:11.669 21:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:11.669 21:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.669 21:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.927 request: 00:25:11.927 { 00:25:11.927 "name": "nvme0", 00:25:11.927 "dhchap_key": "key2", 00:25:11.927 "dhchap_ctrlr_key": "ckey1", 00:25:11.927 "method": "bdev_nvme_set_keys", 00:25:11.927 "req_id": 1 00:25:11.927 } 00:25:11.927 Got JSON-RPC error response 00:25:11.927 response: 00:25:11.927 { 00:25:11.927 "code": -13, 00:25:11.927 "message": "Permission denied" 00:25:11.927 } 00:25:11.927 21:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:11.927 21:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:11.927 21:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:11.927 21:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:11.927 21:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:11.927 21:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:25:11.927 21:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:25:11.927 21:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.927 21:06:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.927 21:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.927 21:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:25:11.927 21:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:25:12.862 21:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:25:12.862 21:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:25:12.862 21:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.862 21:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.862 21:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.862 21:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:25:12.862 21:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:25:12.862 21:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:25:12.862 21:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:25:12.862 21:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:12.862 21:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:25:12.862 21:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:12.862 21:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:25:12.862 21:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:12.862 21:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:12.862 rmmod nvme_tcp 00:25:12.862 rmmod nvme_fabrics 00:25:12.862 21:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 00:25:12.862 21:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:25:12.862 21:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:25:12.862 21:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 4065714 ']' 00:25:12.862 21:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 4065714 00:25:12.862 21:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 4065714 ']' 00:25:12.862 21:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 4065714 00:25:12.862 21:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:25:12.862 21:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:12.862 21:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4065714 00:25:12.862 21:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:12.862 21:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:12.862 21:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4065714' 00:25:12.862 killing process with pid 4065714 00:25:12.862 21:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 4065714 00:25:12.862 21:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 4065714 00:25:13.121 21:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:13.121 21:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:13.121 21:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:13.121 21:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 
00:25:13.121 21:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:25:13.121 21:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:13.121 21:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:25:13.121 21:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:13.121 21:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:13.121 21:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:13.121 21:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:13.121 21:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:15.655 21:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:15.655 21:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:15.655 21:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:15.655 21:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:25:15.655 21:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:25:15.655 21:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:25:15.655 21:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:15.655 21:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:15.655 21:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:15.655 21:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:15.655 21:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:25:15.655 21:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:25:15.655 21:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:16.588 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:25:16.588 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:25:16.588 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:25:16.588 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:25:16.588 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:25:16.588 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:25:16.588 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:25:16.588 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:25:16.588 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:25:16.588 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:25:16.588 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:25:16.588 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:25:16.588 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:25:16.588 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:25:16.588 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:25:16.588 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:25:17.522 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:25:17.522 21:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.tLG /tmp/spdk.key-null.lLx /tmp/spdk.key-sha256.vqY /tmp/spdk.key-sha384.Lxl /tmp/spdk.key-sha512.nJc 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:25:17.522 21:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:18.897 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:25:18.897 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:25:18.897 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:25:18.897 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:25:18.897 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:25:18.897 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:25:18.897 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:25:18.897 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:25:18.897 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:25:18.897 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:25:18.897 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:25:18.897 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:25:18.897 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:25:18.897 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:25:18.897 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:25:18.897 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:25:18.897 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:25:18.897 00:25:18.897 real 0m54.156s 00:25:18.897 user 0m51.951s 00:25:18.897 sys 0m6.069s 00:25:18.897 21:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:18.897 21:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.897 ************************************ 00:25:18.897 END TEST nvmf_auth_host 00:25:18.897 ************************************ 00:25:18.897 21:06:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 
00:25:18.897 21:06:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:18.897 21:06:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:18.897 21:06:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:18.897 21:06:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.897 ************************************ 00:25:18.897 START TEST nvmf_digest 00:25:18.897 ************************************ 00:25:18.897 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:18.897 * Looking for test storage... 00:25:18.897 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:18.897 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:18.897 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:25:18.897 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:19.157 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:19.157 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:19.157 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:19.157 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:19.157 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:25:19.157 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:25:19.157 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:25:19.157 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- 
scripts/common.sh@337 -- # read -ra ver2 00:25:19.157 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:25:19.157 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:25:19.157 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:25:19.157 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:19.157 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:25:19.157 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:25:19.157 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:19.157 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:19.157 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:25:19.157 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:25:19.157 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:19.157 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:25:19.157 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:25:19.157 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:25:19.157 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:25:19.157 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:19.157 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:25:19.157 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:25:19.157 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:19.157 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 
00:25:19.157 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:25:19.157 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:19.157 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:19.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:19.157 --rc genhtml_branch_coverage=1 00:25:19.157 --rc genhtml_function_coverage=1 00:25:19.157 --rc genhtml_legend=1 00:25:19.157 --rc geninfo_all_blocks=1 00:25:19.157 --rc geninfo_unexecuted_blocks=1 00:25:19.157 00:25:19.157 ' 00:25:19.157 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:19.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:19.157 --rc genhtml_branch_coverage=1 00:25:19.157 --rc genhtml_function_coverage=1 00:25:19.157 --rc genhtml_legend=1 00:25:19.157 --rc geninfo_all_blocks=1 00:25:19.157 --rc geninfo_unexecuted_blocks=1 00:25:19.157 00:25:19.157 ' 00:25:19.157 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:19.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:19.157 --rc genhtml_branch_coverage=1 00:25:19.157 --rc genhtml_function_coverage=1 00:25:19.157 --rc genhtml_legend=1 00:25:19.157 --rc geninfo_all_blocks=1 00:25:19.157 --rc geninfo_unexecuted_blocks=1 00:25:19.157 00:25:19.157 ' 00:25:19.157 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:19.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:19.157 --rc genhtml_branch_coverage=1 00:25:19.157 --rc genhtml_function_coverage=1 00:25:19.157 --rc genhtml_legend=1 00:25:19.157 --rc geninfo_all_blocks=1 00:25:19.157 --rc geninfo_unexecuted_blocks=1 00:25:19.157 00:25:19.157 ' 00:25:19.157 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:19.157 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:25:19.157 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:19.157 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:19.157 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:19.157 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:19.157 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:19.157 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:19.157 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:19.157 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:19.157 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:19.157 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:19.157 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:19.157 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:19.157 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:19.157 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:19.157 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:19.157 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:19.157 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:19.157 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:25:19.157 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:19.157 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:19.157 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:19.157 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.157 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.157 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.157 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:25:19.157 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.157 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:25:19.157 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:19.157 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:19.157 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:19.157 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:19.157 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:25:19.157 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:19.157 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:19.157 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:19.157 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:19.157 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:19.157 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:25:19.157 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:25:19.157 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:25:19.157 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:25:19.157 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:25:19.158 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:19.158 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:19.158 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:19.158 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:19.158 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:19.158 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:19.158 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:19.158 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:19.158 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:19.158 21:06:09 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:19.158 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:25:19.158 21:06:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:21.062 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:21.062 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:25:21.062 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:21.062 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:21.062 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:21.062 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:21.062 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:21.062 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:25:21.062 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:21.062 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:25:21.062 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:25:21.062 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:25:21.062 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:25:21.062 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:25:21.062 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:25:21.062 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:21.062 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:21.062 21:06:11 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:21.062 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:21.062 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:21.062 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:21.062 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:21.062 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:21.062 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:21.062 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:21.062 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:21.062 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:21.062 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:21.062 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:21.062 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:21.062 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:21.062 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:21.062 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:21.062 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:21.062 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:21.062 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:21.062 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:21.062 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:21.062 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:21.062 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:21.062 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:21.062 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:21.062 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:21.062 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:21.062 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:21.062 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:21.062 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:21.062 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:21.062 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:21.062 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:21.062 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:21.062 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:21.062 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:21.062 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:21.062 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:21.062 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:21.062 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:21.062 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:21.062 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:21.062 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:21.063 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:21.063 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:21.063 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:21.063 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:21.063 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:21.063 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:21.063 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:21.063 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:21.063 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:21.063 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:21.063 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:21.063 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:21.063 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:21.063 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@442 -- # is_hw=yes 00:25:21.063 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:21.063 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:21.063 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:21.063 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:21.063 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:21.063 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:21.063 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:21.063 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:21.063 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:21.063 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:21.063 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:21.063 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:21.063 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:21.063 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:21.063 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:21.063 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:21.063 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:21.063 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:25:21.063 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:21.063 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:21.063 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:21.063 21:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:21.322 21:06:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:21.322 21:06:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:21.322 21:06:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:21.322 21:06:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:21.322 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:21.322 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.304 ms 00:25:21.322 00:25:21.322 --- 10.0.0.2 ping statistics --- 00:25:21.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:21.322 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:25:21.322 21:06:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:21.322 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:21.322 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:25:21.322 00:25:21.322 --- 10.0.0.1 ping statistics --- 00:25:21.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:21.322 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:25:21.322 21:06:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:21.322 21:06:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:25:21.322 21:06:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:21.322 21:06:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:21.322 21:06:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:21.322 21:06:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:21.322 21:06:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:21.322 21:06:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:21.322 21:06:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:21.322 21:06:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:21.322 21:06:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:25:21.322 21:06:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:25:21.322 21:06:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:21.322 21:06:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:21.322 21:06:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:21.322 ************************************ 00:25:21.322 START TEST nvmf_digest_clean 00:25:21.322 ************************************ 00:25:21.322 
21:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:25:21.322 21:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:25:21.322 21:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:25:21.322 21:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:25:21.322 21:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:25:21.322 21:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:25:21.322 21:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:21.322 21:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:21.322 21:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:21.322 21:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=4076329 00:25:21.322 21:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:21.322 21:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 4076329 00:25:21.322 21:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 4076329 ']' 00:25:21.322 21:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:21.322 21:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:21.322 21:06:12 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:21.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:21.322 21:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:21.322 21:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:21.322 [2024-11-26 21:06:12.138262] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:25:21.322 [2024-11-26 21:06:12.138350] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:21.322 [2024-11-26 21:06:12.211472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:21.581 [2024-11-26 21:06:12.269286] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:21.581 [2024-11-26 21:06:12.269338] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:21.581 [2024-11-26 21:06:12.269366] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:21.581 [2024-11-26 21:06:12.269377] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:21.581 [2024-11-26 21:06:12.269386] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:21.581 [2024-11-26 21:06:12.270021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:21.581 21:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:21.581 21:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:25:21.581 21:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:21.581 21:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:21.581 21:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:21.581 21:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:21.581 21:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:25:21.581 21:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:25:21.582 21:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:25:21.582 21:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.582 21:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:21.840 null0 00:25:21.840 [2024-11-26 21:06:12.523938] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:21.840 [2024-11-26 21:06:12.548242] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:21.840 21:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.840 21:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:25:21.840 21:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:21.841 21:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:21.841 21:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:25:21.841 21:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:25:21.841 21:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:25:21.841 21:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:21.841 21:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=4076362 00:25:21.841 21:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:21.841 21:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 4076362 /var/tmp/bperf.sock 00:25:21.841 21:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 4076362 ']' 00:25:21.841 21:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:21.841 21:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:21.841 21:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:21.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:25:21.841 21:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:21.841 21:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:21.841 [2024-11-26 21:06:12.602781] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:25:21.841 [2024-11-26 21:06:12.602855] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4076362 ] 00:25:21.841 [2024-11-26 21:06:12.678903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:21.841 [2024-11-26 21:06:12.741648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:22.099 21:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:22.099 21:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:25:22.099 21:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:22.099 21:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:22.099 21:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:22.358 21:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:22.358 21:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:22.923 nvme0n1 00:25:22.923 21:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:22.923 21:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:22.923 Running I/O for 2 seconds... 00:25:25.230 17365.00 IOPS, 67.83 MiB/s [2024-11-26T20:06:16.168Z] 17742.50 IOPS, 69.31 MiB/s 00:25:25.230 Latency(us) 00:25:25.230 [2024-11-26T20:06:16.168Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:25.230 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:25.230 nvme0n1 : 2.05 17411.43 68.01 0.00 0.00 7200.27 3349.62 45244.11 00:25:25.230 [2024-11-26T20:06:16.168Z] =================================================================================================================== 00:25:25.230 [2024-11-26T20:06:16.168Z] Total : 17411.43 68.01 0.00 0.00 7200.27 3349.62 45244.11 00:25:25.230 { 00:25:25.230 "results": [ 00:25:25.230 { 00:25:25.230 "job": "nvme0n1", 00:25:25.230 "core_mask": "0x2", 00:25:25.230 "workload": "randread", 00:25:25.230 "status": "finished", 00:25:25.230 "queue_depth": 128, 00:25:25.230 "io_size": 4096, 00:25:25.230 "runtime": 2.04538, 00:25:25.230 "iops": 17411.43455005916, 00:25:25.230 "mibps": 68.01341621116859, 00:25:25.230 "io_failed": 0, 00:25:25.230 "io_timeout": 0, 00:25:25.230 "avg_latency_us": 7200.266291106764, 00:25:25.230 "min_latency_us": 3349.617777777778, 00:25:25.230 "max_latency_us": 45244.112592592595 00:25:25.230 } 00:25:25.230 ], 00:25:25.230 "core_count": 1 00:25:25.230 } 00:25:25.230 21:06:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:25.230 21:06:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
get_accel_stats 00:25:25.230 21:06:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:25.230 21:06:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:25.230 21:06:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:25.230 | select(.opcode=="crc32c") 00:25:25.230 | "\(.module_name) \(.executed)"' 00:25:25.231 21:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:25.231 21:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:25.231 21:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:25.231 21:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:25.231 21:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 4076362 00:25:25.231 21:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 4076362 ']' 00:25:25.231 21:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 4076362 00:25:25.231 21:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:25:25.231 21:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:25.231 21:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4076362 00:25:25.231 21:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:25.231 21:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:25.231 21:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4076362' 00:25:25.231 killing process with pid 4076362 00:25:25.231 21:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 4076362 00:25:25.231 Received shutdown signal, test time was about 2.000000 seconds 00:25:25.231 00:25:25.231 Latency(us) 00:25:25.231 [2024-11-26T20:06:16.169Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:25.231 [2024-11-26T20:06:16.169Z] =================================================================================================================== 00:25:25.231 [2024-11-26T20:06:16.169Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:25.231 21:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 4076362 00:25:25.488 21:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:25:25.488 21:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:25.488 21:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:25.488 21:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:25:25.488 21:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:25:25.489 21:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:25:25.489 21:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:25.489 21:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=4076805 00:25:25.489 21:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:25.489 21:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 4076805 /var/tmp/bperf.sock 00:25:25.489 21:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 4076805 ']' 00:25:25.489 21:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:25.489 21:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:25.489 21:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:25.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:25.489 21:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:25.489 21:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:25.489 [2024-11-26 21:06:16.418702] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:25:25.489 [2024-11-26 21:06:16.418804] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4076805 ] 00:25:25.489 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:25.489 Zero copy mechanism will not be used. 
00:25:25.747 [2024-11-26 21:06:16.493429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:25.747 [2024-11-26 21:06:16.555789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:25.747 21:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:25.747 21:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:25:25.747 21:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:25.747 21:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:25.747 21:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:26.314 21:06:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:26.314 21:06:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:26.572 nvme0n1 00:25:26.572 21:06:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:26.572 21:06:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:26.831 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:26.831 Zero copy mechanism will not be used. 00:25:26.831 Running I/O for 2 seconds... 
00:25:28.699 4644.00 IOPS, 580.50 MiB/s [2024-11-26T20:06:19.637Z] 4562.00 IOPS, 570.25 MiB/s 00:25:28.699 Latency(us) 00:25:28.699 [2024-11-26T20:06:19.637Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:28.699 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:25:28.699 nvme0n1 : 2.00 4561.94 570.24 0.00 0.00 3502.71 1001.24 9514.86 00:25:28.699 [2024-11-26T20:06:19.637Z] =================================================================================================================== 00:25:28.699 [2024-11-26T20:06:19.637Z] Total : 4561.94 570.24 0.00 0.00 3502.71 1001.24 9514.86 00:25:28.699 { 00:25:28.699 "results": [ 00:25:28.699 { 00:25:28.699 "job": "nvme0n1", 00:25:28.699 "core_mask": "0x2", 00:25:28.699 "workload": "randread", 00:25:28.699 "status": "finished", 00:25:28.699 "queue_depth": 16, 00:25:28.699 "io_size": 131072, 00:25:28.699 "runtime": 2.003535, 00:25:28.699 "iops": 4561.936776747099, 00:25:28.699 "mibps": 570.2420970933874, 00:25:28.699 "io_failed": 0, 00:25:28.699 "io_timeout": 0, 00:25:28.699 "avg_latency_us": 3502.707728016858, 00:25:28.699 "min_latency_us": 1001.2444444444444, 00:25:28.699 "max_latency_us": 9514.856296296297 00:25:28.699 } 00:25:28.699 ], 00:25:28.699 "core_count": 1 00:25:28.699 } 00:25:28.699 21:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:28.699 21:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:28.699 21:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:28.699 21:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:28.699 | select(.opcode=="crc32c") 00:25:28.699 | "\(.module_name) \(.executed)"' 00:25:28.699 21:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:28.958 21:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:28.958 21:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:28.958 21:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:28.958 21:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:28.958 21:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 4076805 00:25:28.958 21:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 4076805 ']' 00:25:28.958 21:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 4076805 00:25:28.958 21:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:25:28.958 21:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:28.958 21:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4076805 00:25:28.958 21:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:28.958 21:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:28.958 21:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4076805' 00:25:28.958 killing process with pid 4076805 00:25:28.958 21:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 4076805 00:25:28.958 Received shutdown signal, test time was about 2.000000 seconds 
00:25:28.958 
00:25:28.958 Latency(us) 
00:25:28.958 [2024-11-26T20:06:19.896Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:25:28.958 [2024-11-26T20:06:19.896Z] =================================================================================================================== 
00:25:28.958 [2024-11-26T20:06:19.896Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
00:25:28.958 21:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 4076805 
00:25:29.217 21:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 
00:25:29.217 21:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 
00:25:29.217 21:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 
00:25:29.217 21:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 
00:25:29.217 21:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 
00:25:29.217 21:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 
00:25:29.217 21:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 
00:25:29.217 21:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=4077290 
00:25:29.217 21:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 
00:25:29.217 21:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 4077290 /var/tmp/bperf.sock 
00:25:29.217 21:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 4077290 ']' 
00:25:29.217 21:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 
00:25:29.217 21:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 
00:25:29.217 21:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:25:29.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:25:29.218 21:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 
00:25:29.218 21:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 
00:25:29.218 [2024-11-26 21:06:20.123861] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:25:29.218 [2024-11-26 21:06:20.123936] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4077290 ] 
00:25:29.475 [2024-11-26 21:06:20.193915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 
00:25:29.475 [2024-11-26 21:06:20.253431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 
00:25:29.475 21:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 
00:25:29.476 21:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 
00:25:29.476 21:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 
00:25:29.476 21:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 
00:25:29.476 21:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 
00:25:30.040 21:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 
00:25:30.040 21:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 
00:25:30.297 nvme0n1 
00:25:30.297 21:06:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 
00:25:30.297 21:06:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 
00:25:30.554 Running I/O for 2 seconds... 
00:25:32.417 19914.00 IOPS, 77.79 MiB/s [2024-11-26T20:06:23.355Z] 20068.50 IOPS, 78.39 MiB/s 
00:25:32.417 Latency(us) 
00:25:32.417 [2024-11-26T20:06:23.355Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:25:32.417 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:25:32.417 nvme0n1 : 2.00 20085.73 78.46 0.00 0.00 6364.43 3470.98 18447.17 
00:25:32.417 [2024-11-26T20:06:23.355Z] =================================================================================================================== 
00:25:32.417 [2024-11-26T20:06:23.355Z] Total : 20085.73 78.46 0.00 0.00 6364.43 3470.98 18447.17 
00:25:32.417 { 
00:25:32.417 "results": [ 
00:25:32.417 { 
00:25:32.417 "job": "nvme0n1", 
00:25:32.417 "core_mask": "0x2", 
00:25:32.417 "workload": "randwrite", 
00:25:32.417 "status": "finished", 
00:25:32.417 "queue_depth": 128, 
00:25:32.417 "io_size": 4096, 
00:25:32.417 "runtime": 2.004657, 
00:25:32.417 "iops": 20085.73037681758, 
00:25:32.417 "mibps": 78.45988428444367, 
00:25:32.417 "io_failed": 0, 
00:25:32.417 "io_timeout": 0, 
00:25:32.417 "avg_latency_us": 6364.431114091367, 
00:25:32.417 "min_latency_us": 3470.9807407407407, 
00:25:32.417 "max_latency_us": 18447.17037037037 
00:25:32.417 } 
00:25:32.417 ], 
00:25:32.417 "core_count": 1 
00:25:32.417 } 
00:25:32.417 21:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 
00:25:32.417 21:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 
00:25:32.417 21:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 
00:25:32.417 21:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 
00:25:32.417 | select(.opcode=="crc32c") 
00:25:32.417 | "\(.module_name) \(.executed)"' 
00:25:32.417 21:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 
00:25:32.674 21:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 
00:25:32.674 21:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 
00:25:32.674 21:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 
00:25:32.674 21:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 
00:25:32.674 21:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 4077290 
00:25:32.674 21:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 4077290 ']' 
00:25:32.674 21:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 4077290 
00:25:32.674 21:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 
00:25:32.674 21:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:25:32.674 21:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4077290 
00:25:32.933 21:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 
00:25:32.933 21:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
00:25:32.933 21:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4077290' 
00:25:32.933 killing process with pid 4077290 
00:25:32.933 21:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 4077290 
00:25:32.933 Received shutdown signal, test time was about 2.000000 seconds 
00:25:32.933 
00:25:32.933 Latency(us) 
00:25:32.933 [2024-11-26T20:06:23.871Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:25:32.933 [2024-11-26T20:06:23.871Z] =================================================================================================================== 
00:25:32.933 [2024-11-26T20:06:23.871Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
00:25:32.933 21:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 4077290 
00:25:32.933 21:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 
00:25:32.933 21:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 
00:25:32.933 21:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 
00:25:32.933 21:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 
00:25:32.933 21:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 
00:25:32.933 21:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 
00:25:32.933 21:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 
00:25:32.933 21:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=4077705 
00:25:32.933 21:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 
00:25:32.933 21:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 4077705 /var/tmp/bperf.sock 
00:25:32.933 21:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 4077705 ']' 
00:25:32.933 21:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 
00:25:32.933 21:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 
00:25:32.933 21:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:25:32.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:25:32.933 21:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 
00:25:32.933 21:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 
00:25:33.192 [2024-11-26 21:06:23.892611] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:25:33.192 [2024-11-26 21:06:23.892703] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4077705 ] 
00:25:33.193 I/O size of 131072 is greater than zero copy threshold (65536). 
00:25:33.193 Zero copy mechanism will not be used. 
00:25:33.193 [2024-11-26 21:06:23.958373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 
00:25:33.193 [2024-11-26 21:06:24.016172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 
00:25:33.193 21:06:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 
00:25:33.193 21:06:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 
00:25:33.451 21:06:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 
00:25:33.451 21:06:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 
00:25:33.451 21:06:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 
00:25:33.709 21:06:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 
00:25:33.709 21:06:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 
00:25:34.277 nvme0n1 
00:25:34.277 21:06:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 
00:25:34.277 21:06:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 
00:25:34.277 I/O size of 131072 is greater than zero copy threshold (65536). 
00:25:34.277 Zero copy mechanism will not be used. 
00:25:34.277 Running I/O for 2 seconds... 
00:25:36.588 4448.00 IOPS, 556.00 MiB/s [2024-11-26T20:06:27.526Z] 5080.50 IOPS, 635.06 MiB/s 
00:25:36.588 Latency(us) 
00:25:36.588 [2024-11-26T20:06:27.526Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:25:36.588 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 
00:25:36.588 nvme0n1 : 2.00 5079.72 634.96 0.00 0.00 3142.45 2184.53 8204.14 
00:25:36.588 [2024-11-26T20:06:27.526Z] =================================================================================================================== 
00:25:36.588 [2024-11-26T20:06:27.526Z] Total : 5079.72 634.96 0.00 0.00 3142.45 2184.53 8204.14 
00:25:36.588 { 
00:25:36.588 "results": [ 
00:25:36.588 { 
00:25:36.588 "job": "nvme0n1", 
00:25:36.588 "core_mask": "0x2", 
00:25:36.588 "workload": "randwrite", 
00:25:36.588 "status": "finished", 
00:25:36.588 "queue_depth": 16, 
00:25:36.588 "io_size": 131072, 
00:25:36.588 "runtime": 2.003458, 
00:25:36.588 "iops": 5079.717169014773, 
00:25:36.588 "mibps": 634.9646461268467, 
00:25:36.588 "io_failed": 0, 
00:25:36.588 "io_timeout": 0, 
00:25:36.588 "avg_latency_us": 3142.4516650835762, 
00:25:36.588 "min_latency_us": 2184.5333333333333, 
00:25:36.588 "max_latency_us": 8204.136296296296 
00:25:36.588 } 
00:25:36.588 ], 
00:25:36.588 "core_count": 1 
00:25:36.588 } 
00:25:36.588 21:06:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 
00:25:36.588 21:06:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 
00:25:36.588 21:06:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 
00:25:36.588 21:06:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 
00:25:36.588 | select(.opcode=="crc32c") 
00:25:36.588 | "\(.module_name) \(.executed)"' 
00:25:36.588 21:06:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 
00:25:36.588 21:06:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 
00:25:36.588 21:06:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 
00:25:36.588 21:06:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 
00:25:36.588 21:06:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 
00:25:36.588 21:06:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 4077705 
00:25:36.588 21:06:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 4077705 ']' 
00:25:36.588 21:06:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 4077705 
00:25:36.588 21:06:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 
00:25:36.588 21:06:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:25:36.588 21:06:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4077705 
00:25:36.588 21:06:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 
00:25:36.588 21:06:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
00:25:36.588 21:06:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4077705' 
00:25:36.588 killing process with pid 4077705 
00:25:36.588 21:06:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 4077705 
00:25:36.588 Received shutdown signal, test time was about 2.000000 seconds 
00:25:36.588 
00:25:36.588 Latency(us) 
00:25:36.588 [2024-11-26T20:06:27.526Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:25:36.588 [2024-11-26T20:06:27.526Z] =================================================================================================================== 
00:25:36.588 [2024-11-26T20:06:27.526Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
00:25:36.588 21:06:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 4077705 
00:25:36.846 21:06:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 4076329 
00:25:36.846 21:06:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 4076329 ']' 
00:25:36.846 21:06:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 4076329 
00:25:36.846 21:06:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 
00:25:36.846 21:06:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:25:36.846 21:06:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4076329 
00:25:36.846 21:06:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:25:36.846 21:06:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 
00:25:36.846 21:06:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4076329' 
00:25:36.846 killing process with pid 4076329 
00:25:36.846 21:06:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 4076329 
00:25:36.846 21:06:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 4076329 
00:25:37.106 
00:25:37.106 real 0m15.906s 
00:25:37.106 user 0m32.234s 
00:25:37.106 sys 0m4.134s 
00:25:37.106 21:06:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:25:37.106 21:06:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 
00:25:37.106 ************************************ 
00:25:37.106 END TEST nvmf_digest_clean 
00:25:37.106 ************************************ 
00:25:37.106 21:06:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 
00:25:37.106 21:06:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 
00:25:37.106 21:06:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:25:37.106 21:06:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 
00:25:37.365 ************************************ 
00:25:37.365 START TEST nvmf_digest_error 
00:25:37.365 ************************************ 
00:25:37.365 21:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 
00:25:37.365 21:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 
00:25:37.365 21:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
00:25:37.365 21:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 
00:25:37.365 21:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
00:25:37.365 21:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=4078258 
00:25:37.365 21:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 
00:25:37.365 21:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 4078258 
00:25:37.366 21:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 4078258 ']' 
00:25:37.366 21:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:25:37.366 21:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 
00:25:37.366 21:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:25:37.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:37.366 21:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 
00:25:37.366 21:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
00:25:37.366 [2024-11-26 21:06:28.101741] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:25:37.366 [2024-11-26 21:06:28.101828] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 
00:25:37.366 [2024-11-26 21:06:28.178682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 
00:25:37.366 [2024-11-26 21:06:28.238683] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:25:37.366 [2024-11-26 21:06:28.238766] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:37.366 [2024-11-26 21:06:28.238784] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 
00:25:37.366 [2024-11-26 21:06:28.238798] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:25:37.366 [2024-11-26 21:06:28.238811] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:37.366 [2024-11-26 21:06:28.239464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 
00:25:37.625 21:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 
00:25:37.625 21:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 
00:25:37.625 21:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 
00:25:37.625 21:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 
00:25:37.625 21:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
00:25:37.625 21:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 
00:25:37.625 21:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 
00:25:37.625 21:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 
00:25:37.625 21:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
00:25:37.625 [2024-11-26 21:06:28.356227] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 
00:25:37.625 21:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:25:37.625 21:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 
00:25:37.625 21:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 
00:25:37.625 21:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 
00:25:37.625 21:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
00:25:37.625 null0 
00:25:37.625 [2024-11-26 21:06:28.483540] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:25:37.625 [2024-11-26 21:06:28.507836] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:25:37.625 21:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:25:37.625 21:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 
00:25:37.625 21:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 
00:25:37.625 21:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 
00:25:37.625 21:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 
00:25:37.625 21:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 
00:25:37.625 21:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=4078282 
00:25:37.625 21:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
00:25:37.625 21:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 4078282 /var/tmp/bperf.sock 
00:25:37.625 21:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 4078282 ']' 
00:25:37.625 21:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 
00:25:37.625 21:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 
00:25:37.625 21:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:25:37.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:25:37.625 21:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 
00:25:37.625 21:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
00:25:37.885 [2024-11-26 21:06:28.564273] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:25:37.885 [2024-11-26 21:06:28.564350] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4078282 ] 
00:25:37.885 [2024-11-26 21:06:28.638089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 
00:25:37.885 [2024-11-26 21:06:28.698543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 
00:25:37.885 21:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 
00:25:37.885 21:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 
00:25:37.885 21:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 
00:25:37.885 21:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 
00:25:38.450 21:06:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 
00:25:38.450 21:06:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 
00:25:38.450 21:06:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
00:25:38.450 21:06:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:25:38.450 21:06:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 
00:25:38.450 21:06:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 
00:25:38.707 nvme0n1 
00:25:38.707 21:06:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 
00:25:38.707 21:06:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 
00:25:38.707 21:06:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
00:25:38.966 21:06:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:25:38.966 21:06:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 
00:25:38.966 21:06:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 
00:25:38.966 Running I/O for 2 seconds... 
00:25:38.966 [2024-11-26 21:06:29.795210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x711880) 
00:25:38.966 [2024-11-26 21:06:29.795276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:21338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:38.966 [2024-11-26 21:06:29.795300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:25:38.966 [2024-11-26 21:06:29.809772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x711880) 
00:25:38.966 [2024-11-26 21:06:29.809807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:14327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:38.966 [2024-11-26 21:06:29.809825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:25:38.966 [2024-11-26 21:06:29.822861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x711880) 
00:25:38.966 [2024-11-26 21:06:29.822894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:19629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:38.966 [2024-11-26 21:06:29.822912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:25:38.966 [2024-11-26 21:06:29.834093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x711880) 
00:25:38.966 [2024-11-26 21:06:29.834123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:38.966 [2024-11-26 21:06:29.834155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:25:38.966 [2024-11-26 21:06:29.849302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x711880) 
00:25:38.966 [2024-11-26 21:06:29.849335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:24718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:38.966 [2024-11-26 21:06:29.849353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:25:38.966 [2024-11-26 21:06:29.861902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x711880) 
00:25:38.966 [2024-11-26 21:06:29.861933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:38.966 [2024-11-26 21:06:29.861965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:25:38.966 [2024-11-26 21:06:29.875651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x711880) 
00:25:38.966 [2024-11-26 21:06:29.875682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:19896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:38.966 [2024-11-26 21:06:29.875724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:25:38.966 [2024-11-26 21:06:29.889204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x711880) 
00:25:38.966 [2024-11-26 21:06:29.889243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:38.966 [2024-11-26 21:06:29.889261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:25:39.225 [2024-11-26 21:06:29.903815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x711880) 
00:25:39.225 [2024-11-26 21:06:29.903853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:9754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:39.225 [2024-11-26 21:06:29.903875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:25:39.225 [2024-11-26 21:06:29.914596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x711880) 
00:25:39.225 [2024-11-26 21:06:29.914626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:39.225 [2024-11-26 21:06:29.914657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:25:39.225 [2024-11-26 21:06:29.928068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x711880) 
00:25:39.225 [2024-11-26 21:06:29.928115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:7239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:39.225 [2024-11-26 21:06:29.928131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:25:39.225 [2024-11-26 21:06:29.942079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x711880) 
00:25:39.225 [2024-11-26 21:06:29.942109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:39.225 [2024-11-26 21:06:29.942141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:25:39.225 [2024-11-26 21:06:29.955327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x711880) 
00:25:39.225 [2024-11-26 21:06:29.955359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:25571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:39.225 [2024-11-26 21:06:29.955377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:25:39.225 [2024-11-26 21:06:29.969851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x711880) 
00:25:39.225 [2024-11-26 21:06:29.969884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:23459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:39.225 [2024-11-26 21:06:29.969902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:25:39.225 [2024-11-26 21:06:29.980928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x711880) 
00:25:39.225 [2024-11-26 21:06:29.980959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:22806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:39.225 [2024-11-26 21:06:29.980990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:25:39.225 [2024-11-26 21:06:29.996539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x711880) 
00:25:39.225 [2024-11-26 21:06:29.996576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:10476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:39.225 [2024-11-26 21:06:29.996597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:25:39.225 [2024-11-26 21:06:30.014719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x711880) 
00:25:39.225 [2024-11-26 21:06:30.014783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:39.225 [2024-11-26 21:06:30.014801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:25:39.226 [2024-11-26 21:06:30.026269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x711880) 
00:25:39.226 [2024-11-26 21:06:30.026305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:39.226 [2024-11-26 21:06:30.026326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:25:39.226 [2024-11-26 21:06:30.040817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x711880) 
00:25:39.226 [2024-11-26 21:06:30.040857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:18704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:39.226 [2024-11-26 21:06:30.040876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:25:39.226 [2024-11-26 21:06:30.055150] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x711880) 00:25:39.226 [2024-11-26 21:06:30.055189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:18069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.226 [2024-11-26 21:06:30.055209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.226 [2024-11-26 21:06:30.070460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x711880) 00:25:39.226 [2024-11-26 21:06:30.070497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:23462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.226 [2024-11-26 21:06:30.070518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.226 [2024-11-26 21:06:30.085455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x711880) 00:25:39.226 [2024-11-26 21:06:30.085491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:14515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.226 [2024-11-26 21:06:30.085512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.226 [2024-11-26 21:06:30.100889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x711880) 00:25:39.226 [2024-11-26 21:06:30.100921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:19281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.226 [2024-11-26 21:06:30.100939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:25:39.226 [2024-11-26 21:06:30.116332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x711880) 00:25:39.226 [2024-11-26 21:06:30.116366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:17935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.226 [2024-11-26 21:06:30.116386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.226 [2024-11-26 21:06:30.129699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x711880) 00:25:39.226 [2024-11-26 21:06:30.129748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:22360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.226 [2024-11-26 21:06:30.129775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.226 [2024-11-26 21:06:30.146035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x711880) 00:25:39.226 [2024-11-26 21:06:30.146066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:6850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.226 [2024-11-26 21:06:30.146082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.226 [2024-11-26 21:06:30.162433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x711880) 00:25:39.226 [2024-11-26 21:06:30.162468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:25381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.226 [2024-11-26 21:06:30.162488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.534 [2024-11-26 21:06:30.174425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x711880) 00:25:39.534 [2024-11-26 21:06:30.174464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:13026 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.534 [2024-11-26 21:06:30.174488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.534 [2024-11-26 21:06:30.191090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x711880) 00:25:39.534 [2024-11-26 21:06:30.191127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:10932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.534 [2024-11-26 21:06:30.191147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.534 [2024-11-26 21:06:30.204540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x711880) 00:25:39.534 [2024-11-26 21:06:30.204574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:14959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.534 [2024-11-26 21:06:30.204594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.534 [2024-11-26 21:06:30.220446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x711880) 00:25:39.534 [2024-11-26 21:06:30.220481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:12806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.534 [2024-11-26 21:06:30.220500] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.534 [2024-11-26 21:06:30.236867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x711880) 00:25:39.534 [2024-11-26 21:06:30.236898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:16453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.534 [2024-11-26 21:06:30.236916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.534 [2024-11-26 21:06:30.253197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x711880) 00:25:39.534 [2024-11-26 21:06:30.253232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:9102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.534 [2024-11-26 21:06:30.253251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.534 [2024-11-26 21:06:30.265555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x711880) 00:25:39.534 [2024-11-26 21:06:30.265596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:7178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.534 [2024-11-26 21:06:30.265616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.534 [2024-11-26 21:06:30.279525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x711880) 00:25:39.534 [2024-11-26 21:06:30.279558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:12705 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:39.534 [2024-11-26 21:06:30.279577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.534 [2024-11-26 21:06:30.294254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x711880) 00:25:39.534 [2024-11-26 21:06:30.294287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:16160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.534 [2024-11-26 21:06:30.294306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.534 [2024-11-26 21:06:30.310305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x711880) 00:25:39.535 [2024-11-26 21:06:30.310339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:5865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.535 [2024-11-26 21:06:30.310358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.535 [2024-11-26 21:06:30.325428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x711880) 00:25:39.535 [2024-11-26 21:06:30.325464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:4243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.535 [2024-11-26 21:06:30.325484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.535 [2024-11-26 21:06:30.339212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x711880) 00:25:39.535 [2024-11-26 21:06:30.339247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:36 nsid:1 lba:6609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.535 [2024-11-26 21:06:30.339267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.535 [2024-11-26 21:06:30.353886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x711880) 00:25:39.535 [2024-11-26 21:06:30.353916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.535 [2024-11-26 21:06:30.353948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.535 [2024-11-26 21:06:30.372090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x711880) 00:25:39.535 [2024-11-26 21:06:30.372126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:12025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.535 [2024-11-26 21:06:30.372145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.535 [2024-11-26 21:06:30.388524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x711880) 00:25:39.535 [2024-11-26 21:06:30.388560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:10169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.535 [2024-11-26 21:06:30.388580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.535 [2024-11-26 21:06:30.406431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x711880) 00:25:39.535 [2024-11-26 21:06:30.406466] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:4770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.535 [2024-11-26 21:06:30.406486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.535 [2024-11-26 21:06:30.423556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x711880) 00:25:39.535 [2024-11-26 21:06:30.423596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:10572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.535 [2024-11-26 21:06:30.423617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.823 [2024-11-26 21:06:30.437350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x711880) 00:25:39.823 [2024-11-26 21:06:30.437391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:20768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.823 [2024-11-26 21:06:30.437413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.823 [2024-11-26 21:06:30.454321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x711880) 00:25:39.823 [2024-11-26 21:06:30.454357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:1499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.823 [2024-11-26 21:06:30.454377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.823 [2024-11-26 21:06:30.471518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x711880) 00:25:39.823 [2024-11-26 21:06:30.471553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.823 [2024-11-26 21:06:30.471573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.824 [2024-11-26 21:06:30.490996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x711880) 00:25:39.824 [2024-11-26 21:06:30.491043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:1243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.824 [2024-11-26 21:06:30.491064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.824 [2024-11-26 21:06:30.505907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x711880) 00:25:39.824 [2024-11-26 21:06:30.505939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.824 [2024-11-26 21:06:30.505957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.824 [2024-11-26 21:06:30.519063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x711880) 00:25:39.824 [2024-11-26 21:06:30.519099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:19977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.824 [2024-11-26 21:06:30.519119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.824 [2024-11-26 21:06:30.533143] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x711880) 00:25:39.824 [2024-11-26 21:06:30.533179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:21313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.824 [2024-11-26 21:06:30.533210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.824 [2024-11-26 21:06:30.549320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x711880) 00:25:39.824 [2024-11-26 21:06:30.549357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.824 [2024-11-26 21:06:30.549377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.824 [2024-11-26 21:06:30.565425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x711880) 00:25:39.824 [2024-11-26 21:06:30.565461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:2359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.824 [2024-11-26 21:06:30.565482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.824 [2024-11-26 21:06:30.577486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x711880) 00:25:39.824 [2024-11-26 21:06:30.577522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:12291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.824 [2024-11-26 21:06:30.577541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:25:39.824 [2024-11-26 21:06:30.594501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x711880) 00:25:39.824 [2024-11-26 21:06:30.594549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.824 [2024-11-26 21:06:30.594569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.824 [2024-11-26 21:06:30.607241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x711880) 00:25:39.824 [2024-11-26 21:06:30.607277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:2354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.824 [2024-11-26 21:06:30.607297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.824 [2024-11-26 21:06:30.622672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x711880) 00:25:39.824 [2024-11-26 21:06:30.622736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:10204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.824 [2024-11-26 21:06:30.622756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.824 [2024-11-26 21:06:30.639316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x711880) 00:25:39.824 [2024-11-26 21:06:30.639352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.824 [2024-11-26 21:06:30.639372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.824 [2024-11-26 21:06:30.654219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x711880) 00:25:39.824 [2024-11-26 21:06:30.654254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:1949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.824 [2024-11-26 21:06:30.654274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.824 [2024-11-26 21:06:30.667611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x711880) 00:25:39.824 [2024-11-26 21:06:30.667653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:18009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.824 [2024-11-26 21:06:30.667674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.824 [2024-11-26 21:06:30.681361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x711880) 00:25:39.824 [2024-11-26 21:06:30.681397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:2568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.824 [2024-11-26 21:06:30.681417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.824 [2024-11-26 21:06:30.695741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x711880) 00:25:39.824 [2024-11-26 21:06:30.695770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:21447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.824 [2024-11-26 21:06:30.695801] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.824 [2024-11-26 21:06:30.709753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x711880) 00:25:39.824 [2024-11-26 21:06:30.709797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.824 [2024-11-26 21:06:30.709813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.824 [2024-11-26 21:06:30.724073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x711880) 00:25:39.824 [2024-11-26 21:06:30.724109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.824 [2024-11-26 21:06:30.724128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.824 [2024-11-26 21:06:30.738240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x711880) 00:25:39.824 [2024-11-26 21:06:30.738275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:5901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.824 [2024-11-26 21:06:30.738295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.824 [2024-11-26 21:06:30.753124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x711880) 00:25:39.824 [2024-11-26 21:06:30.753158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:6786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:39.824 [2024-11-26 21:06:30.753177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.083 [2024-11-26 21:06:30.768186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x711880) 00:25:40.083 [2024-11-26 21:06:30.768222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:18129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.083 [2024-11-26 21:06:30.768242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.083 17237.00 IOPS, 67.33 MiB/s [2024-11-26T20:06:31.021Z] [2024-11-26 21:06:30.780875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x711880) 00:25:40.083 [2024-11-26 21:06:30.780906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.083 [2024-11-26 21:06:30.780944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.083 [2024-11-26 21:06:30.797415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x711880) 00:25:40.083 [2024-11-26 21:06:30.797451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:4463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.083 [2024-11-26 21:06:30.797471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.083 [2024-11-26 21:06:30.812878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x711880) 00:25:40.083 [2024-11-26 21:06:30.812909] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.083 [2024-11-26 21:06:30.812927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... many similar records omitted for brevity: each is a nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done *ERROR*: data digest error on tqpair=(0x711880), followed by a nvme_qpair.c READ command print and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, with varying cid/lba, spanning 21:06:30.829 through 21:06:31.773 ...]
00:25:40.863 [2024-11-26 21:06:31.773768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x711880) 00:25:40.863 [2024-11-26 21:06:31.773799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:20883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.863 [2024-11-26 21:06:31.773816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:41.122 17759.00 IOPS, 69.37 MiB/s 00:25:41.122 Latency(us)
00:25:41.122 [2024-11-26T20:06:32.060Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:41.122 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:41.122 nvme0n1 : 2.05 17429.46 68.08 0.00 0.00 7191.17 3713.71 46409.20 00:25:41.122 [2024-11-26T20:06:32.060Z] =================================================================================================================== 00:25:41.122 [2024-11-26T20:06:32.060Z] Total : 17429.46 68.08 0.00 0.00 7191.17 3713.71 46409.20 00:25:41.122 { 00:25:41.122 "results": [ 00:25:41.122 { 00:25:41.122 "job": "nvme0n1", 00:25:41.122 "core_mask": "0x2", 00:25:41.122 "workload": "randread", 00:25:41.122 "status": "finished", 00:25:41.122 "queue_depth": 128, 00:25:41.122 "io_size": 4096, 00:25:41.122 "runtime": 2.045158, 00:25:41.122 "iops": 17429.460217743566, 00:25:41.122 "mibps": 68.0838289755608, 00:25:41.122 "io_failed": 0, 00:25:41.122 "io_timeout": 0, 00:25:41.122 "avg_latency_us": 7191.1726279609575, 00:25:41.122 "min_latency_us": 3713.7066666666665, 00:25:41.122 "max_latency_us": 46409.19703703704 00:25:41.122 } 00:25:41.122 ], 00:25:41.122 "core_count": 1 00:25:41.122 } 00:25:41.122 21:06:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:41.122 21:06:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:41.122 21:06:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:41.122 | .driver_specific 00:25:41.122 | .nvme_error 00:25:41.122 | .status_code 00:25:41.122 | .command_transient_transport_error' 00:25:41.122 21:06:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:41.380 21:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@71 -- # (( 139 > 0 )) 00:25:41.380 21:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 4078282 00:25:41.380 21:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 4078282 ']' 00:25:41.380 21:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 4078282 00:25:41.380 21:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:25:41.380 21:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:41.380 21:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4078282 00:25:41.380 21:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:41.380 21:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:41.380 21:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4078282' 00:25:41.380 killing process with pid 4078282 00:25:41.380 21:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 4078282 00:25:41.380 Received shutdown signal, test time was about 2.000000 seconds 00:25:41.380 00:25:41.380 Latency(us) 00:25:41.380 [2024-11-26T20:06:32.318Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:41.380 [2024-11-26T20:06:32.318Z] =================================================================================================================== 00:25:41.380 [2024-11-26T20:06:32.318Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:41.380 21:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 4078282 00:25:41.638 21:06:32 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:25:41.638 21:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:41.638 21:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:25:41.638 21:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:25:41.638 21:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:25:41.638 21:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=4078757 00:25:41.638 21:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:25:41.639 21:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 4078757 /var/tmp/bperf.sock 00:25:41.639 21:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 4078757 ']' 00:25:41.639 21:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:41.639 21:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:41.639 21:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:41.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:25:41.639 21:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:41.639 21:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:41.639 [2024-11-26 21:06:32.427439] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:25:41.639 [2024-11-26 21:06:32.427533] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4078757 ] 00:25:41.639 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:41.639 Zero copy mechanism will not be used. 00:25:41.639 [2024-11-26 21:06:32.499974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:41.639 [2024-11-26 21:06:32.561667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:41.897 21:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:41.897 21:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:25:41.897 21:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:41.897 21:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:42.155 21:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:42.155 21:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.155 21:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@10 -- # set +x 00:25:42.155 21:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.155 21:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:42.155 21:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:42.413 nvme0n1 00:25:42.413 21:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:25:42.413 21:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.413 21:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:42.413 21:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.413 21:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:42.413 21:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:42.672 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:42.672 Zero copy mechanism will not be used. 00:25:42.672 Running I/O for 2 seconds... 
00:25:42.672 [2024-11-26 21:06:33.396387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:42.672 [2024-11-26 21:06:33.396456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.672 [2024-11-26 21:06:33.396480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:42.672 [2024-11-26 21:06:33.406651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:42.672 [2024-11-26 21:06:33.406708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.672 [2024-11-26 21:06:33.406746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:42.672 [2024-11-26 21:06:33.416471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:42.672 [2024-11-26 21:06:33.416509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.672 [2024-11-26 21:06:33.416536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:42.672 [2024-11-26 21:06:33.426581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:42.672 [2024-11-26 21:06:33.426619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.672 [2024-11-26 21:06:33.426643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:42.672 [2024-11-26 21:06:33.436625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:42.672 [2024-11-26 21:06:33.436663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.672 [2024-11-26 21:06:33.436696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:42.672 [2024-11-26 21:06:33.446264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:42.672 [2024-11-26 21:06:33.446312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.672 [2024-11-26 21:06:33.446342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:42.672 [2024-11-26 21:06:33.456195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:42.672 [2024-11-26 21:06:33.456233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.672 [2024-11-26 21:06:33.456254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:42.672 [2024-11-26 21:06:33.466145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:42.672 [2024-11-26 21:06:33.466182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.672 [2024-11-26 21:06:33.466203] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:42.672 [2024-11-26 21:06:33.477045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:42.672 [2024-11-26 21:06:33.477083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.672 [2024-11-26 21:06:33.477108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:42.672 [2024-11-26 21:06:33.487865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:42.672 [2024-11-26 21:06:33.487900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.672 [2024-11-26 21:06:33.487918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:42.672 [2024-11-26 21:06:33.498673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:42.672 [2024-11-26 21:06:33.498736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.672 [2024-11-26 21:06:33.498755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:42.672 [2024-11-26 21:06:33.504276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:42.672 [2024-11-26 21:06:33.504313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:42.672 [2024-11-26 21:06:33.504333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:42.672 [2024-11-26 21:06:33.513286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:42.672 [2024-11-26 21:06:33.513324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.672 [2024-11-26 21:06:33.513345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:42.672 [2024-11-26 21:06:33.523148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:42.672 [2024-11-26 21:06:33.523193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.672 [2024-11-26 21:06:33.523210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:42.672 [2024-11-26 21:06:33.533035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:42.672 [2024-11-26 21:06:33.533072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.672 [2024-11-26 21:06:33.533099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:42.672 [2024-11-26 21:06:33.543141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:42.672 [2024-11-26 21:06:33.543184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:11 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.672 [2024-11-26 21:06:33.543204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:42.672 [2024-11-26 21:06:33.553246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:42.672 [2024-11-26 21:06:33.553292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.672 [2024-11-26 21:06:33.553313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:42.672 [2024-11-26 21:06:33.562708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:42.672 [2024-11-26 21:06:33.562742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.672 [2024-11-26 21:06:33.562762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:42.672 [2024-11-26 21:06:33.572311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:42.672 [2024-11-26 21:06:33.572344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.672 [2024-11-26 21:06:33.572363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:42.672 [2024-11-26 21:06:33.580748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:42.672 [2024-11-26 21:06:33.580781] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.673 [2024-11-26 21:06:33.580800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:42.673 [2024-11-26 21:06:33.590268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:42.673 [2024-11-26 21:06:33.590302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.673 [2024-11-26 21:06:33.590334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:42.673 [2024-11-26 21:06:33.599487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:42.673 [2024-11-26 21:06:33.599521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.673 [2024-11-26 21:06:33.599539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:42.673 [2024-11-26 21:06:33.608861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:42.673 [2024-11-26 21:06:33.608896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.673 [2024-11-26 21:06:33.608921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:42.931 [2024-11-26 21:06:33.619149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x12d0dc0) 00:25:42.932 [2024-11-26 21:06:33.619187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.932 [2024-11-26 21:06:33.619207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:42.932 [2024-11-26 21:06:33.629139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:42.932 [2024-11-26 21:06:33.629177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.932 [2024-11-26 21:06:33.629206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:42.932 [2024-11-26 21:06:33.638176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:42.932 [2024-11-26 21:06:33.638212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.932 [2024-11-26 21:06:33.638234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:42.932 [2024-11-26 21:06:33.647595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:42.932 [2024-11-26 21:06:33.647632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.932 [2024-11-26 21:06:33.647652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:42.932 [2024-11-26 21:06:33.657150] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:42.932 [2024-11-26 21:06:33.657187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.932 [2024-11-26 21:06:33.657207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:42.932 [2024-11-26 21:06:33.666679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:42.932 [2024-11-26 21:06:33.666740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.932 [2024-11-26 21:06:33.666759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:42.932 [2024-11-26 21:06:33.676385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:42.932 [2024-11-26 21:06:33.676421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.932 [2024-11-26 21:06:33.676442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:42.932 [2024-11-26 21:06:33.686349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:42.932 [2024-11-26 21:06:33.686387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.932 [2024-11-26 21:06:33.686407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:25:42.932 [2024-11-26 21:06:33.696607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:42.932 [2024-11-26 21:06:33.696644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.932 [2024-11-26 21:06:33.696671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:42.932 [2024-11-26 21:06:33.706723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:42.932 [2024-11-26 21:06:33.706774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.932 [2024-11-26 21:06:33.706795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:42.932 [2024-11-26 21:06:33.715866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:42.932 [2024-11-26 21:06:33.715914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.932 [2024-11-26 21:06:33.715932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:42.932 [2024-11-26 21:06:33.724990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:42.932 [2024-11-26 21:06:33.725039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.932 [2024-11-26 21:06:33.725062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:42.932 [2024-11-26 21:06:33.734348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:42.932 [2024-11-26 21:06:33.734385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.932 [2024-11-26 21:06:33.734406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:42.932 [2024-11-26 21:06:33.743856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:42.932 [2024-11-26 21:06:33.743904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.932 [2024-11-26 21:06:33.743922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:42.932 [2024-11-26 21:06:33.753119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:42.932 [2024-11-26 21:06:33.753157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.932 [2024-11-26 21:06:33.753177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:42.932 [2024-11-26 21:06:33.762711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:42.932 [2024-11-26 21:06:33.762761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.932 [2024-11-26 
21:06:33.762780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:42.932 [2024-11-26 21:06:33.772941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:42.932 [2024-11-26 21:06:33.772990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.932 [2024-11-26 21:06:33.773017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:42.932 [2024-11-26 21:06:33.783475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:42.932 [2024-11-26 21:06:33.783511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.932 [2024-11-26 21:06:33.783531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:42.932 [2024-11-26 21:06:33.793461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:42.932 [2024-11-26 21:06:33.793498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.932 [2024-11-26 21:06:33.793517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:42.932 [2024-11-26 21:06:33.799246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:42.932 [2024-11-26 21:06:33.799283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7968 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.932 [2024-11-26 21:06:33.799303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:42.932 [2024-11-26 21:06:33.807825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:42.932 [2024-11-26 21:06:33.807855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.932 [2024-11-26 21:06:33.807875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:42.932 [2024-11-26 21:06:33.817570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:42.932 [2024-11-26 21:06:33.817607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.932 [2024-11-26 21:06:33.817627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:42.932 [2024-11-26 21:06:33.827135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:42.932 [2024-11-26 21:06:33.827173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.933 [2024-11-26 21:06:33.827193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:42.933 [2024-11-26 21:06:33.836149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:42.933 [2024-11-26 21:06:33.836186] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.933 [2024-11-26 21:06:33.836206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:42.933 [2024-11-26 21:06:33.845549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:42.933 [2024-11-26 21:06:33.845586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.933 [2024-11-26 21:06:33.845607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:42.933 [2024-11-26 21:06:33.854211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:42.933 [2024-11-26 21:06:33.854254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.933 [2024-11-26 21:06:33.854275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:42.933 [2024-11-26 21:06:33.862948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:42.933 [2024-11-26 21:06:33.862999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.933 [2024-11-26 21:06:33.863023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:43.191 [2024-11-26 21:06:33.872811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 
00:25:43.191 [2024-11-26 21:06:33.872844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.191 [2024-11-26 21:06:33.872865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:43.191 [2024-11-26 21:06:33.882559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.191 [2024-11-26 21:06:33.882596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.191 [2024-11-26 21:06:33.882620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:43.191 [2024-11-26 21:06:33.892178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.192 [2024-11-26 21:06:33.892216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.192 [2024-11-26 21:06:33.892238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:43.192 [2024-11-26 21:06:33.901366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.192 [2024-11-26 21:06:33.901404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.192 [2024-11-26 21:06:33.901425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:43.192 [2024-11-26 21:06:33.911322] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.192 [2024-11-26 21:06:33.911360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.192 [2024-11-26 21:06:33.911380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:43.192 [2024-11-26 21:06:33.921043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.192 [2024-11-26 21:06:33.921096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.192 [2024-11-26 21:06:33.921118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:43.192 [2024-11-26 21:06:33.930423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.192 [2024-11-26 21:06:33.930460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.192 [2024-11-26 21:06:33.930480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:43.192 [2024-11-26 21:06:33.939532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.192 [2024-11-26 21:06:33.939568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.192 [2024-11-26 21:06:33.939588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:25:43.192 [2024-11-26 21:06:33.948410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.192 [2024-11-26 21:06:33.948447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.192 [2024-11-26 21:06:33.948468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:43.192 [2024-11-26 21:06:33.957709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.192 [2024-11-26 21:06:33.957761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.192 [2024-11-26 21:06:33.957780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:43.192 [2024-11-26 21:06:33.966950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.192 [2024-11-26 21:06:33.966984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.192 [2024-11-26 21:06:33.967020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:43.192 [2024-11-26 21:06:33.976137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.192 [2024-11-26 21:06:33.976174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.192 [2024-11-26 21:06:33.976194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:43.192 [2024-11-26 21:06:33.985353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.192 [2024-11-26 21:06:33.985389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.192 [2024-11-26 21:06:33.985410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:43.192 [2024-11-26 21:06:33.994849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.192 [2024-11-26 21:06:33.994887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.192 [2024-11-26 21:06:33.994908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:43.192 [2024-11-26 21:06:34.004644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.192 [2024-11-26 21:06:34.004681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.192 [2024-11-26 21:06:34.004712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:43.192 [2024-11-26 21:06:34.014057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.192 [2024-11-26 21:06:34.014093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.192 [2024-11-26 21:06:34.014121] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:43.192 [2024-11-26 21:06:34.023823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.192 [2024-11-26 21:06:34.023857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.192 [2024-11-26 21:06:34.023876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:43.192 [2024-11-26 21:06:34.033740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.192 [2024-11-26 21:06:34.033775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.192 [2024-11-26 21:06:34.033794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:43.192 [2024-11-26 21:06:34.043649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.192 [2024-11-26 21:06:34.043696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.192 [2024-11-26 21:06:34.043734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:43.192 [2024-11-26 21:06:34.053331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.192 [2024-11-26 21:06:34.053368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:43.192 [2024-11-26 21:06:34.053389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:43.192 [2024-11-26 21:06:34.062800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.192 [2024-11-26 21:06:34.062834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.192 [2024-11-26 21:06:34.062852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:43.192 [2024-11-26 21:06:34.072293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.192 [2024-11-26 21:06:34.072330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.192 [2024-11-26 21:06:34.072351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:43.192 [2024-11-26 21:06:34.081838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.192 [2024-11-26 21:06:34.081871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.192 [2024-11-26 21:06:34.081889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:43.192 [2024-11-26 21:06:34.091036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.192 [2024-11-26 21:06:34.091074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:11 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.192 [2024-11-26 21:06:34.091095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:43.192 [2024-11-26 21:06:34.100612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.192 [2024-11-26 21:06:34.100654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.192 [2024-11-26 21:06:34.100675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:43.192 [2024-11-26 21:06:34.110579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.192 [2024-11-26 21:06:34.110617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.192 [2024-11-26 21:06:34.110638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:43.192 [2024-11-26 21:06:34.120784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.192 [2024-11-26 21:06:34.120817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.192 [2024-11-26 21:06:34.120835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:43.451 [2024-11-26 21:06:34.130167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.451 [2024-11-26 21:06:34.130205] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.451 [2024-11-26 21:06:34.130225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:43.451 [2024-11-26 21:06:34.139134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.451 [2024-11-26 21:06:34.139173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.451 [2024-11-26 21:06:34.139193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:43.451 [2024-11-26 21:06:34.148963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.451 [2024-11-26 21:06:34.148997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.451 [2024-11-26 21:06:34.149015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:43.451 [2024-11-26 21:06:34.158913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.451 [2024-11-26 21:06:34.158947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.451 [2024-11-26 21:06:34.158966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:43.451 [2024-11-26 21:06:34.168464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x12d0dc0) 00:25:43.451 [2024-11-26 21:06:34.168500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.451 [2024-11-26 21:06:34.168521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:43.451 [2024-11-26 21:06:34.178310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.451 [2024-11-26 21:06:34.178347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.451 [2024-11-26 21:06:34.178368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:43.451 [2024-11-26 21:06:34.188139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.451 [2024-11-26 21:06:34.188176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.451 [2024-11-26 21:06:34.188197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:43.451 [2024-11-26 21:06:34.198057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.451 [2024-11-26 21:06:34.198095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.451 [2024-11-26 21:06:34.198115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:43.451 [2024-11-26 21:06:34.207911] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.451 [2024-11-26 21:06:34.207943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.451 [2024-11-26 21:06:34.207961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:43.451 [2024-11-26 21:06:34.217380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.451 [2024-11-26 21:06:34.217416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.451 [2024-11-26 21:06:34.217436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:43.451 [2024-11-26 21:06:34.227176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.451 [2024-11-26 21:06:34.227214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.451 [2024-11-26 21:06:34.227234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:43.451 [2024-11-26 21:06:34.236798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.451 [2024-11-26 21:06:34.236832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.451 [2024-11-26 21:06:34.236866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:25:43.451 [2024-11-26 21:06:34.246678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.451 [2024-11-26 21:06:34.246728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.451 [2024-11-26 21:06:34.246765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:43.451 [2024-11-26 21:06:34.256614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.451 [2024-11-26 21:06:34.256651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.451 [2024-11-26 21:06:34.256671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:43.451 [2024-11-26 21:06:34.265958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.451 [2024-11-26 21:06:34.266016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.451 [2024-11-26 21:06:34.266038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:43.451 [2024-11-26 21:06:34.275257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.451 [2024-11-26 21:06:34.275294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.451 [2024-11-26 21:06:34.275314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:43.451 [2024-11-26 21:06:34.284422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.451 [2024-11-26 21:06:34.284459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.451 [2024-11-26 21:06:34.284479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:43.451 [2024-11-26 21:06:34.293768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.451 [2024-11-26 21:06:34.293802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.451 [2024-11-26 21:06:34.293836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:43.451 [2024-11-26 21:06:34.303260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.451 [2024-11-26 21:06:34.303296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.451 [2024-11-26 21:06:34.303317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:43.451 [2024-11-26 21:06:34.312700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.451 [2024-11-26 21:06:34.312753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.451 [2024-11-26 
21:06:34.312773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:43.452 [2024-11-26 21:06:34.322417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.452 [2024-11-26 21:06:34.322455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.452 [2024-11-26 21:06:34.322476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:43.452 [2024-11-26 21:06:34.332221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.452 [2024-11-26 21:06:34.332258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.452 [2024-11-26 21:06:34.332279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:43.452 [2024-11-26 21:06:34.341797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.452 [2024-11-26 21:06:34.341831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.452 [2024-11-26 21:06:34.341849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:43.452 [2024-11-26 21:06:34.351270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.452 [2024-11-26 21:06:34.351307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14784 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.452 [2024-11-26 21:06:34.351328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:43.452 [2024-11-26 21:06:34.360773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.452 [2024-11-26 21:06:34.360807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.452 [2024-11-26 21:06:34.360826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:43.452 [2024-11-26 21:06:34.370172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.452 [2024-11-26 21:06:34.370209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.452 [2024-11-26 21:06:34.370230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:43.452 [2024-11-26 21:06:34.379857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.452 [2024-11-26 21:06:34.379905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.452 [2024-11-26 21:06:34.379923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:43.710 [2024-11-26 21:06:34.389367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.710 [2024-11-26 21:06:34.389404] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.710 [2024-11-26 21:06:34.389425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:43.710 3239.00 IOPS, 404.88 MiB/s [2024-11-26T20:06:34.648Z] [2024-11-26 21:06:34.396037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.710 [2024-11-26 21:06:34.396075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.710 [2024-11-26 21:06:34.396095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:43.710 [2024-11-26 21:06:34.405727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.710 [2024-11-26 21:06:34.405775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.710 [2024-11-26 21:06:34.405792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:43.710 [2024-11-26 21:06:34.414472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.710 [2024-11-26 21:06:34.414510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.710 [2024-11-26 21:06:34.414530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:43.710 [2024-11-26 21:06:34.423878] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.710 [2024-11-26 21:06:34.423927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.710 [2024-11-26 21:06:34.423950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:43.710 [2024-11-26 21:06:34.433739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.710 [2024-11-26 21:06:34.433771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.710 [2024-11-26 21:06:34.433789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:43.710 [2024-11-26 21:06:34.443079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.710 [2024-11-26 21:06:34.443117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.710 [2024-11-26 21:06:34.443137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:43.710 [2024-11-26 21:06:34.452653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.710 [2024-11-26 21:06:34.452699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.710 [2024-11-26 21:06:34.452737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:25:43.710 [2024-11-26 21:06:34.463303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.710 [2024-11-26 21:06:34.463341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.710 [2024-11-26 21:06:34.463362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:43.710 [2024-11-26 21:06:34.473370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.710 [2024-11-26 21:06:34.473407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.710 [2024-11-26 21:06:34.473428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:43.710 [2024-11-26 21:06:34.482970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.710 [2024-11-26 21:06:34.483002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.710 [2024-11-26 21:06:34.483036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:43.710 [2024-11-26 21:06:34.492047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.710 [2024-11-26 21:06:34.492085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.710 [2024-11-26 21:06:34.492106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:43.710 [2024-11-26 21:06:34.501438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.710 [2024-11-26 21:06:34.501476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.710 [2024-11-26 21:06:34.501497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:43.710 [2024-11-26 21:06:34.510922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.710 [2024-11-26 21:06:34.510975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.710 [2024-11-26 21:06:34.510993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:43.710 [2024-11-26 21:06:34.520094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.710 [2024-11-26 21:06:34.520132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.710 [2024-11-26 21:06:34.520153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:43.710 [2024-11-26 21:06:34.529289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.711 [2024-11-26 21:06:34.529327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.711 [2024-11-26 21:06:34.529347] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:43.711 [2024-11-26 21:06:34.538866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.711 [2024-11-26 21:06:34.538914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.711 [2024-11-26 21:06:34.538932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:43.711 [2024-11-26 21:06:34.547994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.711 [2024-11-26 21:06:34.548047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.711 [2024-11-26 21:06:34.548067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:43.711 [2024-11-26 21:06:34.557675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.711 [2024-11-26 21:06:34.557722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.711 [2024-11-26 21:06:34.557743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:43.711 [2024-11-26 21:06:34.566871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.711 [2024-11-26 21:06:34.566920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:43.711 [2024-11-26 21:06:34.566938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:43.711 [2024-11-26 21:06:34.576586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.711 [2024-11-26 21:06:34.576624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.711 [2024-11-26 21:06:34.576645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:43.711 [2024-11-26 21:06:34.586390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.711 [2024-11-26 21:06:34.586438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.711 [2024-11-26 21:06:34.586459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:43.711 [2024-11-26 21:06:34.596165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.711 [2024-11-26 21:06:34.596202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.711 [2024-11-26 21:06:34.596224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:43.711 [2024-11-26 21:06:34.605845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.711 [2024-11-26 21:06:34.605879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:4 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.711 [2024-11-26 21:06:34.605897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:43.711 [2024-11-26 21:06:34.615524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.711 [2024-11-26 21:06:34.615560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.711 [2024-11-26 21:06:34.615581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:43.711 [2024-11-26 21:06:34.625085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.711 [2024-11-26 21:06:34.625121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.711 [2024-11-26 21:06:34.625141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:43.711 [2024-11-26 21:06:34.634247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.711 [2024-11-26 21:06:34.634286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.711 [2024-11-26 21:06:34.634306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:43.711 [2024-11-26 21:06:34.643871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.711 [2024-11-26 21:06:34.643905] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.711 [2024-11-26 21:06:34.643923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:43.970 [2024-11-26 21:06:34.653493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.970 [2024-11-26 21:06:34.653532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.970 [2024-11-26 21:06:34.653552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:43.970 [2024-11-26 21:06:34.662939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.970 [2024-11-26 21:06:34.662974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.970 [2024-11-26 21:06:34.662993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:43.970 [2024-11-26 21:06:34.672880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.970 [2024-11-26 21:06:34.672918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.970 [2024-11-26 21:06:34.672938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:43.970 [2024-11-26 21:06:34.682887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x12d0dc0) 00:25:43.970 [2024-11-26 21:06:34.682921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.970 [2024-11-26 21:06:34.682940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:43.970 [2024-11-26 21:06:34.692336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.970 [2024-11-26 21:06:34.692372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.970 [2024-11-26 21:06:34.692392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:43.970 [2024-11-26 21:06:34.702385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.970 [2024-11-26 21:06:34.702423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.970 [2024-11-26 21:06:34.702443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:43.970 [2024-11-26 21:06:34.712263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.970 [2024-11-26 21:06:34.712300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.970 [2024-11-26 21:06:34.712319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:43.970 [2024-11-26 21:06:34.721829] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.970 [2024-11-26 21:06:34.721871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.970 [2024-11-26 21:06:34.721890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:43.970 [2024-11-26 21:06:34.732097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.970 [2024-11-26 21:06:34.732134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.970 [2024-11-26 21:06:34.732155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:43.970 [2024-11-26 21:06:34.741794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.970 [2024-11-26 21:06:34.741828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.970 [2024-11-26 21:06:34.741846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:43.970 [2024-11-26 21:06:34.751827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.970 [2024-11-26 21:06:34.751868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.970 [2024-11-26 21:06:34.751887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:25:43.970 [2024-11-26 21:06:34.757352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.970 [2024-11-26 21:06:34.757389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.970 [2024-11-26 21:06:34.757409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:43.970 [2024-11-26 21:06:34.767942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.970 [2024-11-26 21:06:34.767974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.970 [2024-11-26 21:06:34.768008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:43.970 [2024-11-26 21:06:34.778604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.970 [2024-11-26 21:06:34.778640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.970 [2024-11-26 21:06:34.778661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:43.970 [2024-11-26 21:06:34.789008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.970 [2024-11-26 21:06:34.789045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.970 [2024-11-26 21:06:34.789065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:43.970 [2024-11-26 21:06:34.799735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.970 [2024-11-26 21:06:34.799769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.970 [2024-11-26 21:06:34.799801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:43.970 [2024-11-26 21:06:34.810055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.970 [2024-11-26 21:06:34.810092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.970 [2024-11-26 21:06:34.810113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:43.970 [2024-11-26 21:06:34.819924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.970 [2024-11-26 21:06:34.819955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.970 [2024-11-26 21:06:34.819988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:43.970 [2024-11-26 21:06:34.829624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.970 [2024-11-26 21:06:34.829661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.970 [2024-11-26 21:06:34.829681] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:43.970 [2024-11-26 21:06:34.839525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.970 [2024-11-26 21:06:34.839563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.970 [2024-11-26 21:06:34.839590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:43.970 [2024-11-26 21:06:34.849456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.970 [2024-11-26 21:06:34.849492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.970 [2024-11-26 21:06:34.849513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:43.970 [2024-11-26 21:06:34.859442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.970 [2024-11-26 21:06:34.859479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.970 [2024-11-26 21:06:34.859499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:43.970 [2024-11-26 21:06:34.869628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.970 [2024-11-26 21:06:34.869665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:43.970 [2024-11-26 21:06:34.869695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:43.971 [2024-11-26 21:06:34.880231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.971 [2024-11-26 21:06:34.880268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.971 [2024-11-26 21:06:34.880288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:43.971 [2024-11-26 21:06:34.890106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.971 [2024-11-26 21:06:34.890143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.971 [2024-11-26 21:06:34.890164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:43.971 [2024-11-26 21:06:34.900485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:43.971 [2024-11-26 21:06:34.900522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.971 [2024-11-26 21:06:34.900542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:44.230 [2024-11-26 21:06:34.910887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:44.230 [2024-11-26 21:06:34.910922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:2 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.230 [2024-11-26 21:06:34.910941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:44.230 [2024-11-26 21:06:34.920186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:44.230 [2024-11-26 21:06:34.920222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.230 [2024-11-26 21:06:34.920240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:44.230 [2024-11-26 21:06:34.931353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:44.230 [2024-11-26 21:06:34.931398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.230 [2024-11-26 21:06:34.931419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:44.230 [2024-11-26 21:06:34.939986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:44.230 [2024-11-26 21:06:34.940021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.230 [2024-11-26 21:06:34.940039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:44.230 [2024-11-26 21:06:34.949324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:44.230 [2024-11-26 21:06:34.949361] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.230 [2024-11-26 21:06:34.949381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:44.230 [2024-11-26 21:06:34.959269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:44.230 [2024-11-26 21:06:34.959306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.230 [2024-11-26 21:06:34.959327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:44.230 [2024-11-26 21:06:34.969487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:44.230 [2024-11-26 21:06:34.969524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.230 [2024-11-26 21:06:34.969545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:44.230 [2024-11-26 21:06:34.979081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:44.230 [2024-11-26 21:06:34.979115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.230 [2024-11-26 21:06:34.979134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:44.230 [2024-11-26 21:06:34.988163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x12d0dc0) 00:25:44.230 [2024-11-26 21:06:34.988195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.230 [2024-11-26 21:06:34.988213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:44.230 [2024-11-26 21:06:34.996656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:44.230 [2024-11-26 21:06:34.996711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.230 [2024-11-26 21:06:34.996732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:44.230 [2024-11-26 21:06:35.006034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:44.230 [2024-11-26 21:06:35.006067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.230 [2024-11-26 21:06:35.006085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:44.230 [2024-11-26 21:06:35.015136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:44.230 [2024-11-26 21:06:35.015170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.230 [2024-11-26 21:06:35.015204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:44.230 [2024-11-26 21:06:35.024723] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:44.230 [2024-11-26 21:06:35.024756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.230 [2024-11-26 21:06:35.024792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:44.230 [2024-11-26 21:06:35.033708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:44.230 [2024-11-26 21:06:35.033741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.230 [2024-11-26 21:06:35.033759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:44.230 [2024-11-26 21:06:35.042926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:44.230 [2024-11-26 21:06:35.042960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.230 [2024-11-26 21:06:35.042993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:44.230 [2024-11-26 21:06:35.052283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:44.230 [2024-11-26 21:06:35.052316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.230 [2024-11-26 21:06:35.052334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:25:44.230 [2024-11-26 21:06:35.061525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:44.230 [2024-11-26 21:06:35.061556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.230 [2024-11-26 21:06:35.061573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:44.230 [2024-11-26 21:06:35.070557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:44.230 [2024-11-26 21:06:35.070606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.230 [2024-11-26 21:06:35.070623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:44.230 [2024-11-26 21:06:35.077185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:44.230 [2024-11-26 21:06:35.077233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.231 [2024-11-26 21:06:35.077251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:44.231 [2024-11-26 21:06:35.084631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:44.231 [2024-11-26 21:06:35.084664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.231 [2024-11-26 21:06:35.084710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:44.231 [2024-11-26 21:06:35.093907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:44.231 [2024-11-26 21:06:35.093940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.231 [2024-11-26 21:06:35.093958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:44.231 [2024-11-26 21:06:35.102723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:44.231 [2024-11-26 21:06:35.102758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.231 [2024-11-26 21:06:35.102793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:44.231 [2024-11-26 21:06:35.110618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:44.231 [2024-11-26 21:06:35.110652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.231 [2024-11-26 21:06:35.110670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:44.231 [2024-11-26 21:06:35.118822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:44.231 [2024-11-26 21:06:35.118855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.231 [2024-11-26 21:06:35.118872] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:44.231 [2024-11-26 21:06:35.127429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:44.231 [2024-11-26 21:06:35.127462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.231 [2024-11-26 21:06:35.127495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:44.231 [2024-11-26 21:06:35.136246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:44.231 [2024-11-26 21:06:35.136297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.231 [2024-11-26 21:06:35.136315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:44.231 [2024-11-26 21:06:35.144878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:44.231 [2024-11-26 21:06:35.144912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.231 [2024-11-26 21:06:35.144931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:44.231 [2024-11-26 21:06:35.153621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:44.231 [2024-11-26 21:06:35.153655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:44.231 [2024-11-26 21:06:35.153673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:44.231 [2024-11-26 21:06:35.162248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:44.231 [2024-11-26 21:06:35.162287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.231 [2024-11-26 21:06:35.162305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:44.490 [2024-11-26 21:06:35.170714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:44.490 [2024-11-26 21:06:35.170748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.490 [2024-11-26 21:06:35.170766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:44.490 [2024-11-26 21:06:35.175387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:44.490 [2024-11-26 21:06:35.175421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.490 [2024-11-26 21:06:35.175439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:44.490 [2024-11-26 21:06:35.183941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:44.490 [2024-11-26 21:06:35.183978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:1 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.490 [2024-11-26 21:06:35.183997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:44.490 [2024-11-26 21:06:35.192204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:44.490 [2024-11-26 21:06:35.192236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.490 [2024-11-26 21:06:35.192255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:44.490 [2024-11-26 21:06:35.200803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:44.490 [2024-11-26 21:06:35.200860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.490 [2024-11-26 21:06:35.200878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:44.490 [2024-11-26 21:06:35.208876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:44.490 [2024-11-26 21:06:35.208909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.490 [2024-11-26 21:06:35.208928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:44.490 [2024-11-26 21:06:35.217511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:44.490 [2024-11-26 21:06:35.217543] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.490 [2024-11-26 21:06:35.217561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:44.490 [2024-11-26 21:06:35.226521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:44.490 [2024-11-26 21:06:35.226571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.490 [2024-11-26 21:06:35.226599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:44.490 [2024-11-26 21:06:35.235286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:44.490 [2024-11-26 21:06:35.235320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.491 [2024-11-26 21:06:35.235338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:44.491 [2024-11-26 21:06:35.244605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:44.491 [2024-11-26 21:06:35.244638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.491 [2024-11-26 21:06:35.244658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:44.491 [2024-11-26 21:06:35.253336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x12d0dc0) 00:25:44.491 [2024-11-26 21:06:35.253369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.491 [2024-11-26 21:06:35.253402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:44.491 [2024-11-26 21:06:35.262312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:44.491 [2024-11-26 21:06:35.262344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.491 [2024-11-26 21:06:35.262375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:44.491 [2024-11-26 21:06:35.270865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:44.491 [2024-11-26 21:06:35.270913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.491 [2024-11-26 21:06:35.270931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:44.491 [2024-11-26 21:06:35.279839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:44.491 [2024-11-26 21:06:35.279885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.491 [2024-11-26 21:06:35.279903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:44.491 [2024-11-26 21:06:35.288871] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:44.491 [2024-11-26 21:06:35.288905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.491 [2024-11-26 21:06:35.288923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:44.491 [2024-11-26 21:06:35.297895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:44.491 [2024-11-26 21:06:35.297929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.491 [2024-11-26 21:06:35.297961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:44.491 [2024-11-26 21:06:35.306828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:44.491 [2024-11-26 21:06:35.306885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.491 [2024-11-26 21:06:35.306905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:44.491 [2024-11-26 21:06:35.315657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:44.491 [2024-11-26 21:06:35.315699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.491 [2024-11-26 21:06:35.315719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:25:44.491 [2024-11-26 21:06:35.324458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:44.491 [2024-11-26 21:06:35.324493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.491 [2024-11-26 21:06:35.324511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:44.491 [2024-11-26 21:06:35.333102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:44.491 [2024-11-26 21:06:35.333152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.491 [2024-11-26 21:06:35.333170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:44.491 [2024-11-26 21:06:35.341919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:44.491 [2024-11-26 21:06:35.341961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.491 [2024-11-26 21:06:35.341980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:44.491 [2024-11-26 21:06:35.350954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:44.491 [2024-11-26 21:06:35.351002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.491 [2024-11-26 21:06:35.351021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:44.491 [2024-11-26 21:06:35.359892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:44.491 [2024-11-26 21:06:35.359926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.491 [2024-11-26 21:06:35.359944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:44.491 [2024-11-26 21:06:35.368489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:44.491 [2024-11-26 21:06:35.368521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.491 [2024-11-26 21:06:35.368539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:44.491 [2024-11-26 21:06:35.377169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:44.491 [2024-11-26 21:06:35.377218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.491 [2024-11-26 21:06:35.377236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:44.491 [2024-11-26 21:06:35.386115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0) 00:25:44.491 [2024-11-26 21:06:35.386148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.491 [2024-11-26 21:06:35.386182] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:44.491 3297.50 IOPS, 412.19 MiB/s [2024-11-26T20:06:35.429Z] [2024-11-26 21:06:35.396303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d0dc0)
00:25:44.491 [2024-11-26 21:06:35.396338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:44.491 [2024-11-26 21:06:35.396356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:44.491
00:25:44.491 Latency(us)
00:25:44.491 [2024-11-26T20:06:35.429Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:44.491 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:25:44.491 nvme0n1 : 2.00 3298.66 412.33 0.00 0.00 4844.43 728.18 13981.01
00:25:44.491 [2024-11-26T20:06:35.429Z] ===================================================================================================================
00:25:44.491 [2024-11-26T20:06:35.429Z] Total : 3298.66 412.33 0.00 0.00 4844.43 728.18 13981.01
00:25:44.491 {
00:25:44.491   "results": [
00:25:44.491     {
00:25:44.491       "job": "nvme0n1",
00:25:44.491       "core_mask": "0x2",
00:25:44.491       "workload": "randread",
00:25:44.491       "status": "finished",
00:25:44.491       "queue_depth": 16,
00:25:44.491       "io_size": 131072,
00:25:44.491       "runtime": 2.004149,
00:25:44.491       "iops": 3298.6569361858824,
00:25:44.491       "mibps": 412.3321170232353,
00:25:44.491       "io_failed": 0,
00:25:44.491       "io_timeout": 0,
00:25:44.491       "avg_latency_us": 4844.433010302695,
00:25:44.491       "min_latency_us": 728.1777777777778,
00:25:44.491       "max_latency_us": 13981.013333333334
00:25:44.491     }
00:25:44.491   ],
00:25:44.491   "core_count": 1
00:25:44.491 }
00:25:44.491 21:06:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:25:44.491 21:06:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:25:44.491 21:06:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:25:44.491 21:06:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:25:44.491 | .driver_specific
00:25:44.491 | .nvme_error
00:25:44.491 | .status_code
00:25:44.491 | .command_transient_transport_error'
00:25:45.058 21:06:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 214 > 0 ))
00:25:45.058 21:06:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 4078757
00:25:45.058 21:06:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 4078757 ']'
00:25:45.058 21:06:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 4078757
00:25:45.058 21:06:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:25:45.058 21:06:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:45.058 21:06:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4078757
00:25:45.058 21:06:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:25:45.058 21:06:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:25:45.058 21:06:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4078757'
00:25:45.058 killing process with pid 4078757
21:06:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 4078757
Received shutdown signal, test time was about 2.000000 seconds
00:25:45.058
00:25:45.058 Latency(us)
00:25:45.058 [2024-11-26T20:06:35.996Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:45.058 [2024-11-26T20:06:35.996Z] ===================================================================================================================
00:25:45.058 [2024-11-26T20:06:35.996Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:45.058 21:06:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 4078757
21:06:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
21:06:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
21:06:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
21:06:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
21:06:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
21:06:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=4079222
21:06:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
21:06:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 4079222 /var/tmp/bperf.sock
21:06:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 4079222 ']'
21:06:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
21:06:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
21:06:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
21:06:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
21:06:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:45.317 [2024-11-26 21:06:35.995539] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization...
00:25:45.317 [2024-11-26 21:06:35.995630] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4079222 ]
00:25:45.317 [2024-11-26 21:06:36.066275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:45.317 [2024-11-26 21:06:36.127908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:25:45.317 21:06:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:25:45.317 21:06:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:25:45.317 21:06:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:25:45.317 21:06:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:25:45.883 21:06:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:25:45.883 21:06:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:45.883 21:06:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:45.883 21:06:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:45.883 21:06:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:45.883 21:06:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:46.141 nvme0n1
00:25:46.141 21:06:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:25:46.141 21:06:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:46.141 21:06:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:46.141 21:06:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:46.141 21:06:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:25:46.141 21:06:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
Running I/O for 2 seconds... 00:25:46.141 [2024-11-26 21:06:37.067523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016eec408 00:25:46.141 [2024-11-26 21:06:37.068604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:46.141 [2024-11-26 21:06:37.068648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:46.400 [2024-11-26 21:06:37.079175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ef8a50 00:25:46.400 [2024-11-26 21:06:37.080311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:3756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:46.400 [2024-11-26 21:06:37.080343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:46.400 [2024-11-26 21:06:37.091904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ee12d8 00:25:46.400 [2024-11-26 21:06:37.093062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:46.400 [2024-11-26 21:06:37.093093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:46.400 [2024-11-26 21:06:37.104405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016eeaab8 00:25:46.400 [2024-11-26 21:06:37.105736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:18118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:46.400 [2024-11-26 21:06:37.105767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:46.400 [2024-11-26 21:06:37.117103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ee5a90 00:25:46.400 [2024-11-26 21:06:37.118606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:22925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:46.400 [2024-11-26 21:06:37.118636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:46.400 [2024-11-26 21:06:37.128006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016efc560 00:25:46.400 [2024-11-26 21:06:37.128913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:24072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:46.400 [2024-11-26 21:06:37.128943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:46.400 [2024-11-26 21:06:37.140272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ee6b70 00:25:46.400 [2024-11-26 21:06:37.141322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:10919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:46.400 [2024-11-26 21:06:37.141353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:46.400 [2024-11-26 21:06:37.152842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016efc128 00:25:46.400 [2024-11-26 21:06:37.154010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:46.400 [2024-11-26 21:06:37.154040] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:46.400 [2024-11-26 21:06:37.164032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016eee190 00:25:46.400 [2024-11-26 21:06:37.165243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:7730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:46.400 [2024-11-26 21:06:37.165274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:46.400 [2024-11-26 21:06:37.176513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016eecc78 00:25:46.400 [2024-11-26 21:06:37.177764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:23407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:46.400 [2024-11-26 21:06:37.177794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:46.400 [2024-11-26 21:06:37.188973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ee12d8 00:25:46.400 [2024-11-26 21:06:37.190449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:11900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:46.400 [2024-11-26 21:06:37.190478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:46.400 [2024-11-26 21:06:37.201486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ef1430 00:25:46.400 [2024-11-26 21:06:37.203059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:13572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:46.400 [2024-11-26 21:06:37.203089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:25:46.400 [2024-11-26 21:06:37.214001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016efe720
00:25:46.400 [2024-11-26 21:06:37.215772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:46.400 [2024-11-26 21:06:37.215802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:25:46.400 [2024-11-26 21:06:37.226530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ee01f8
00:25:46.400 [2024-11-26 21:06:37.228401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:19712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:46.400 [2024-11-26 21:06:37.228431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:25:46.400 [2024-11-26 21:06:37.234951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ede470
00:25:46.400 [2024-11-26 21:06:37.235766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:46.400 [2024-11-26 21:06:37.235802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:25:46.400 [2024-11-26 21:06:37.247329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ee6fa8
00:25:46.400 [2024-11-26 21:06:37.248312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:20714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:46.400 [2024-11-26 21:06:37.248356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:46.400 [2024-11-26 21:06:37.259813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ede038
00:25:46.400 [2024-11-26 21:06:37.260983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:17476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:46.400 [2024-11-26 21:06:37.261013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:25:46.400 [2024-11-26 21:06:37.272229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ee6738
00:25:46.400 [2024-11-26 21:06:37.273637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:17990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:46.400 [2024-11-26 21:06:37.273667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:46.400 [2024-11-26 21:06:37.283542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016eed920
00:25:46.400 [2024-11-26 21:06:37.284836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:46.400 [2024-11-26 21:06:37.284866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:25:46.400 [2024-11-26 21:06:37.296037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ef2948
00:25:46.400 [2024-11-26 21:06:37.297525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:10041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:46.400 [2024-11-26 21:06:37.297556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:25:46.400 [2024-11-26 21:06:37.308509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ef0788
00:25:46.400 [2024-11-26 21:06:37.310133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:11238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:46.400 [2024-11-26 21:06:37.310163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:25:46.400 [2024-11-26 21:06:37.320941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016eefae0
00:25:46.400 [2024-11-26 21:06:37.322761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:7743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:46.400 [2024-11-26 21:06:37.322793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:25:46.400 [2024-11-26 21:06:37.332225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ee7818
00:25:46.400 [2024-11-26 21:06:37.333645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:46.400 [2024-11-26 21:06:37.333677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:25:46.659 [2024-11-26 21:06:37.343363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ef0bc0
00:25:46.660 [2024-11-26 21:06:37.344736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:46.660 [2024-11-26 21:06:37.344766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:25:46.660 [2024-11-26 21:06:37.354638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ef3e60
00:25:46.660 [2024-11-26 21:06:37.355572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:46.660 [2024-11-26 21:06:37.355601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:46.660 [2024-11-26 21:06:37.366912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016eeb760
00:25:46.660 [2024-11-26 21:06:37.367672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:14683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:46.660 [2024-11-26 21:06:37.367711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:25:46.660 [2024-11-26 21:06:37.379366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ee38d0
00:25:46.660 [2024-11-26 21:06:37.380249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:1110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:46.660 [2024-11-26 21:06:37.380279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:46.660 [2024-11-26 21:06:37.391879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ef4298
00:25:46.660 [2024-11-26 21:06:37.392973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:4942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:46.660 [2024-11-26 21:06:37.393002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:25:46.660 [2024-11-26 21:06:37.403126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ee3d08
00:25:46.660 [2024-11-26 21:06:37.404944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:4960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:46.660 [2024-11-26 21:06:37.404973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:46.660 [2024-11-26 21:06:37.414409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016efef90
00:25:46.660 [2024-11-26 21:06:37.415342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:21365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:46.660 [2024-11-26 21:06:37.415372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:46.660 [2024-11-26 21:06:37.426885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ee95a0
00:25:46.660 [2024-11-26 21:06:37.427937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:14901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:46.660 [2024-11-26 21:06:37.427966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:46.660 [2024-11-26 21:06:37.438150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016eebfd0
00:25:46.660 [2024-11-26 21:06:37.439249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:25170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:46.660 [2024-11-26 21:06:37.439278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:25:46.660 [2024-11-26 21:06:37.450670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ef1430
00:25:46.660 [2024-11-26 21:06:37.451830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:3852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:46.660 [2024-11-26 21:06:37.451860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:46.660 [2024-11-26 21:06:37.463197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ef0788
00:25:46.660 [2024-11-26 21:06:37.464522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:7643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:46.660 [2024-11-26 21:06:37.464552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:25:46.660 [2024-11-26 21:06:37.475779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ef2510
00:25:46.660 [2024-11-26 21:06:37.477219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:8458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:46.660 [2024-11-26 21:06:37.477249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:46.660 [2024-11-26 21:06:37.488256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016eebfd0
00:25:46.660 [2024-11-26 21:06:37.489831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:14898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:46.660 [2024-11-26 21:06:37.489860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:25:46.660 [2024-11-26 21:06:37.500598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016eeaab8
00:25:46.660 [2024-11-26 21:06:37.502367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:20084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:46.660 [2024-11-26 21:06:37.502396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:46.660 [2024-11-26 21:06:37.513438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ee7c50
00:25:46.660 [2024-11-26 21:06:37.515410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:18764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:46.660 [2024-11-26 21:06:37.515439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:25:46.660 [2024-11-26 21:06:37.522276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ee5ec8
00:25:46.660 [2024-11-26 21:06:37.523190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:16861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:46.660 [2024-11-26 21:06:37.523219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:25:46.660 [2024-11-26 21:06:37.533795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ee88f8
00:25:46.660 [2024-11-26 21:06:37.534636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:46.660 [2024-11-26 21:06:37.534666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:46.660 [2024-11-26 21:06:37.546384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ef81e0
00:25:46.660 [2024-11-26 21:06:37.547483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:46.660 [2024-11-26 21:06:37.547519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:25:46.660 [2024-11-26 21:06:37.558994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016eedd58
00:25:46.660 [2024-11-26 21:06:37.560198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:8179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:46.660 [2024-11-26 21:06:37.560228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:46.660 [2024-11-26 21:06:37.571554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ef0bc0
00:25:46.660 [2024-11-26 21:06:37.572927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:46.660 [2024-11-26 21:06:37.572956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:25:46.660 [2024-11-26 21:06:37.584198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016eed4e8
00:25:46.660 [2024-11-26 21:06:37.585654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:15312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:46.660 [2024-11-26 21:06:37.585683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:46.917 [2024-11-26 21:06:37.597266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ef81e0
00:25:46.917 [2024-11-26 21:06:37.599006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:46.917 [2024-11-26 21:06:37.599036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:25:46.917 [2024-11-26 21:06:37.610134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ee23b8
00:25:46.917 [2024-11-26 21:06:37.611877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:46.917 [2024-11-26 21:06:37.611907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:46.917 [2024-11-26 21:06:37.622623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ee01f8
00:25:46.917 [2024-11-26 21:06:37.624528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:5207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:46.917 [2024-11-26 21:06:37.624557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:25:46.917 [2024-11-26 21:06:37.631104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ef6020
00:25:46.917 [2024-11-26 21:06:37.631953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:46.917 [2024-11-26 21:06:37.631982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:25:46.917 [2024-11-26 21:06:37.643380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ee8d30
00:25:46.917 [2024-11-26 21:06:37.644313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:9298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:46.917 [2024-11-26 21:06:37.644343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:46.917 [2024-11-26 21:06:37.654568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016eeb760
00:25:46.917 [2024-11-26 21:06:37.655478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:11981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:46.917 [2024-11-26 21:06:37.655507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:46.917 [2024-11-26 21:06:37.667141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ef4b08
00:25:46.917 [2024-11-26 21:06:37.668142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:21695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:46.917 [2024-11-26 21:06:37.668171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:25:46.917 [2024-11-26 21:06:37.679577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ef1430
00:25:46.917 [2024-11-26 21:06:37.680763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:14100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:46.917 [2024-11-26 21:06:37.680793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:46.917 [2024-11-26 21:06:37.692026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ef0788
00:25:46.917 [2024-11-26 21:06:37.693314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:20376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:46.917 [2024-11-26 21:06:37.693344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:25:46.917 [2024-11-26 21:06:37.704619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ee38d0
00:25:46.917 [2024-11-26 21:06:37.706065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:14475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:46.917 [2024-11-26 21:06:37.706094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:46.917 [2024-11-26 21:06:37.717072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ef4b08
00:25:46.917 [2024-11-26 21:06:37.718728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:15532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:46.917 [2024-11-26 21:06:37.718761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:25:46.917 [2024-11-26 21:06:37.729549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ee1f80
00:25:46.917 [2024-11-26 21:06:37.731306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:10920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:46.917 [2024-11-26 21:06:37.731336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:46.917 [2024-11-26 21:06:37.742069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016efd640
00:25:46.917 [2024-11-26 21:06:37.743998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:46.917 [2024-11-26 21:06:37.744027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:25:46.918 [2024-11-26 21:06:37.750458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016efc128
00:25:46.918 [2024-11-26 21:06:37.751323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:23702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:46.918 [2024-11-26 21:06:37.751352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:25:46.918 [2024-11-26 21:06:37.762941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ee99d8
00:25:46.918 [2024-11-26 21:06:37.763949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:9686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:46.918 [2024-11-26 21:06:37.763979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:46.918 [2024-11-26 21:06:37.775466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ee6fa8
00:25:46.918 [2024-11-26 21:06:37.776693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:7657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:46.918 [2024-11-26 21:06:37.776724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:25:46.918 [2024-11-26 21:06:37.788066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ef6890
00:25:46.918 [2024-11-26 21:06:37.789389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:46.918 [2024-11-26 21:06:37.789419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:46.918 [2024-11-26 21:06:37.800529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ef6020
00:25:46.918 [2024-11-26 21:06:37.802030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:11852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:46.918 [2024-11-26 21:06:37.802062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:46.918 [2024-11-26 21:06:37.811844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ee7c50
00:25:46.918 [2024-11-26 21:06:37.813283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:4329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:46.918 [2024-11-26 21:06:37.813312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:46.918 [2024-11-26 21:06:37.824488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016eecc78
00:25:46.918 [2024-11-26 21:06:37.826198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:14383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:46.918 [2024-11-26 21:06:37.826228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:25:46.918 [2024-11-26 21:06:37.837104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016efd208
00:25:46.918 [2024-11-26 21:06:37.838883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:21110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:46.918 [2024-11-26 21:06:37.838913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:46.918 [2024-11-26 21:06:37.849736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016eec408
00:25:46.918 [2024-11-26 21:06:37.851719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:12923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:46.918 [2024-11-26 21:06:37.851748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:25:47.176 [2024-11-26 21:06:37.858219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ede8a8
00:25:47.176 [2024-11-26 21:06:37.859142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:6013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.176 [2024-11-26 21:06:37.859177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:25:47.176 [2024-11-26 21:06:37.869567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016edf988
00:25:47.177 [2024-11-26 21:06:37.870425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:4413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.177 [2024-11-26 21:06:37.870455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:47.177 [2024-11-26 21:06:37.882158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ee5ec8
00:25:47.177 [2024-11-26 21:06:37.883170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:22962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.177 [2024-11-26 21:06:37.883199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:25:47.177 [2024-11-26 21:06:37.894751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ee27f0
00:25:47.177 [2024-11-26 21:06:37.895885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:7518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.177 [2024-11-26 21:06:37.895914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:47.177 [2024-11-26 21:06:37.907220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ef0ff8
00:25:47.177 [2024-11-26 21:06:37.908597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.177 [2024-11-26 21:06:37.908627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:25:47.177 [2024-11-26 21:06:37.919753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ee01f8
00:25:47.177 [2024-11-26 21:06:37.921189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:14154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.177 [2024-11-26 21:06:37.921219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:47.177 [2024-11-26 21:06:37.932184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ee5ec8
00:25:47.177 [2024-11-26 21:06:37.933827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:13625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.177 [2024-11-26 21:06:37.933857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:25:47.177 [2024-11-26 21:06:37.944553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ee12d8
00:25:47.177 [2024-11-26 21:06:37.946376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:8357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.177 [2024-11-26 21:06:37.946406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:47.177 [2024-11-26 21:06:37.957175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ee3060
00:25:47.177 [2024-11-26 21:06:37.959108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:24126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.177 [2024-11-26 21:06:37.959139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:25:47.177 [2024-11-26 21:06:37.965542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016efb8b8
00:25:47.177 [2024-11-26 21:06:37.966480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:19707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.177 [2024-11-26 21:06:37.966510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:25:47.177 [2024-11-26 21:06:37.976933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ee4578
00:25:47.177 [2024-11-26 21:06:37.977806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:7482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.177 [2024-11-26 21:06:37.977835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:47.177 [2024-11-26 21:06:37.989411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016eecc78
00:25:47.177 [2024-11-26 21:06:37.990407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.177 [2024-11-26 21:06:37.990437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:25:47.177 [2024-11-26 21:06:38.001983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ef0788
00:25:47.177 [2024-11-26 21:06:38.003145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:6222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.177 [2024-11-26 21:06:38.003174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:47.177 [2024-11-26 21:06:38.014408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ef1430
00:25:47.177 [2024-11-26 21:06:38.015728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.177 [2024-11-26 21:06:38.015758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:25:47.177 [2024-11-26 21:06:38.027606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016eedd58
00:25:47.177 [2024-11-26 21:06:38.029186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.177 [2024-11-26 21:06:38.029232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:47.177 [2024-11-26 21:06:38.041260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016eecc78
00:25:47.177 [2024-11-26 21:06:38.043135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:12867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.177 [2024-11-26 21:06:38.043168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:25:47.177 [2024-11-26 21:06:38.054900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016efd208
00:25:47.177 21134.00 IOPS, 82.55 MiB/s [2024-11-26T20:06:38.115Z] [2024-11-26 21:06:38.056899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.177 [2024-11-26 21:06:38.056928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:47.177 [2024-11-26 21:06:38.068430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ee0630
00:25:47.177 [2024-11-26 21:06:38.070525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:24590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.177 [2024-11-26 21:06:38.070559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:25:47.177 [2024-11-26 21:06:38.077652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ef57b0
00:25:47.177 [2024-11-26 21:06:38.078584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.177 [2024-11-26 21:06:38.078630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:25:47.177 [2024-11-26 21:06:38.091274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ee95a0
00:25:47.177 [2024-11-26 21:06:38.092384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:7461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.177 [2024-11-26 21:06:38.092414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:47.177 [2024-11-26 21:06:38.103658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ef4b08
00:25:47.177 [2024-11-26 21:06:38.104766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.177 [2024-11-26 21:06:38.104797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:25:47.437 [2024-11-26 21:06:38.117313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ef1430
00:25:47.437 [2024-11-26 21:06:38.118628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.437 [2024-11-26 21:06:38.118658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:47.437 [2024-11-26 21:06:38.131194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ef0788
00:25:47.437 [2024-11-26 21:06:38.132597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:16157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.437 [2024-11-26 21:06:38.132644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:25:47.437 [2024-11-26 21:06:38.144789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016edf550
00:25:47.437 [2024-11-26 21:06:38.146363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:16768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.437 [2024-11-26 21:06:38.146408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:47.437 [2024-11-26 21:06:38.158418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ef4b08 00:25:47.437 [2024-11-26 21:06:38.160200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:10302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.437 [2024-11-26 21:06:38.160244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:47.437 [2024-11-26 21:06:38.171961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ee12d8 00:25:47.437 [2024-11-26 21:06:38.173961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:13412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.437 [2024-11-26 21:06:38.174012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:47.437 [2024-11-26 21:06:38.185568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016efe720 00:25:47.437 [2024-11-26 21:06:38.187675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:1976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.437 [2024-11-26 21:06:38.187732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:47.437 [2024-11-26 21:06:38.194805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016eeee38 00:25:47.437 [2024-11-26 21:06:38.195719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 
lba:25138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.437 [2024-11-26 21:06:38.195769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:47.437 [2024-11-26 21:06:38.208461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ee99d8 00:25:47.437 [2024-11-26 21:06:38.209580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:22244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.437 [2024-11-26 21:06:38.209609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:47.437 [2024-11-26 21:06:38.222084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016efb480 00:25:47.437 [2024-11-26 21:06:38.223371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:4939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.437 [2024-11-26 21:06:38.223400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:47.437 [2024-11-26 21:06:38.234265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ef8e88 00:25:47.437 [2024-11-26 21:06:38.235516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:25303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.437 [2024-11-26 21:06:38.235544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:47.437 [2024-11-26 21:06:38.247800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ef0bc0 00:25:47.437 [2024-11-26 21:06:38.249199] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:17288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.437 [2024-11-26 21:06:38.249244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:47.437 [2024-11-26 21:06:38.261430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ee49b0 00:25:47.437 [2024-11-26 21:06:38.263047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:5014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.437 [2024-11-26 21:06:38.263080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:47.437 [2024-11-26 21:06:38.275107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016eeb760 00:25:47.437 [2024-11-26 21:06:38.276885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:9959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.437 [2024-11-26 21:06:38.276928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:47.437 [2024-11-26 21:06:38.288633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016efdeb0 00:25:47.437 [2024-11-26 21:06:38.290540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:25160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.437 [2024-11-26 21:06:38.290585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:47.437 [2024-11-26 21:06:38.302180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016eee5c8 
00:25:47.437 [2024-11-26 21:06:38.304289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:24896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.437 [2024-11-26 21:06:38.304335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:47.437 [2024-11-26 21:06:38.311409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016edf988 00:25:47.437 [2024-11-26 21:06:38.312336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:10848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.437 [2024-11-26 21:06:38.312381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:47.437 [2024-11-26 21:06:38.323699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ee6b70 00:25:47.437 [2024-11-26 21:06:38.324623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:13550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.437 [2024-11-26 21:06:38.324667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:47.437 [2024-11-26 21:06:38.337391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ef4b08 00:25:47.437 [2024-11-26 21:06:38.338489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:13775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.437 [2024-11-26 21:06:38.338522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:47.437 [2024-11-26 21:06:38.350913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x188bd50) with pdu=0x200016ef1430 00:25:47.437 [2024-11-26 21:06:38.352179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:5255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.438 [2024-11-26 21:06:38.352224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:47.438 [2024-11-26 21:06:38.364479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ef0788 00:25:47.438 [2024-11-26 21:06:38.365946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:6200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.438 [2024-11-26 21:06:38.365989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:47.696 [2024-11-26 21:06:38.378074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016eec840 00:25:47.696 [2024-11-26 21:06:38.379673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:10117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.696 [2024-11-26 21:06:38.379727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:47.696 [2024-11-26 21:06:38.391708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ef4b08 00:25:47.697 [2024-11-26 21:06:38.393465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:5405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.697 [2024-11-26 21:06:38.393510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:47.697 [2024-11-26 21:06:38.405258] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ee1b48 00:25:47.697 [2024-11-26 21:06:38.407207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:12794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.697 [2024-11-26 21:06:38.407236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:47.697 [2024-11-26 21:06:38.418841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ef81e0 00:25:47.697 [2024-11-26 21:06:38.420938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:17676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.697 [2024-11-26 21:06:38.420981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:47.697 [2024-11-26 21:06:38.428016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ede038 00:25:47.697 [2024-11-26 21:06:38.428996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:13121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.697 [2024-11-26 21:06:38.429024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:47.697 [2024-11-26 21:06:38.441646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ee99d8 00:25:47.697 [2024-11-26 21:06:38.442765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:24019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.697 [2024-11-26 21:06:38.442794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:25:47.697 [2024-11-26 21:06:38.453964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016eecc78 00:25:47.697 [2024-11-26 21:06:38.455062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:17320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.697 [2024-11-26 21:06:38.455108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:47.697 [2024-11-26 21:06:38.467484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ef0788 00:25:47.697 [2024-11-26 21:06:38.468753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:11698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.697 [2024-11-26 21:06:38.468781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:47.697 [2024-11-26 21:06:38.481125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ef1430 00:25:47.697 [2024-11-26 21:06:38.482532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:16680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.697 [2024-11-26 21:06:38.482565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:47.697 [2024-11-26 21:06:38.494719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ef35f0 00:25:47.697 [2024-11-26 21:06:38.496301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:17795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.697 [2024-11-26 21:06:38.496347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:94 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:47.697 [2024-11-26 21:06:38.508425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016eecc78 00:25:47.697 [2024-11-26 21:06:38.510231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:16142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.697 [2024-11-26 21:06:38.510260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:47.697 [2024-11-26 21:06:38.522164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016efdeb0 00:25:47.697 [2024-11-26 21:06:38.524134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:16885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.697 [2024-11-26 21:06:38.524186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:47.697 [2024-11-26 21:06:38.535723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ee5220 00:25:47.697 [2024-11-26 21:06:38.537881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:10415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.697 [2024-11-26 21:06:38.537909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:47.697 [2024-11-26 21:06:38.544868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ee2c28 00:25:47.697 [2024-11-26 21:06:38.545822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:8110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.697 [2024-11-26 21:06:38.545865] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:47.697 [2024-11-26 21:06:38.559629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016eee190 00:25:47.697 [2024-11-26 21:06:38.561134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.697 [2024-11-26 21:06:38.561179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:47.697 [2024-11-26 21:06:38.573094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ee12d8 00:25:47.697 [2024-11-26 21:06:38.574926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:6796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.697 [2024-11-26 21:06:38.574955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:47.697 [2024-11-26 21:06:38.586854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ef3e60 00:25:47.697 [2024-11-26 21:06:38.588802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:3518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.697 [2024-11-26 21:06:38.588830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:47.697 [2024-11-26 21:06:38.600389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ef7100 00:25:47.697 [2024-11-26 21:06:38.602511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:20423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.697 [2024-11-26 21:06:38.602558] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:47.697 [2024-11-26 21:06:38.609641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016eed920 00:25:47.697 [2024-11-26 21:06:38.610539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:18982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.697 [2024-11-26 21:06:38.610584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:47.697 [2024-11-26 21:06:38.621895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ef6458 00:25:47.697 [2024-11-26 21:06:38.622815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:1829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.697 [2024-11-26 21:06:38.622844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.981 [2024-11-26 21:06:38.636440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016efb8b8 00:25:47.981 [2024-11-26 21:06:38.637592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:16038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.981 [2024-11-26 21:06:38.637628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:47.981 [2024-11-26 21:06:38.649879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016eed4e8 00:25:47.981 [2024-11-26 21:06:38.651123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:22317 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:47.981 [2024-11-26 21:06:38.651170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:47.981 [2024-11-26 21:06:38.662152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ee0a68 00:25:47.981 [2024-11-26 21:06:38.663396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:21509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.981 [2024-11-26 21:06:38.663424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:47.981 [2024-11-26 21:06:38.676079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ee0630 00:25:47.981 [2024-11-26 21:06:38.677516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:5207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.981 [2024-11-26 21:06:38.677561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:47.981 [2024-11-26 21:06:38.689912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016eea248 00:25:47.981 [2024-11-26 21:06:38.691536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:21040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.981 [2024-11-26 21:06:38.691564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:47.981 [2024-11-26 21:06:38.703540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ede470 00:25:47.981 [2024-11-26 21:06:38.705384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 
nsid:1 lba:23349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.981 [2024-11-26 21:06:38.705413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:47.981 [2024-11-26 21:06:38.717385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016eefae0 00:25:47.981 [2024-11-26 21:06:38.719349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:24560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.981 [2024-11-26 21:06:38.719394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:47.981 [2024-11-26 21:06:38.731095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ef57b0 00:25:47.981 [2024-11-26 21:06:38.733193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:12714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.981 [2024-11-26 21:06:38.733239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:47.981 [2024-11-26 21:06:38.740359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016eec840 00:25:47.981 [2024-11-26 21:06:38.741307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.981 [2024-11-26 21:06:38.741353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:47.981 [2024-11-26 21:06:38.753067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ee8088 00:25:47.981 [2024-11-26 21:06:38.754000] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:12061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.981 [2024-11-26 21:06:38.754044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.981 [2024-11-26 21:06:38.767721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ef7da8 00:25:47.981 [2024-11-26 21:06:38.768904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:22167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.981 [2024-11-26 21:06:38.768934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:47.981 [2024-11-26 21:06:38.779839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ee7c50 00:25:47.981 [2024-11-26 21:06:38.780944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.981 [2024-11-26 21:06:38.780987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:47.981 [2024-11-26 21:06:38.793401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ef1ca0 00:25:47.981 [2024-11-26 21:06:38.794652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.981 [2024-11-26 21:06:38.794702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:47.981 [2024-11-26 21:06:38.807022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ef20d8 
00:25:47.981 [2024-11-26 21:06:38.808443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:5655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.981 [2024-11-26 21:06:38.808488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:47.981 [2024-11-26 21:06:38.820591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ef92c0 00:25:47.981 [2024-11-26 21:06:38.822155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:12310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.981 [2024-11-26 21:06:38.822200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:47.981 [2024-11-26 21:06:38.834130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ee7c50 00:25:47.981 [2024-11-26 21:06:38.835943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:2449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.981 [2024-11-26 21:06:38.835985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:47.981 [2024-11-26 21:06:38.847677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ef2948 00:25:47.981 [2024-11-26 21:06:38.849577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:7388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.981 [2024-11-26 21:06:38.849607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:47.981 [2024-11-26 21:06:38.861102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x188bd50) with pdu=0x200016efda78 00:25:47.981 [2024-11-26 21:06:38.863218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:21830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.981 [2024-11-26 21:06:38.863247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:47.981 [2024-11-26 21:06:38.870315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016eeee38 00:25:47.981 [2024-11-26 21:06:38.871223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:20988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.981 [2024-11-26 21:06:38.871269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:47.981 [2024-11-26 21:06:38.882674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016efb8b8 00:25:47.982 [2024-11-26 21:06:38.883565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.982 [2024-11-26 21:06:38.883610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.982 [2024-11-26 21:06:38.896192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ee1710 00:25:47.982 [2024-11-26 21:06:38.897244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:16736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.982 [2024-11-26 21:06:38.897289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:47.982 [2024-11-26 21:06:38.909774] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ee88f8 00:25:47.982 [2024-11-26 21:06:38.911027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.982 [2024-11-26 21:06:38.911059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.240 [2024-11-26 21:06:38.924288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ee9168 00:25:48.240 [2024-11-26 21:06:38.925768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:2374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.240 [2024-11-26 21:06:38.925799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:48.240 [2024-11-26 21:06:38.937660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ee95a0 00:25:48.240 [2024-11-26 21:06:38.939261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:15843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.240 [2024-11-26 21:06:38.939290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:48.240 [2024-11-26 21:06:38.949937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ef8a50 00:25:48.240 [2024-11-26 21:06:38.951514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:13042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.240 [2024-11-26 21:06:38.951559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:25:48.240 [2024-11-26 21:06:38.963448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016eedd58 00:25:48.240 [2024-11-26 21:06:38.965196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:24184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.240 [2024-11-26 21:06:38.965240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:48.240 [2024-11-26 21:06:38.976945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016efc998 00:25:48.240 [2024-11-26 21:06:38.978941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:8289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.240 [2024-11-26 21:06:38.978996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.240 [2024-11-26 21:06:38.990536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ee6b70 00:25:48.240 [2024-11-26 21:06:38.992616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:13251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.240 [2024-11-26 21:06:38.992662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:48.240 [2024-11-26 21:06:38.999766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016eeb760 00:25:48.240 [2024-11-26 21:06:39.000664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.240 [2024-11-26 21:06:39.000718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:4 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:48.240 [2024-11-26 21:06:39.012049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ee3498 00:25:48.240 [2024-11-26 21:06:39.012947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:12908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.240 [2024-11-26 21:06:39.012991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.240 [2024-11-26 21:06:39.025423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016eff3c8 00:25:48.240 [2024-11-26 21:06:39.026446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:3370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.240 [2024-11-26 21:06:39.026475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:48.240 [2024-11-26 21:06:39.038921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ee0630 00:25:48.240 [2024-11-26 21:06:39.040189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.240 [2024-11-26 21:06:39.040236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.240 [2024-11-26 21:06:39.052488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bd50) with pdu=0x200016ee0a68 00:25:48.240 [2024-11-26 21:06:39.054002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:2368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:48.241 [2024-11-26 21:06:39.054031] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:48.241 20423.00 IOPS, 79.78 MiB/s 00:25:48.241 Latency(us) 00:25:48.241 [2024-11-26T20:06:39.179Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:48.241 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:48.241 nvme0n1 : 2.01 20412.91 79.74 0.00 0.00 6260.55 2487.94 13689.74 00:25:48.241 [2024-11-26T20:06:39.179Z] =================================================================================================================== 00:25:48.241 [2024-11-26T20:06:39.179Z] Total : 20412.91 79.74 0.00 0.00 6260.55 2487.94 13689.74 00:25:48.241 { 00:25:48.241 "results": [ 00:25:48.241 { 00:25:48.241 "job": "nvme0n1", 00:25:48.241 "core_mask": "0x2", 00:25:48.241 "workload": "randwrite", 00:25:48.241 "status": "finished", 00:25:48.241 "queue_depth": 128, 00:25:48.241 "io_size": 4096, 00:25:48.241 "runtime": 2.007259, 00:25:48.241 "iops": 20412.91133829765, 00:25:48.241 "mibps": 79.7379349152252, 00:25:48.241 "io_failed": 0, 00:25:48.241 "io_timeout": 0, 00:25:48.241 "avg_latency_us": 6260.554680492959, 00:25:48.241 "min_latency_us": 2487.9407407407407, 00:25:48.241 "max_latency_us": 13689.742222222223 00:25:48.241 } 00:25:48.241 ], 00:25:48.241 "core_count": 1 00:25:48.241 } 00:25:48.241 21:06:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:48.241 21:06:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:48.241 21:06:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:48.241 21:06:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:48.241 | .driver_specific 00:25:48.241 | .nvme_error 
00:25:48.241 | .status_code 00:25:48.241 | .command_transient_transport_error' 00:25:48.499 21:06:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 160 > 0 )) 00:25:48.499 21:06:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 4079222 00:25:48.499 21:06:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 4079222 ']' 00:25:48.499 21:06:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 4079222 00:25:48.499 21:06:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:25:48.499 21:06:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:48.499 21:06:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4079222 00:25:48.499 21:06:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:48.499 21:06:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:48.499 21:06:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4079222' 00:25:48.499 killing process with pid 4079222 00:25:48.499 21:06:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 4079222 00:25:48.499 Received shutdown signal, test time was about 2.000000 seconds 00:25:48.499 00:25:48.499 Latency(us) 00:25:48.499 [2024-11-26T20:06:39.437Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:48.499 [2024-11-26T20:06:39.437Z] =================================================================================================================== 00:25:48.499 [2024-11-26T20:06:39.437Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
00:25:48.499 21:06:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 4079222 00:25:48.756 21:06:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:25:48.756 21:06:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:48.756 21:06:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:25:48.756 21:06:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:25:48.756 21:06:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:25:48.756 21:06:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=4079628 00:25:48.756 21:06:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:25:48.756 21:06:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 4079628 /var/tmp/bperf.sock 00:25:48.756 21:06:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 4079628 ']' 00:25:48.756 21:06:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:48.756 21:06:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:48.756 21:06:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:48.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:25:48.756 21:06:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:48.756 21:06:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:48.756 [2024-11-26 21:06:39.651719] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:25:48.756 [2024-11-26 21:06:39.651794] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4079628 ] 00:25:48.756 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:48.756 Zero copy mechanism will not be used. 00:25:49.014 [2024-11-26 21:06:39.734717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:49.014 [2024-11-26 21:06:39.807761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:49.273 21:06:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:49.273 21:06:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:25:49.273 21:06:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:49.273 21:06:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:49.531 21:06:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:49.531 21:06:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.531 21:06:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@10 -- # set +x 00:25:49.531 21:06:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.531 21:06:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:49.531 21:06:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:50.098 nvme0n1 00:25:50.098 21:06:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:25:50.098 21:06:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.098 21:06:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:50.098 21:06:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.098 21:06:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:50.098 21:06:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:50.098 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:50.098 Zero copy mechanism will not be used. 00:25:50.098 Running I/O for 2 seconds... 
00:25:50.098 [2024-11-26 21:06:40.899190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:50.098 [2024-11-26 21:06:40.899343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.099 [2024-11-26 21:06:40.899388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:50.099 [2024-11-26 21:06:40.907155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:50.099 [2024-11-26 21:06:40.907379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.099 [2024-11-26 21:06:40.907417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:50.099 [2024-11-26 21:06:40.915445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:50.099 [2024-11-26 21:06:40.915609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.099 [2024-11-26 21:06:40.915642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:50.099 [2024-11-26 21:06:40.923289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:50.099 [2024-11-26 21:06:40.923478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.099 [2024-11-26 21:06:40.923511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:50.099 [2024-11-26 21:06:40.931614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:50.099 [2024-11-26 21:06:40.931867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.099 [2024-11-26 21:06:40.931898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:50.099 [2024-11-26 21:06:40.939935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:50.099 [2024-11-26 21:06:40.940149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.099 [2024-11-26 21:06:40.940180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:50.099 [2024-11-26 21:06:40.948071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:50.099 [2024-11-26 21:06:40.948225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.099 [2024-11-26 21:06:40.948257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:50.099 [2024-11-26 21:06:40.956326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:50.099 [2024-11-26 21:06:40.956565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.099 [2024-11-26 21:06:40.956596] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:50.099 [2024-11-26 21:06:40.964068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:50.099 [2024-11-26 21:06:40.964335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.099 [2024-11-26 21:06:40.964367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:50.099 [2024-11-26 21:06:40.972359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:50.099 [2024-11-26 21:06:40.972573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.099 [2024-11-26 21:06:40.972610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:50.099 [2024-11-26 21:06:40.979781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:50.099 [2024-11-26 21:06:40.979901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.099 [2024-11-26 21:06:40.979932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:50.099 [2024-11-26 21:06:40.986932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:50.099 [2024-11-26 21:06:40.987052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.099 [2024-11-26 21:06:40.987081] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:50.099 [2024-11-26 21:06:40.995163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:50.099 [2024-11-26 21:06:40.995310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.099 [2024-11-26 21:06:40.995343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:50.099 [2024-11-26 21:06:41.004152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:50.099 [2024-11-26 21:06:41.004339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.099 [2024-11-26 21:06:41.004372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:50.099 [2024-11-26 21:06:41.013206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:50.099 [2024-11-26 21:06:41.013425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.099 [2024-11-26 21:06:41.013458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:50.099 [2024-11-26 21:06:41.022006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:50.099 [2024-11-26 21:06:41.022128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:50.099 [2024-11-26 21:06:41.022161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:50.099 [2024-11-26 21:06:41.029515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:50.099 [2024-11-26 21:06:41.029622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.099 [2024-11-26 21:06:41.029655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:50.358 [2024-11-26 21:06:41.036832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:50.358 [2024-11-26 21:06:41.037024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.358 [2024-11-26 21:06:41.037058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:50.358 [2024-11-26 21:06:41.043701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:50.358 [2024-11-26 21:06:41.043827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.358 [2024-11-26 21:06:41.043857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:50.358 [2024-11-26 21:06:41.051349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:50.358 [2024-11-26 21:06:41.051572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.358 [2024-11-26 21:06:41.051605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:50.358 [2024-11-26 21:06:41.060129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:50.358 [2024-11-26 21:06:41.060328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.358 [2024-11-26 21:06:41.060361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:50.358 [2024-11-26 21:06:41.069332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:50.358 [2024-11-26 21:06:41.069470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.358 [2024-11-26 21:06:41.069503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:50.358 [2024-11-26 21:06:41.077900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:50.358 [2024-11-26 21:06:41.078122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.358 [2024-11-26 21:06:41.078154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:50.358 [2024-11-26 21:06:41.087187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:50.358 [2024-11-26 21:06:41.087406] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.358 [2024-11-26 21:06:41.087438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:50.358 [2024-11-26 21:06:41.096217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:50.358 [2024-11-26 21:06:41.096374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.358 [2024-11-26 21:06:41.096407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:50.358 [2024-11-26 21:06:41.104644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:50.358 [2024-11-26 21:06:41.104789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.358 [2024-11-26 21:06:41.104818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:50.358 [2024-11-26 21:06:41.113580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:50.358 [2024-11-26 21:06:41.113796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.358 [2024-11-26 21:06:41.113826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:50.358 [2024-11-26 21:06:41.122739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 
00:25:50.358 [2024-11-26 21:06:41.122958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.358 [2024-11-26 21:06:41.123005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:50.358 [2024-11-26 21:06:41.130773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.358 [2024-11-26 21:06:41.130955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.358 [2024-11-26 21:06:41.130984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:50.358 [2024-11-26 21:06:41.139651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.358 [2024-11-26 21:06:41.139898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.358 [2024-11-26 21:06:41.139927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:50.358 [2024-11-26 21:06:41.148628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.358 [2024-11-26 21:06:41.148795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.358 [2024-11-26 21:06:41.148826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:50.358 [2024-11-26 21:06:41.157703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.358 [2024-11-26 21:06:41.157935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.358 [2024-11-26 21:06:41.157964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:50.358 [2024-11-26 21:06:41.166740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.358 [2024-11-26 21:06:41.166841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.358 [2024-11-26 21:06:41.166870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:50.358 [2024-11-26 21:06:41.175082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.358 [2024-11-26 21:06:41.175234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.358 [2024-11-26 21:06:41.175267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:50.358 [2024-11-26 21:06:41.184184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.358 [2024-11-26 21:06:41.184405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.358 [2024-11-26 21:06:41.184438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:50.358 [2024-11-26 21:06:41.193105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.358 [2024-11-26 21:06:41.193328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.358 [2024-11-26 21:06:41.193365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:50.358 [2024-11-26 21:06:41.202110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.359 [2024-11-26 21:06:41.202330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.359 [2024-11-26 21:06:41.202363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:50.359 [2024-11-26 21:06:41.211180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.359 [2024-11-26 21:06:41.211404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.359 [2024-11-26 21:06:41.211437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:50.359 [2024-11-26 21:06:41.219780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.359 [2024-11-26 21:06:41.219964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.359 [2024-11-26 21:06:41.219993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:50.359 [2024-11-26 21:06:41.228212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.359 [2024-11-26 21:06:41.228398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.359 [2024-11-26 21:06:41.228430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:50.359 [2024-11-26 21:06:41.237343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.359 [2024-11-26 21:06:41.237603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.359 [2024-11-26 21:06:41.237636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:50.359 [2024-11-26 21:06:41.246425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.359 [2024-11-26 21:06:41.246647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.359 [2024-11-26 21:06:41.246680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:50.359 [2024-11-26 21:06:41.254449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.359 [2024-11-26 21:06:41.254653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.359 [2024-11-26 21:06:41.254695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:50.359 [2024-11-26 21:06:41.262836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.359 [2024-11-26 21:06:41.262999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.359 [2024-11-26 21:06:41.263029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:50.359 [2024-11-26 21:06:41.270974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.359 [2024-11-26 21:06:41.271188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.359 [2024-11-26 21:06:41.271218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:50.359 [2024-11-26 21:06:41.278290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.359 [2024-11-26 21:06:41.278418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.359 [2024-11-26 21:06:41.278449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:50.359 [2024-11-26 21:06:41.285346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.359 [2024-11-26 21:06:41.285466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.359 [2024-11-26 21:06:41.285496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:50.359 [2024-11-26 21:06:41.291591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.359 [2024-11-26 21:06:41.291704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.359 [2024-11-26 21:06:41.291734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:50.617 [2024-11-26 21:06:41.297459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.617 [2024-11-26 21:06:41.297595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.617 [2024-11-26 21:06:41.297624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:50.617 [2024-11-26 21:06:41.302947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.617 [2024-11-26 21:06:41.303248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.618 [2024-11-26 21:06:41.303278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:50.618 [2024-11-26 21:06:41.308697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.618 [2024-11-26 21:06:41.309062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.618 [2024-11-26 21:06:41.309107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:50.618 [2024-11-26 21:06:41.314875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.618 [2024-11-26 21:06:41.315195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.618 [2024-11-26 21:06:41.315225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:50.618 [2024-11-26 21:06:41.321197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.618 [2024-11-26 21:06:41.321502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.618 [2024-11-26 21:06:41.321532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:50.618 [2024-11-26 21:06:41.327186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.618 [2024-11-26 21:06:41.327494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.618 [2024-11-26 21:06:41.327524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:50.618 [2024-11-26 21:06:41.332846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.618 [2024-11-26 21:06:41.333158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.618 [2024-11-26 21:06:41.333187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:50.618 [2024-11-26 21:06:41.339074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.618 [2024-11-26 21:06:41.339382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.618 [2024-11-26 21:06:41.339411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:50.618 [2024-11-26 21:06:41.345314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.618 [2024-11-26 21:06:41.345642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.618 [2024-11-26 21:06:41.345672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:50.618 [2024-11-26 21:06:41.351759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.618 [2024-11-26 21:06:41.352058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.618 [2024-11-26 21:06:41.352088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:50.618 [2024-11-26 21:06:41.358984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.618 [2024-11-26 21:06:41.359436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.618 [2024-11-26 21:06:41.359466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:50.618 [2024-11-26 21:06:41.366244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.618 [2024-11-26 21:06:41.366575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.618 [2024-11-26 21:06:41.366605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:50.618 [2024-11-26 21:06:41.372430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.618 [2024-11-26 21:06:41.372750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.618 [2024-11-26 21:06:41.372780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:50.618 [2024-11-26 21:06:41.378140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.618 [2024-11-26 21:06:41.378449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.618 [2024-11-26 21:06:41.378484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:50.618 [2024-11-26 21:06:41.384894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.618 [2024-11-26 21:06:41.385279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.618 [2024-11-26 21:06:41.385309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:50.618 [2024-11-26 21:06:41.392357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.618 [2024-11-26 21:06:41.392802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.618 [2024-11-26 21:06:41.392832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:50.618 [2024-11-26 21:06:41.399861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.618 [2024-11-26 21:06:41.400235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.618 [2024-11-26 21:06:41.400264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:50.618 [2024-11-26 21:06:41.407616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.618 [2024-11-26 21:06:41.408019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.618 [2024-11-26 21:06:41.408048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:50.618 [2024-11-26 21:06:41.415912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.618 [2024-11-26 21:06:41.416264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.618 [2024-11-26 21:06:41.416310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:50.618 [2024-11-26 21:06:41.422876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.618 [2024-11-26 21:06:41.423244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.618 [2024-11-26 21:06:41.423274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:50.618 [2024-11-26 21:06:41.430759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.618 [2024-11-26 21:06:41.431088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.618 [2024-11-26 21:06:41.431117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:50.618 [2024-11-26 21:06:41.438610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.618 [2024-11-26 21:06:41.438936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.618 [2024-11-26 21:06:41.438966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:50.618 [2024-11-26 21:06:41.446239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.618 [2024-11-26 21:06:41.446578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.618 [2024-11-26 21:06:41.446609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:50.618 [2024-11-26 21:06:41.454500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.618 [2024-11-26 21:06:41.454990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.618 [2024-11-26 21:06:41.455045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:50.618 [2024-11-26 21:06:41.462999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.618 [2024-11-26 21:06:41.463423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.618 [2024-11-26 21:06:41.463454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:50.618 [2024-11-26 21:06:41.471293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.618 [2024-11-26 21:06:41.471732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.618 [2024-11-26 21:06:41.471778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:50.618 [2024-11-26 21:06:41.479863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.618 [2024-11-26 21:06:41.480372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.618 [2024-11-26 21:06:41.480421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:50.618 [2024-11-26 21:06:41.488301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.618 [2024-11-26 21:06:41.488719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.618 [2024-11-26 21:06:41.488764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:50.618 [2024-11-26 21:06:41.496367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.618 [2024-11-26 21:06:41.496901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.618 [2024-11-26 21:06:41.496931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:50.618 [2024-11-26 21:06:41.504664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.618 [2024-11-26 21:06:41.505131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.618 [2024-11-26 21:06:41.505179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:50.618 [2024-11-26 21:06:41.512890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.618 [2024-11-26 21:06:41.513292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.618 [2024-11-26 21:06:41.513322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:50.618 [2024-11-26 21:06:41.521502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.618 [2024-11-26 21:06:41.521972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.618 [2024-11-26 21:06:41.522004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:50.618 [2024-11-26 21:06:41.529873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.618 [2024-11-26 21:06:41.530313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.618 [2024-11-26 21:06:41.530349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:50.618 [2024-11-26 21:06:41.538551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.618 [2024-11-26 21:06:41.538942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.618 [2024-11-26 21:06:41.538972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:50.618 [2024-11-26 21:06:41.547112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.618 [2024-11-26 21:06:41.547533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.618 [2024-11-26 21:06:41.547566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:50.877 [2024-11-26 21:06:41.555911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.877 [2024-11-26 21:06:41.556350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.877 [2024-11-26 21:06:41.556384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:50.877 [2024-11-26 21:06:41.564320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.877 [2024-11-26 21:06:41.564768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.877 [2024-11-26 21:06:41.564798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:50.877 [2024-11-26 21:06:41.572347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.877 [2024-11-26 21:06:41.572698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.877 [2024-11-26 21:06:41.572749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:50.877 [2024-11-26 21:06:41.579171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.877 [2024-11-26 21:06:41.579514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.877 [2024-11-26 21:06:41.579546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:50.877 [2024-11-26 21:06:41.586258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.877 [2024-11-26 21:06:41.586604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.877 [2024-11-26 21:06:41.586643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:50.877 [2024-11-26 21:06:41.593662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.877 [2024-11-26 21:06:41.594003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.877 [2024-11-26 21:06:41.594049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:50.877 [2024-11-26 21:06:41.600573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.877 [2024-11-26 21:06:41.600916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.877 [2024-11-26 21:06:41.600945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:50.877 [2024-11-26 21:06:41.607335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.877 [2024-11-26 21:06:41.607674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.877 [2024-11-26 21:06:41.607735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:50.877 [2024-11-26 21:06:41.613650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.877 [2024-11-26 21:06:41.614008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.877 [2024-11-26 21:06:41.614055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:50.877 [2024-11-26 21:06:41.619982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.877 [2024-11-26 21:06:41.620334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.877 [2024-11-26 21:06:41.620366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:50.877 [2024-11-26 21:06:41.626159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.877 [2024-11-26 21:06:41.626498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.877 [2024-11-26 21:06:41.626530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:50.877 [2024-11-26 21:06:41.632324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.877 [2024-11-26 21:06:41.632661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.877 [2024-11-26 21:06:41.632700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:50.877 [2024-11-26 21:06:41.638519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.877 [2024-11-26 21:06:41.638886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.877 [2024-11-26 21:06:41.638916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:50.877 [2024-11-26 21:06:41.644891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.877 [2024-11-26 21:06:41.645224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.877 [2024-11-26 21:06:41.645254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:50.877 [2024-11-26 21:06:41.650662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.877 [2024-11-26 21:06:41.650996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.877 [2024-11-26 21:06:41.651042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:50.877 [2024-11-26 21:06:41.656979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.877 [2024-11-26 21:06:41.657325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.877 [2024-11-26 21:06:41.657355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:50.877 [2024-11-26 21:06:41.663475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.877 [2024-11-26 21:06:41.663801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.877 [2024-11-26 21:06:41.663831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:50.877 [2024-11-26 21:06:41.669944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.877 [2024-11-26 21:06:41.670284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.877 [2024-11-26 21:06:41.670313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:50.877 [2024-11-26 21:06:41.676262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.877 [2024-11-26 21:06:41.676565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.877 [2024-11-26 21:06:41.676595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:50.877 [2024-11-26 21:06:41.682430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.877 [2024-11-26 21:06:41.682749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.877 [2024-11-26 21:06:41.682778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:50.877 [2024-11-26 21:06:41.688082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.877 [2024-11-26 21:06:41.688394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.877 [2024-11-26 21:06:41.688424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:50.877 [2024-11-26 21:06:41.694223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.877 [2024-11-26 21:06:41.694544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.877 [2024-11-26 21:06:41.694573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:50.877 [2024-11-26 21:06:41.699570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.877 [2024-11-26 21:06:41.699902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.877 [2024-11-26 21:06:41.699932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:50.877 [2024-11-26 21:06:41.705325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.877 [2024-11-26 21:06:41.705576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.878 [2024-11-26 21:06:41.705606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:50.878 [2024-11-26 21:06:41.711221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.878 [2024-11-26 21:06:41.711504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.878 [2024-11-26 21:06:41.711533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:50.878 [2024-11-26 21:06:41.717314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:50.878 [2024-11-26 21:06:41.717576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.878 [2024-11-26 21:06:41.717605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0
dnr:0 00:25:50.878 [2024-11-26 21:06:41.723373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:50.878 [2024-11-26 21:06:41.723705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.878 [2024-11-26 21:06:41.723736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:50.878 [2024-11-26 21:06:41.729181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:50.878 [2024-11-26 21:06:41.729475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.878 [2024-11-26 21:06:41.729505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:50.878 [2024-11-26 21:06:41.734877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:50.878 [2024-11-26 21:06:41.735204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.878 [2024-11-26 21:06:41.735233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:50.878 [2024-11-26 21:06:41.740381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:50.878 [2024-11-26 21:06:41.740713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.878 [2024-11-26 21:06:41.740741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:50.878 [2024-11-26 21:06:41.746116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:50.878 [2024-11-26 21:06:41.746407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.878 [2024-11-26 21:06:41.746442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:50.878 [2024-11-26 21:06:41.751717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:50.878 [2024-11-26 21:06:41.752017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.878 [2024-11-26 21:06:41.752046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:50.878 [2024-11-26 21:06:41.757532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:50.878 [2024-11-26 21:06:41.757849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.878 [2024-11-26 21:06:41.757879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:50.878 [2024-11-26 21:06:41.763136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:50.878 [2024-11-26 21:06:41.763440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.878 [2024-11-26 21:06:41.763470] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:50.878 [2024-11-26 21:06:41.769039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:50.878 [2024-11-26 21:06:41.769329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.878 [2024-11-26 21:06:41.769358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:50.878 [2024-11-26 21:06:41.775033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:50.878 [2024-11-26 21:06:41.775337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.878 [2024-11-26 21:06:41.775366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:50.878 [2024-11-26 21:06:41.781518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:50.878 [2024-11-26 21:06:41.781855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.878 [2024-11-26 21:06:41.781886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:50.878 [2024-11-26 21:06:41.787658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:50.878 [2024-11-26 21:06:41.787958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:50.878 [2024-11-26 21:06:41.787987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:50.878 [2024-11-26 21:06:41.793465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:50.878 [2024-11-26 21:06:41.793771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.878 [2024-11-26 21:06:41.793801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:50.878 [2024-11-26 21:06:41.798902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:50.878 [2024-11-26 21:06:41.799235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.878 [2024-11-26 21:06:41.799264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:50.878 [2024-11-26 21:06:41.804380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:50.878 [2024-11-26 21:06:41.804749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.878 [2024-11-26 21:06:41.804778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:50.878 [2024-11-26 21:06:41.810054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:50.878 [2024-11-26 21:06:41.810354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.878 [2024-11-26 21:06:41.810384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:51.137 [2024-11-26 21:06:41.815545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.137 [2024-11-26 21:06:41.815845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.137 [2024-11-26 21:06:41.815876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:51.137 [2024-11-26 21:06:41.820982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.137 [2024-11-26 21:06:41.821272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.137 [2024-11-26 21:06:41.821301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:51.137 [2024-11-26 21:06:41.826849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.137 [2024-11-26 21:06:41.827149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.137 [2024-11-26 21:06:41.827179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:51.137 [2024-11-26 21:06:41.832895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.137 [2024-11-26 21:06:41.833211] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.137 [2024-11-26 21:06:41.833240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:51.137 [2024-11-26 21:06:41.839425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.137 [2024-11-26 21:06:41.839828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.137 [2024-11-26 21:06:41.839857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:51.137 [2024-11-26 21:06:41.847293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.137 [2024-11-26 21:06:41.847638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.137 [2024-11-26 21:06:41.847667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:51.137 [2024-11-26 21:06:41.855248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.137 [2024-11-26 21:06:41.855637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.137 [2024-11-26 21:06:41.855667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:51.137 [2024-11-26 21:06:41.861777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 
00:25:51.137 [2024-11-26 21:06:41.862068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.137 [2024-11-26 21:06:41.862098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:51.137 [2024-11-26 21:06:41.867536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.137 [2024-11-26 21:06:41.867824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.138 [2024-11-26 21:06:41.867854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:51.138 [2024-11-26 21:06:41.872922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.138 [2024-11-26 21:06:41.873215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.138 [2024-11-26 21:06:41.873244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:51.138 [2024-11-26 21:06:41.878652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.138 [2024-11-26 21:06:41.878950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.138 [2024-11-26 21:06:41.878980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:51.138 [2024-11-26 21:06:41.884136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.138 [2024-11-26 21:06:41.884460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.138 [2024-11-26 21:06:41.884491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:51.138 [2024-11-26 21:06:41.890231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.138 [2024-11-26 21:06:41.890582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.138 [2024-11-26 21:06:41.890611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:51.138 [2024-11-26 21:06:41.896087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.138 [2024-11-26 21:06:41.896405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.138 [2024-11-26 21:06:41.896434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:51.138 4294.00 IOPS, 536.75 MiB/s [2024-11-26T20:06:42.076Z] [2024-11-26 21:06:41.903779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.138 [2024-11-26 21:06:41.904002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.138 [2024-11-26 21:06:41.904036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:25:51.138 [2024-11-26 21:06:41.908607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.138 [2024-11-26 21:06:41.908833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.138 [2024-11-26 21:06:41.908862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:51.138 [2024-11-26 21:06:41.913310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.138 [2024-11-26 21:06:41.913559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.138 [2024-11-26 21:06:41.913587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:51.138 [2024-11-26 21:06:41.918349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.138 [2024-11-26 21:06:41.918534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.138 [2024-11-26 21:06:41.918562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:51.138 [2024-11-26 21:06:41.923283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.138 [2024-11-26 21:06:41.923495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.138 [2024-11-26 21:06:41.923523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:51.138 [2024-11-26 21:06:41.928207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.138 [2024-11-26 21:06:41.928427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.138 [2024-11-26 21:06:41.928455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:51.138 [2024-11-26 21:06:41.933178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.138 [2024-11-26 21:06:41.933378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.138 [2024-11-26 21:06:41.933406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:51.138 [2024-11-26 21:06:41.937748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.138 [2024-11-26 21:06:41.937977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.138 [2024-11-26 21:06:41.938005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:51.138 [2024-11-26 21:06:41.942431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.138 [2024-11-26 21:06:41.942641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.138 [2024-11-26 21:06:41.942670] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:51.138 [2024-11-26 21:06:41.947459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.138 [2024-11-26 21:06:41.947674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.138 [2024-11-26 21:06:41.947709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:51.138 [2024-11-26 21:06:41.952280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.138 [2024-11-26 21:06:41.952512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.138 [2024-11-26 21:06:41.952540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:51.138 [2024-11-26 21:06:41.956815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.138 [2024-11-26 21:06:41.957021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.138 [2024-11-26 21:06:41.957049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:51.138 [2024-11-26 21:06:41.961615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.138 [2024-11-26 21:06:41.961825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:51.138 [2024-11-26 21:06:41.961854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:51.138 [2024-11-26 21:06:41.966413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.138 [2024-11-26 21:06:41.966645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.138 [2024-11-26 21:06:41.966674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:51.138 [2024-11-26 21:06:41.971156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.138 [2024-11-26 21:06:41.971359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.138 [2024-11-26 21:06:41.971388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:51.138 [2024-11-26 21:06:41.975844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.138 [2024-11-26 21:06:41.976056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.138 [2024-11-26 21:06:41.976085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:51.138 [2024-11-26 21:06:41.980484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.138 [2024-11-26 21:06:41.980738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.138 [2024-11-26 21:06:41.980766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:51.138 [2024-11-26 21:06:41.985133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.138 [2024-11-26 21:06:41.985371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.138 [2024-11-26 21:06:41.985399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:51.138 [2024-11-26 21:06:41.989798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.138 [2024-11-26 21:06:41.990008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.138 [2024-11-26 21:06:41.990036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:51.138 [2024-11-26 21:06:41.994309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.138 [2024-11-26 21:06:41.994518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.138 [2024-11-26 21:06:41.994547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:51.138 [2024-11-26 21:06:41.998879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.138 [2024-11-26 21:06:41.999098] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.138 [2024-11-26 21:06:41.999126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:51.138 [2024-11-26 21:06:42.003734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.138 [2024-11-26 21:06:42.003952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.139 [2024-11-26 21:06:42.003981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:51.139 [2024-11-26 21:06:42.008361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.139 [2024-11-26 21:06:42.008571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.139 [2024-11-26 21:06:42.008599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:51.139 [2024-11-26 21:06:42.013061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.139 [2024-11-26 21:06:42.013264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.139 [2024-11-26 21:06:42.013293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:51.139 [2024-11-26 21:06:42.017748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 
00:25:51.139 [2024-11-26 21:06:42.017958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.139 [2024-11-26 21:06:42.017987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:51.139 [2024-11-26 21:06:42.022405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.139 [2024-11-26 21:06:42.022632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.139 [2024-11-26 21:06:42.022661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:51.139 [2024-11-26 21:06:42.027205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.139 [2024-11-26 21:06:42.027409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.139 [2024-11-26 21:06:42.027442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:51.139 [2024-11-26 21:06:42.031907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.139 [2024-11-26 21:06:42.032123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.139 [2024-11-26 21:06:42.032152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:51.139 [2024-11-26 21:06:42.036537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.139 [2024-11-26 21:06:42.036774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.139 [2024-11-26 21:06:42.036803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:51.139 [2024-11-26 21:06:42.041605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.139 [2024-11-26 21:06:42.041819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.139 [2024-11-26 21:06:42.041847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:51.139 [2024-11-26 21:06:42.046307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.139 [2024-11-26 21:06:42.046519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.139 [2024-11-26 21:06:42.046547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:51.139 [2024-11-26 21:06:42.050876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.139 [2024-11-26 21:06:42.051119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.139 [2024-11-26 21:06:42.051147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:51.139 [2024-11-26 21:06:42.055434] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.139 [2024-11-26 21:06:42.055641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.139 [2024-11-26 21:06:42.055669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:51.139 [2024-11-26 21:06:42.060005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.139 [2024-11-26 21:06:42.060227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.139 [2024-11-26 21:06:42.060256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:51.139 [2024-11-26 21:06:42.065235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.139 [2024-11-26 21:06:42.065449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.139 [2024-11-26 21:06:42.065478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:51.139 [2024-11-26 21:06:42.069981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.139 [2024-11-26 21:06:42.070187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.139 [2024-11-26 21:06:42.070216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:25:51.398 [2024-11-26 21:06:42.074915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.398 [2024-11-26 21:06:42.075137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.398 [2024-11-26 21:06:42.075166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:51.398 [2024-11-26 21:06:42.079835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.398 [2024-11-26 21:06:42.080047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.398 [2024-11-26 21:06:42.080076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:51.398 [2024-11-26 21:06:42.084488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.398 [2024-11-26 21:06:42.084736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.398 [2024-11-26 21:06:42.084765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:51.398 [2024-11-26 21:06:42.089128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.398 [2024-11-26 21:06:42.089337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.398 [2024-11-26 21:06:42.089365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:51.398 [2024-11-26 21:06:42.093805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.398 [2024-11-26 21:06:42.094011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.398 [2024-11-26 21:06:42.094039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:51.398 [2024-11-26 21:06:42.098675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.398 [2024-11-26 21:06:42.098885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.398 [2024-11-26 21:06:42.098913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:51.398 [2024-11-26 21:06:42.103500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.398 [2024-11-26 21:06:42.103736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.398 [2024-11-26 21:06:42.103765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:51.398 [2024-11-26 21:06:42.108528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.398 [2024-11-26 21:06:42.108753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.398 [2024-11-26 21:06:42.108782] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:51.398 [2024-11-26 21:06:42.114088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.398 [2024-11-26 21:06:42.114290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.398 [2024-11-26 21:06:42.114318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:51.398 [2024-11-26 21:06:42.118970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.398 [2024-11-26 21:06:42.119186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.398 [2024-11-26 21:06:42.119215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:51.398 [2024-11-26 21:06:42.124199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.398 [2024-11-26 21:06:42.124398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.398 [2024-11-26 21:06:42.124426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:51.398 [2024-11-26 21:06:42.128951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.398 [2024-11-26 21:06:42.129149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:51.398 [2024-11-26 21:06:42.129178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:51.398 [2024-11-26 21:06:42.133618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.398 [2024-11-26 21:06:42.133833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.398 [2024-11-26 21:06:42.133861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:51.399 [2024-11-26 21:06:42.138281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.399 [2024-11-26 21:06:42.138491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.399 [2024-11-26 21:06:42.138519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:51.399 [2024-11-26 21:06:42.143520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.399 [2024-11-26 21:06:42.143850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.399 [2024-11-26 21:06:42.143879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:51.399 [2024-11-26 21:06:42.149840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.399 [2024-11-26 21:06:42.150177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.399 [2024-11-26 21:06:42.150205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:51.399 [2024-11-26 21:06:42.157119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.399 [2024-11-26 21:06:42.157359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.399 [2024-11-26 21:06:42.157397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:51.399 [2024-11-26 21:06:42.164617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.399 [2024-11-26 21:06:42.164963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.399 [2024-11-26 21:06:42.164993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:51.399 [2024-11-26 21:06:42.171880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.399 [2024-11-26 21:06:42.172082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.399 [2024-11-26 21:06:42.172111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:51.399 [2024-11-26 21:06:42.178439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.399 [2024-11-26 21:06:42.178788] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.399 [2024-11-26 21:06:42.178821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:51.399 [2024-11-26 21:06:42.185839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.399 [2024-11-26 21:06:42.186106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.399 [2024-11-26 21:06:42.186134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:51.399 [2024-11-26 21:06:42.193378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.399 [2024-11-26 21:06:42.193783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.399 [2024-11-26 21:06:42.193811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:51.399 [2024-11-26 21:06:42.201080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.399 [2024-11-26 21:06:42.201316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.399 [2024-11-26 21:06:42.201345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:51.399 [2024-11-26 21:06:42.208317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 
00:25:51.399 [2024-11-26 21:06:42.208569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.399 [2024-11-26 21:06:42.208597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:51.399 [2024-11-26 21:06:42.214721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.399 [2024-11-26 21:06:42.215025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.399 [2024-11-26 21:06:42.215053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:51.399 [2024-11-26 21:06:42.222135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.399 [2024-11-26 21:06:42.222366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.399 [2024-11-26 21:06:42.222394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:51.399 [2024-11-26 21:06:42.228864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.399 [2024-11-26 21:06:42.229058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.399 [2024-11-26 21:06:42.229087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:51.399 [2024-11-26 21:06:42.234834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.399 [2024-11-26 21:06:42.235146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.399 [2024-11-26 21:06:42.235175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:51.399 [2024-11-26 21:06:42.241706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.399 [2024-11-26 21:06:42.242006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.399 [2024-11-26 21:06:42.242034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:51.399 [2024-11-26 21:06:42.249118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.399 [2024-11-26 21:06:42.249475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.399 [2024-11-26 21:06:42.249505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:51.399 [2024-11-26 21:06:42.256428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.399 [2024-11-26 21:06:42.256752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.399 [2024-11-26 21:06:42.256782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:51.399 [2024-11-26 21:06:42.262502] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.399 [2024-11-26 21:06:42.262736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.399 [2024-11-26 21:06:42.262765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:51.399 [2024-11-26 21:06:42.267431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.399 [2024-11-26 21:06:42.267649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.399 [2024-11-26 21:06:42.267677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:51.399 [2024-11-26 21:06:42.272365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.399 [2024-11-26 21:06:42.272589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.399 [2024-11-26 21:06:42.272617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:51.399 [2024-11-26 21:06:42.277141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.399 [2024-11-26 21:06:42.277351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.399 [2024-11-26 21:06:42.277379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:25:51.399 [2024-11-26 21:06:42.282017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.399 [2024-11-26 21:06:42.282222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.399 [2024-11-26 21:06:42.282250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:51.399 [2024-11-26 21:06:42.286571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.399 [2024-11-26 21:06:42.286782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.399 [2024-11-26 21:06:42.286810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:51.399 [2024-11-26 21:06:42.291457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.399 [2024-11-26 21:06:42.291694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.399 [2024-11-26 21:06:42.291722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:51.399 [2024-11-26 21:06:42.296073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.399 [2024-11-26 21:06:42.296303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.399 [2024-11-26 21:06:42.296331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:51.399 [2024-11-26 21:06:42.300947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.399 [2024-11-26 21:06:42.301182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.399 [2024-11-26 21:06:42.301210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:51.399 [2024-11-26 21:06:42.305728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.400 [2024-11-26 21:06:42.305956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.400 [2024-11-26 21:06:42.305984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:51.400 [2024-11-26 21:06:42.310275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.400 [2024-11-26 21:06:42.310480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.400 [2024-11-26 21:06:42.310508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:51.400 [2024-11-26 21:06:42.315186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.400 [2024-11-26 21:06:42.315392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.400 [2024-11-26 21:06:42.315427] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:51.400 [2024-11-26 21:06:42.320185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.400 [2024-11-26 21:06:42.320401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.400 [2024-11-26 21:06:42.320429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:51.400 [2024-11-26 21:06:42.324903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.400 [2024-11-26 21:06:42.325143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.400 [2024-11-26 21:06:42.325171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:51.400 [2024-11-26 21:06:42.329892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.400 [2024-11-26 21:06:42.330129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.400 [2024-11-26 21:06:42.330157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:51.400 [2024-11-26 21:06:42.334844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.400 [2024-11-26 21:06:42.335049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:51.400 [2024-11-26 21:06:42.335078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:51.659 [2024-11-26 21:06:42.339572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.659 [2024-11-26 21:06:42.339814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.659 [2024-11-26 21:06:42.339843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:51.659 [2024-11-26 21:06:42.344262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.659 [2024-11-26 21:06:42.344510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.659 [2024-11-26 21:06:42.344537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:51.659 [2024-11-26 21:06:42.348915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.659 [2024-11-26 21:06:42.349152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.659 [2024-11-26 21:06:42.349180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:51.659 [2024-11-26 21:06:42.353667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.659 [2024-11-26 21:06:42.353885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13536 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.659 [2024-11-26 21:06:42.353914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:51.659 [2024-11-26 21:06:42.358350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.659 [2024-11-26 21:06:42.358545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.659 [2024-11-26 21:06:42.358574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:51.659 [2024-11-26 21:06:42.363253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.659 [2024-11-26 21:06:42.363454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.659 [2024-11-26 21:06:42.363482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:51.659 [2024-11-26 21:06:42.367879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.659 [2024-11-26 21:06:42.368087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.659 [2024-11-26 21:06:42.368116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:51.659 [2024-11-26 21:06:42.372495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8 00:25:51.659 [2024-11-26 21:06:42.372707] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.659 [2024-11-26 21:06:42.372737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:51.659 [2024-11-26 21:06:42.377501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:51.659 [2024-11-26 21:06:42.377743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.659 [2024-11-26 21:06:42.377772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:51.659 [2024-11-26 21:06:42.382794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:51.659 [2024-11-26 21:06:42.383013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.659 [2024-11-26 21:06:42.383041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:51.659 [2024-11-26 21:06:42.388098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:51.659 [2024-11-26 21:06:42.388305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.659 [2024-11-26 21:06:42.388333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:51.659 [2024-11-26 21:06:42.393616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:51.659 [2024-11-26 21:06:42.393863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.659 [2024-11-26 21:06:42.393892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:51.659 [2024-11-26 21:06:42.399125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:51.659 [2024-11-26 21:06:42.399362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.659 [2024-11-26 21:06:42.399390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:51.659 [2024-11-26 21:06:42.404757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:51.659 [2024-11-26 21:06:42.404953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.659 [2024-11-26 21:06:42.404981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:51.659 [2024-11-26 21:06:42.410503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:51.659 [2024-11-26 21:06:42.410712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.659 [2024-11-26 21:06:42.410741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:51.659 [2024-11-26 21:06:42.416239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:51.659 [2024-11-26 21:06:42.416475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.659 [2024-11-26 21:06:42.416503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:51.659 [2024-11-26 21:06:42.422496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:51.659 [2024-11-26 21:06:42.422721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.659 [2024-11-26 21:06:42.422751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:51.659 [2024-11-26 21:06:42.429695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:51.659 [2024-11-26 21:06:42.429866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.659 [2024-11-26 21:06:42.429895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:51.659 [2024-11-26 21:06:42.437041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:51.659 [2024-11-26 21:06:42.437370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.659 [2024-11-26 21:06:42.437398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:51.659 [2024-11-26 21:06:42.444452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:51.659 [2024-11-26 21:06:42.444704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.659 [2024-11-26 21:06:42.444733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:51.659 [2024-11-26 21:06:42.452040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:51.659 [2024-11-26 21:06:42.452258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.659 [2024-11-26 21:06:42.452287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:51.659 [2024-11-26 21:06:42.459376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:51.659 [2024-11-26 21:06:42.459601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.659 [2024-11-26 21:06:42.459636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:51.659 [2024-11-26 21:06:42.466824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:51.660 [2024-11-26 21:06:42.467085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.660 [2024-11-26 21:06:42.467114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:51.660 [2024-11-26 21:06:42.474224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:51.660 [2024-11-26 21:06:42.474539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.660 [2024-11-26 21:06:42.474568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:51.660 [2024-11-26 21:06:42.481439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:51.660 [2024-11-26 21:06:42.481712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.660 [2024-11-26 21:06:42.481741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:51.660 [2024-11-26 21:06:42.488706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:51.660 [2024-11-26 21:06:42.488910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.660 [2024-11-26 21:06:42.488939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:51.660 [2024-11-26 21:06:42.495993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:51.660 [2024-11-26 21:06:42.496213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.660 [2024-11-26 21:06:42.496241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:51.660 [2024-11-26 21:06:42.503443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:51.660 [2024-11-26 21:06:42.503712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.660 [2024-11-26 21:06:42.503742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:51.660 [2024-11-26 21:06:42.510818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:51.660 [2024-11-26 21:06:42.511105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.660 [2024-11-26 21:06:42.511135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:51.660 [2024-11-26 21:06:42.517648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:51.660 [2024-11-26 21:06:42.517978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.660 [2024-11-26 21:06:42.518008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:51.660 [2024-11-26 21:06:42.525146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:51.660 [2024-11-26 21:06:42.525469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.660 [2024-11-26 21:06:42.525498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:51.660 [2024-11-26 21:06:42.532832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:51.660 [2024-11-26 21:06:42.533132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.660 [2024-11-26 21:06:42.533162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:51.660 [2024-11-26 21:06:42.540436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:51.660 [2024-11-26 21:06:42.540624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.660 [2024-11-26 21:06:42.540655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:51.660 [2024-11-26 21:06:42.547324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:51.660 [2024-11-26 21:06:42.547662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.660 [2024-11-26 21:06:42.547702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:51.660 [2024-11-26 21:06:42.554631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:51.660 [2024-11-26 21:06:42.554874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.660 [2024-11-26 21:06:42.554903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:51.660 [2024-11-26 21:06:42.562246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:51.660 [2024-11-26 21:06:42.562469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.660 [2024-11-26 21:06:42.562500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:51.660 [2024-11-26 21:06:42.569351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:51.660 [2024-11-26 21:06:42.569648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.660 [2024-11-26 21:06:42.569680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:51.660 [2024-11-26 21:06:42.577011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:51.660 [2024-11-26 21:06:42.577309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.660 [2024-11-26 21:06:42.577340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:51.660 [2024-11-26 21:06:42.584366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:51.660 [2024-11-26 21:06:42.584585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.660 [2024-11-26 21:06:42.584615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:51.660 [2024-11-26 21:06:42.592342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:51.660 [2024-11-26 21:06:42.592596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.660 [2024-11-26 21:06:42.592629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:51.920 [2024-11-26 21:06:42.599816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:51.920 [2024-11-26 21:06:42.600087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.920 [2024-11-26 21:06:42.600119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:51.920 [2024-11-26 21:06:42.607555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:51.920 [2024-11-26 21:06:42.607796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.920 [2024-11-26 21:06:42.607825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:51.920 [2024-11-26 21:06:42.614533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:51.920 [2024-11-26 21:06:42.614890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.920 [2024-11-26 21:06:42.614919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:51.920 [2024-11-26 21:06:42.622375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:51.920 [2024-11-26 21:06:42.622642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.920 [2024-11-26 21:06:42.622672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:51.920 [2024-11-26 21:06:42.629906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:51.920 [2024-11-26 21:06:42.630221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.920 [2024-11-26 21:06:42.630251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:51.920 [2024-11-26 21:06:42.637765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:51.920 [2024-11-26 21:06:42.638118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.920 [2024-11-26 21:06:42.638152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:51.920 [2024-11-26 21:06:42.645188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:51.920 [2024-11-26 21:06:42.645436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.920 [2024-11-26 21:06:42.645467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:51.920 [2024-11-26 21:06:42.652056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:51.920 [2024-11-26 21:06:42.652271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.920 [2024-11-26 21:06:42.652307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:51.920 [2024-11-26 21:06:42.660138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:51.920 [2024-11-26 21:06:42.660452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.920 [2024-11-26 21:06:42.660484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:51.920 [2024-11-26 21:06:42.667920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:51.920 [2024-11-26 21:06:42.668271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.920 [2024-11-26 21:06:42.668301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:51.920 [2024-11-26 21:06:42.674981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:51.920 [2024-11-26 21:06:42.675259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.920 [2024-11-26 21:06:42.675291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:51.920 [2024-11-26 21:06:42.681609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:51.920 [2024-11-26 21:06:42.681869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.920 [2024-11-26 21:06:42.681899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:51.920 [2024-11-26 21:06:42.687662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:51.920 [2024-11-26 21:06:42.687954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.920 [2024-11-26 21:06:42.687984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:51.920 [2024-11-26 21:06:42.693819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:51.920 [2024-11-26 21:06:42.694071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.920 [2024-11-26 21:06:42.694103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:51.920 [2024-11-26 21:06:42.700170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:51.920 [2024-11-26 21:06:42.700462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.920 [2024-11-26 21:06:42.700491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:51.920 [2024-11-26 21:06:42.707757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:51.920 [2024-11-26 21:06:42.708037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.920 [2024-11-26 21:06:42.708065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:51.920 [2024-11-26 21:06:42.713282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:51.920 [2024-11-26 21:06:42.713579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.920 [2024-11-26 21:06:42.713608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:51.920 [2024-11-26 21:06:42.718594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:51.920 [2024-11-26 21:06:42.718832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.920 [2024-11-26 21:06:42.718862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:51.920 [2024-11-26 21:06:42.724083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:51.920 [2024-11-26 21:06:42.724350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.920 [2024-11-26 21:06:42.724379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:51.920 [2024-11-26 21:06:42.730818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:51.920 [2024-11-26 21:06:42.731127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.920 [2024-11-26 21:06:42.731156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:51.920 [2024-11-26 21:06:42.737377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:51.920 [2024-11-26 21:06:42.737646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.920 [2024-11-26 21:06:42.737675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:51.920 [2024-11-26 21:06:42.744642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:51.920 [2024-11-26 21:06:42.744956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.920 [2024-11-26 21:06:42.744986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:51.920 [2024-11-26 21:06:42.751983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:51.920 [2024-11-26 21:06:42.752296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.920 [2024-11-26 21:06:42.752326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:51.920 [2024-11-26 21:06:42.759108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:51.920 [2024-11-26 21:06:42.759439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.920 [2024-11-26 21:06:42.759468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:51.920 [2024-11-26 21:06:42.766469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:51.920 [2024-11-26 21:06:42.766807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.920 [2024-11-26 21:06:42.766836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:51.920 [2024-11-26 21:06:42.773798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:51.920 [2024-11-26 21:06:42.774156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.920 [2024-11-26 21:06:42.774185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:51.921 [2024-11-26 21:06:42.781405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:51.921 [2024-11-26 21:06:42.781765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.921 [2024-11-26 21:06:42.781796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:51.921 [2024-11-26 21:06:42.788413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:51.921 [2024-11-26 21:06:42.788704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.921 [2024-11-26 21:06:42.788733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:51.921 [2024-11-26 21:06:42.794005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:51.921 [2024-11-26 21:06:42.794239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.921 [2024-11-26 21:06:42.794268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:51.921 [2024-11-26 21:06:42.799229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:51.921 [2024-11-26 21:06:42.799523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.921 [2024-11-26 21:06:42.799553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:51.921 [2024-11-26 21:06:42.804800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:51.921 [2024-11-26 21:06:42.805042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.921 [2024-11-26 21:06:42.805071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:51.921 [2024-11-26 21:06:42.809839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:51.921 [2024-11-26 21:06:42.810125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.921 [2024-11-26 21:06:42.810154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:51.921 [2024-11-26 21:06:42.816263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:51.921 [2024-11-26 21:06:42.816609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.921 [2024-11-26 21:06:42.816637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:51.921 [2024-11-26 21:06:42.822750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:51.921 [2024-11-26 21:06:42.823040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.921 [2024-11-26 21:06:42.823078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:51.921 [2024-11-26 21:06:42.829737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:51.921 [2024-11-26 21:06:42.829918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.921 [2024-11-26 21:06:42.829947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:51.921 [2024-11-26 21:06:42.837308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:51.921 [2024-11-26 21:06:42.837653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.921 [2024-11-26 21:06:42.837683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:51.921 [2024-11-26 21:06:42.844720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:51.921 [2024-11-26 21:06:42.845070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.921 [2024-11-26 21:06:42.845100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:51.921 [2024-11-26 21:06:42.852455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:51.921 [2024-11-26 21:06:42.852771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.921 [2024-11-26 21:06:42.852801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:52.179 [2024-11-26 21:06:42.859582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:52.179 [2024-11-26 21:06:42.859873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:52.179 [2024-11-26 21:06:42.859903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:52.180 [2024-11-26 21:06:42.866594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:52.180 [2024-11-26 21:06:42.866870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:52.180 [2024-11-26 21:06:42.866900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:52.180 [2024-11-26 21:06:42.873064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:52.180 [2024-11-26 21:06:42.873377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:52.180 [2024-11-26 21:06:42.873406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:52.180 [2024-11-26 21:06:42.880297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:52.180 [2024-11-26 21:06:42.880579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:52.180 [2024-11-26 21:06:42.880608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:52.180 [2024-11-26 21:06:42.887600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:52.180 [2024-11-26 21:06:42.887960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:52.180 [2024-11-26 21:06:42.887989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:52.180 [2024-11-26 21:06:42.895047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:52.180 [2024-11-26 21:06:42.895410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:52.180 [2024-11-26 21:06:42.895438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:52.180 [2024-11-26 21:06:42.902494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188c090) with pdu=0x200016eff3c8
00:25:52.180 [2024-11-26 21:06:42.903838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:52.180 [2024-11-26 21:06:42.903868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:52.180 4742.00 IOPS, 592.75 MiB/s
00:25:52.180 Latency(us)
00:25:52.180 [2024-11-26T20:06:43.118Z] Device Information : runtime(s) IOPS MiB/s Fail/s
TO/s Average min max 00:25:52.180 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:52.180 nvme0n1 : 2.01 4738.08 592.26 0.00 0.00 3368.31 2087.44 9369.22 00:25:52.180 [2024-11-26T20:06:43.118Z] =================================================================================================================== 00:25:52.180 [2024-11-26T20:06:43.118Z] Total : 4738.08 592.26 0.00 0.00 3368.31 2087.44 9369.22 00:25:52.180 { 00:25:52.180 "results": [ 00:25:52.180 { 00:25:52.180 "job": "nvme0n1", 00:25:52.180 "core_mask": "0x2", 00:25:52.180 "workload": "randwrite", 00:25:52.180 "status": "finished", 00:25:52.180 "queue_depth": 16, 00:25:52.180 "io_size": 131072, 00:25:52.180 "runtime": 2.00503, 00:25:52.180 "iops": 4738.083719445594, 00:25:52.180 "mibps": 592.2604649306993, 00:25:52.180 "io_failed": 0, 00:25:52.180 "io_timeout": 0, 00:25:52.180 "avg_latency_us": 3368.3147003508775, 00:25:52.180 "min_latency_us": 2087.442962962963, 00:25:52.180 "max_latency_us": 9369.22074074074 00:25:52.180 } 00:25:52.180 ], 00:25:52.180 "core_count": 1 00:25:52.180 } 00:25:52.180 21:06:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:52.180 21:06:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:52.180 21:06:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:52.180 | .driver_specific 00:25:52.180 | .nvme_error 00:25:52.180 | .status_code 00:25:52.180 | .command_transient_transport_error' 00:25:52.180 21:06:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:52.438 21:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 307 > 0 )) 00:25:52.438 21:06:43 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 4079628 00:25:52.438 21:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 4079628 ']' 00:25:52.438 21:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 4079628 00:25:52.438 21:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:25:52.438 21:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:52.438 21:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4079628 00:25:52.438 21:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:52.438 21:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:52.438 21:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4079628' 00:25:52.438 killing process with pid 4079628 00:25:52.438 21:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 4079628 00:25:52.438 Received shutdown signal, test time was about 2.000000 seconds 00:25:52.438 00:25:52.438 Latency(us) 00:25:52.438 [2024-11-26T20:06:43.376Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:52.438 [2024-11-26T20:06:43.376Z] =================================================================================================================== 00:25:52.438 [2024-11-26T20:06:43.376Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:52.438 21:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 4079628 00:25:52.696 21:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 
-- # killprocess 4078258 00:25:52.696 21:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 4078258 ']' 00:25:52.696 21:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 4078258 00:25:52.696 21:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:25:52.696 21:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:52.696 21:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4078258 00:25:52.696 21:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:52.696 21:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:52.696 21:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4078258' 00:25:52.696 killing process with pid 4078258 00:25:52.696 21:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 4078258 00:25:52.696 21:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 4078258 00:25:52.955 00:25:52.955 real 0m15.677s 00:25:52.955 user 0m31.277s 00:25:52.955 sys 0m4.218s 00:25:52.955 21:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:52.955 21:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:52.955 ************************************ 00:25:52.955 END TEST nvmf_digest_error 00:25:52.955 ************************************ 00:25:52.955 21:06:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:25:52.955 21:06:43 
nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:25:52.955 21:06:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:52.955 21:06:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:25:52.955 21:06:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:52.955 21:06:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:25:52.955 21:06:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:52.955 21:06:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:52.955 rmmod nvme_tcp 00:25:52.955 rmmod nvme_fabrics 00:25:52.955 rmmod nvme_keyring 00:25:52.955 21:06:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:52.955 21:06:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:25:52.955 21:06:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:25:52.955 21:06:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 4078258 ']' 00:25:52.955 21:06:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 4078258 00:25:52.955 21:06:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 4078258 ']' 00:25:52.955 21:06:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 4078258 00:25:52.955 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (4078258) - No such process 00:25:52.955 21:06:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 4078258 is not found' 00:25:52.955 Process with pid 4078258 is not found 00:25:52.955 21:06:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:52.955 21:06:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:52.955 21:06:43 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:52.955 21:06:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:25:52.955 21:06:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:25:52.955 21:06:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:52.955 21:06:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:25:52.955 21:06:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:52.955 21:06:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:52.955 21:06:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:52.955 21:06:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:52.955 21:06:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:55.489 21:06:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:55.489 00:25:55.489 real 0m36.103s 00:25:55.489 user 1m4.417s 00:25:55.489 sys 0m9.973s 00:25:55.489 21:06:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:55.489 21:06:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:55.489 ************************************ 00:25:55.489 END TEST nvmf_digest 00:25:55.489 ************************************ 00:25:55.489 21:06:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:25:55.489 21:06:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:25:55.489 21:06:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:25:55.489 21:06:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh 
--transport=tcp 00:25:55.489 21:06:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:55.489 21:06:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:55.489 21:06:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.489 ************************************ 00:25:55.489 START TEST nvmf_bdevperf 00:25:55.489 ************************************ 00:25:55.489 21:06:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:25:55.489 * Looking for test storage... 00:25:55.489 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:55.489 21:06:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:55.489 21:06:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:25:55.489 21:06:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:55.489 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:55.489 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:55.489 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:55.489 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:55.489 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:25:55.489 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:25:55.489 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:25:55.489 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:25:55.489 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 
00:25:55.489 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:25:55.490 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:25:55.490 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:55.490 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:25:55.490 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:25:55.490 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:55.490 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:55.490 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:25:55.490 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:25:55.490 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:55.490 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:25:55.490 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:25:55.490 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:25:55.490 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:25:55.490 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:55.490 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:25:55.490 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:25:55.490 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:55.490 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:55.490 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 
00:25:55.490 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:55.490 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:55.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:55.490 --rc genhtml_branch_coverage=1 00:25:55.490 --rc genhtml_function_coverage=1 00:25:55.490 --rc genhtml_legend=1 00:25:55.490 --rc geninfo_all_blocks=1 00:25:55.490 --rc geninfo_unexecuted_blocks=1 00:25:55.490 00:25:55.490 ' 00:25:55.490 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:55.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:55.490 --rc genhtml_branch_coverage=1 00:25:55.490 --rc genhtml_function_coverage=1 00:25:55.490 --rc genhtml_legend=1 00:25:55.490 --rc geninfo_all_blocks=1 00:25:55.490 --rc geninfo_unexecuted_blocks=1 00:25:55.490 00:25:55.490 ' 00:25:55.490 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:55.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:55.490 --rc genhtml_branch_coverage=1 00:25:55.490 --rc genhtml_function_coverage=1 00:25:55.490 --rc genhtml_legend=1 00:25:55.490 --rc geninfo_all_blocks=1 00:25:55.490 --rc geninfo_unexecuted_blocks=1 00:25:55.490 00:25:55.490 ' 00:25:55.490 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:55.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:55.490 --rc genhtml_branch_coverage=1 00:25:55.490 --rc genhtml_function_coverage=1 00:25:55.490 --rc genhtml_legend=1 00:25:55.490 --rc geninfo_all_blocks=1 00:25:55.490 --rc geninfo_unexecuted_blocks=1 00:25:55.490 00:25:55.490 ' 00:25:55.490 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:25:55.490 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:25:55.490 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:55.490 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:55.490 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:55.490 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:55.490 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:55.490 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:55.490 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:55.490 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:55.490 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:55.490 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:55.490 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:55.490 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:55.490 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:55.490 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:55.490 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:55.490 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:55.490 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:55.490 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:25:55.490 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:55.490 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:55.490 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:55.490 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:55.490 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:55.490 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:55.490 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:25:55.490 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:55.490 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:25:55.490 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:55.490 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:55.490 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:55.490 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:55.490 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:25:55.490 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:55.490 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:55.490 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:55.490 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:55.490 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:55.490 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:55.490 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:55.490 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:25:55.490 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:55.490 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:55.490 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:55.490 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:55.490 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:55.490 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:55.490 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:55.490 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:55.490 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:55.490 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:55.490 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@309 -- # xtrace_disable 00:25:55.490 21:06:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:57.393 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:57.393 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:25:57.393 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:57.393 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:57.393 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:57.393 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:57.393 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:57.393 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:25:57.393 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:57.393 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:25:57.393 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:25:57.393 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:25:57.393 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:25:57.393 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:25:57.393 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:25:57.393 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:57.393 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:57.393 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:25:57.393 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:57.393 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:57.393 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:57.393 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:57.393 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:57.393 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:57.393 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:57.393 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:57.393 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:57.393 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:57.393 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:57.393 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:57.393 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:57.393 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:57.393 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:57.393 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:57.393 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:57.393 Found 
0000:0a:00.0 (0x8086 - 0x159b) 00:25:57.393 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:57.393 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:57.393 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:57.393 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:57.393 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:57.393 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:57.393 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:57.393 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:57.393 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:57.393 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:57.393 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:57.393 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:57.393 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:57.393 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:57.393 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:57.393 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:57.393 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:57.393 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:57.393 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp 
== tcp ]] 00:25:57.393 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:57.393 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:57.393 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:57.393 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:57.393 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:57.393 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:57.393 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:57.393 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:57.393 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:57.393 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:57.393 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:57.393 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:57.393 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:57.393 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:57.393 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:57.393 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:57.393 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:57.393 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:57.393 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@442 -- # is_hw=yes 00:25:57.393 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:57.394 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:57.394 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:57.394 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:57.394 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:57.394 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:57.394 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:57.394 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:57.394 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:57.394 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:57.394 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:57.394 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:57.394 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:57.394 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:57.394 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:57.394 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:57.394 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:57.394 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set 
cvl_0_0 netns cvl_0_0_ns_spdk 00:25:57.394 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:57.394 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:57.394 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:57.394 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:57.394 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:57.394 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:57.394 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:57.394 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:57.394 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:57.394 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:25:57.394 00:25:57.394 --- 10.0.0.2 ping statistics --- 00:25:57.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:57.394 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:25:57.394 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:57.394 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:57.394 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:25:57.394 00:25:57.394 --- 10.0.0.1 ping statistics --- 00:25:57.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:57.394 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:25:57.394 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:57.394 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:25:57.394 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:57.394 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:57.394 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:57.394 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:57.394 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:57.394 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:57.394 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:57.394 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:25:57.394 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:25:57.394 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:57.394 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:57.394 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:57.394 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=4082008 00:25:57.394 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:57.394 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 4082008 00:25:57.394 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 4082008 ']' 00:25:57.394 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:57.394 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:57.394 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:57.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:57.394 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:57.394 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:57.652 [2024-11-26 21:06:48.358507] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:25:57.652 [2024-11-26 21:06:48.358601] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:57.652 [2024-11-26 21:06:48.438334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:57.652 [2024-11-26 21:06:48.503077] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:57.652 [2024-11-26 21:06:48.503146] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:57.652 [2024-11-26 21:06:48.503172] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:57.652 [2024-11-26 21:06:48.503186] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:57.652 [2024-11-26 21:06:48.503198] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:57.652 [2024-11-26 21:06:48.504759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:57.652 [2024-11-26 21:06:48.504784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:57.652 [2024-11-26 21:06:48.504788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:57.911 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:57.911 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:25:57.911 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:57.911 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:57.911 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:57.911 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:57.911 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:57.911 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.911 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:57.911 [2024-11-26 21:06:48.665787] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:57.911 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.911 21:06:48 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:57.911 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.911 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:57.911 Malloc0 00:25:57.911 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.911 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:57.911 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.911 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:57.911 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.911 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:57.911 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.911 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:57.911 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.911 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:57.911 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.911 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:57.911 [2024-11-26 21:06:48.729173] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:57.911 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:25:57.911 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:25:57.911 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:25:57.911 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:25:57.911 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:25:57.911 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:57.911 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:57.911 { 00:25:57.911 "params": { 00:25:57.911 "name": "Nvme$subsystem", 00:25:57.911 "trtype": "$TEST_TRANSPORT", 00:25:57.911 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:57.911 "adrfam": "ipv4", 00:25:57.911 "trsvcid": "$NVMF_PORT", 00:25:57.911 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:57.911 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:57.911 "hdgst": ${hdgst:-false}, 00:25:57.911 "ddgst": ${ddgst:-false} 00:25:57.911 }, 00:25:57.911 "method": "bdev_nvme_attach_controller" 00:25:57.911 } 00:25:57.911 EOF 00:25:57.911 )") 00:25:57.911 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:25:57.911 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
00:25:57.911 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:25:57.911 21:06:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:57.911 "params": { 00:25:57.911 "name": "Nvme1", 00:25:57.911 "trtype": "tcp", 00:25:57.911 "traddr": "10.0.0.2", 00:25:57.911 "adrfam": "ipv4", 00:25:57.911 "trsvcid": "4420", 00:25:57.911 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:57.911 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:57.911 "hdgst": false, 00:25:57.911 "ddgst": false 00:25:57.911 }, 00:25:57.911 "method": "bdev_nvme_attach_controller" 00:25:57.911 }' 00:25:57.911 [2024-11-26 21:06:48.781070] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:25:57.911 [2024-11-26 21:06:48.781149] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4082133 ] 00:25:57.911 [2024-11-26 21:06:48.848322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:58.169 [2024-11-26 21:06:48.909056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:58.169 Running I/O for 1 seconds... 
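The `gen_nvmf_target_json` trace above shows how the harness feeds bdevperf its attach config on `/dev/fd/62`: one `bdev_nvme_attach_controller` entry is built per subsystem index, collected into an array, and joined with `IFS=,`. A minimal stand-alone sketch of that pattern (values hard-coded to the ones used in this run; the real helper in `nvmf/common.sh` derives them from `$TEST_TRANSPORT`, `$NVMF_FIRST_TARGET_IP`, and `$NVMF_PORT`, and pipes the result through `jq`):

```shell
#!/usr/bin/env bash
# Sketch of the gen_nvmf_target_json pattern seen in the trace: emit one
# attach_controller config entry per subsystem argument, comma-joined.
# NOTE: traddr/trsvcid are the literal values from this log, not env-derived.
gen_nvmf_json() {
  local config=() subsystem
  for subsystem in "${@:-1}"; do
    config+=("{\"params\":{\"name\":\"Nvme$subsystem\",\"trtype\":\"tcp\",\
\"traddr\":\"10.0.0.2\",\"adrfam\":\"ipv4\",\"trsvcid\":\"4420\",\
\"subnqn\":\"nqn.2016-06.io.spdk:cnode$subsystem\",\
\"hostnqn\":\"nqn.2016-06.io.spdk:host$subsystem\",\
\"hdgst\":false,\"ddgst\":false},\"method\":\"bdev_nvme_attach_controller\"}")
  done
  # Same join the helper performs before handing the JSON to bdevperf.
  local IFS=,
  printf '%s\n' "${config[*]}"
}

gen_nvmf_json 1
```

Passing the config over a file descriptor (`--json /dev/fd/62`) lets the harness avoid temp files while still giving bdevperf a complete JSON document.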
00:25:59.551 8048.00 IOPS, 31.44 MiB/s 00:25:59.551 Latency(us) 00:25:59.551 [2024-11-26T20:06:50.489Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:59.552 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:59.552 Verification LBA range: start 0x0 length 0x4000 00:25:59.552 Nvme1n1 : 1.00 8137.47 31.79 0.00 0.00 15667.02 1832.58 15825.73 00:25:59.552 [2024-11-26T20:06:50.490Z] =================================================================================================================== 00:25:59.552 [2024-11-26T20:06:50.490Z] Total : 8137.47 31.79 0.00 0.00 15667.02 1832.58 15825.73 00:25:59.552 21:06:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=4082281 00:25:59.552 21:06:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:25:59.552 21:06:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:25:59.552 21:06:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:25:59.552 21:06:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:25:59.552 21:06:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:25:59.552 21:06:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:59.552 21:06:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:59.552 { 00:25:59.552 "params": { 00:25:59.552 "name": "Nvme$subsystem", 00:25:59.552 "trtype": "$TEST_TRANSPORT", 00:25:59.552 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:59.552 "adrfam": "ipv4", 00:25:59.552 "trsvcid": "$NVMF_PORT", 00:25:59.552 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:59.552 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:59.552 "hdgst": ${hdgst:-false}, 00:25:59.552 "ddgst": 
${ddgst:-false} 00:25:59.552 }, 00:25:59.552 "method": "bdev_nvme_attach_controller" 00:25:59.552 } 00:25:59.552 EOF 00:25:59.552 )") 00:25:59.552 21:06:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:25:59.552 21:06:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:25:59.552 21:06:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:25:59.552 21:06:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:59.552 "params": { 00:25:59.552 "name": "Nvme1", 00:25:59.552 "trtype": "tcp", 00:25:59.552 "traddr": "10.0.0.2", 00:25:59.552 "adrfam": "ipv4", 00:25:59.552 "trsvcid": "4420", 00:25:59.552 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:59.552 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:59.552 "hdgst": false, 00:25:59.552 "ddgst": false 00:25:59.552 }, 00:25:59.552 "method": "bdev_nvme_attach_controller" 00:25:59.552 }' 00:25:59.552 [2024-11-26 21:06:50.375044] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:25:59.552 [2024-11-26 21:06:50.375130] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4082281 ] 00:25:59.552 [2024-11-26 21:06:50.442500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:59.810 [2024-11-26 21:06:50.501707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:00.067 Running I/O for 15 seconds... 
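This second bdevperf instance runs with `-t 15 -f` (continue on failure) because the script is about to `kill -9` the nvmf target (pid 4082008) mid-run to exercise host-side error handling; the flood of `ABORTED - SQ DELETION` completions that follows is the expected result. A toy sketch of that kill-and-observe step, using a background `sleep` as a stand-in for `nvmf_tgt` (no SPDK involved):

```shell
#!/usr/bin/env bash
# Stand-in "target": a background sleep instead of nvmf_tgt.
sleep 60 &
tgtpid=$!

# What host/bdevperf.sh@33 does: hard-kill the target while I/O is in flight.
kill -9 "$tgtpid"

# The initiator side then observes the failure asynchronously (here, via the
# reaped exit status; in the log, via aborted qpair completions).
wait "$tgtpid" 2>/dev/null
echo "target exit status: $?"   # 137 = 128 + SIGKILL(9)
```

The subsequent `sleep 3` in the harness gives the host time to notice the dead connection before the test moves on.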
00:26:01.936 8369.00 IOPS, 32.69 MiB/s [2024-11-26T20:06:53.442Z] 8421.00 IOPS, 32.89 MiB/s [2024-11-26T20:06:53.442Z] 21:06:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 4082008 00:26:02.504 21:06:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:26:02.504 [2024-11-26 21:06:53.339757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:38584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.504 [2024-11-26 21:06:53.339808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.504 [2024-11-26 21:06:53.339842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:38592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.504 [2024-11-26 21:06:53.339859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.504 [2024-11-26 21:06:53.339877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:38600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.504 [2024-11-26 21:06:53.339893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.504 [2024-11-26 21:06:53.339910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:38608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.505 [2024-11-26 21:06:53.339926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.505 [2024-11-26 21:06:53.339942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:38616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.505 [2024-11-26 21:06:53.339956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.505 [2024-11-26 21:06:53.339989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:38624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.505 [2024-11-26 21:06:53.340004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.505 [2024-11-26 21:06:53.340018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:38632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.505 [2024-11-26 21:06:53.340050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.505 [2024-11-26 21:06:53.340069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:38640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.505 [2024-11-26 21:06:53.340086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.505 [2024-11-26 21:06:53.340106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:38648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.505 [2024-11-26 21:06:53.340124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.505 [2024-11-26 21:06:53.340143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:38656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.505 [2024-11-26 21:06:53.340160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.505 [2024-11-26 21:06:53.340180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:38664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:02.505 [2024-11-26 21:06:53.340197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.505 [2024-11-26 21:06:53.340225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:38672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.505 [2024-11-26 21:06:53.340243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.505 [2024-11-26 21:06:53.340265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:38680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.505 [2024-11-26 21:06:53.340291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.505 [2024-11-26 21:06:53.340327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:38688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.505 [2024-11-26 21:06:53.340349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.505 [2024-11-26 21:06:53.340373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:38696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.505 [2024-11-26 21:06:53.340395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.505 [2024-11-26 21:06:53.340419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:38704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.505 [2024-11-26 21:06:53.340443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.505 [2024-11-26 21:06:53.340469] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:38712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.505 [2024-11-26 21:06:53.340494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.505 [2024-11-26 21:06:53.340520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:38720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.505 [2024-11-26 21:06:53.340543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.505 [2024-11-26 21:06:53.340570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:38728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.505 [2024-11-26 21:06:53.340594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.505 [2024-11-26 21:06:53.340619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:38736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.505 [2024-11-26 21:06:53.340653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.505 [2024-11-26 21:06:53.340679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:38744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.505 [2024-11-26 21:06:53.340725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.505 [2024-11-26 21:06:53.340765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:38752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.505 [2024-11-26 21:06:53.340785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.505 [2024-11-26 21:06:53.340809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:38760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.505 [2024-11-26 21:06:53.340828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.505 [2024-11-26 21:06:53.340851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:38768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.505 [2024-11-26 21:06:53.340872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.505 [2024-11-26 21:06:53.340900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:38776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.505 [2024-11-26 21:06:53.340922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.505 [2024-11-26 21:06:53.340945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:38784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.505 [2024-11-26 21:06:53.340982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.505 [2024-11-26 21:06:53.341019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:38792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.505 [2024-11-26 21:06:53.341042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.505 [2024-11-26 21:06:53.341070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:38800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:02.505 [2024-11-26 21:06:53.341095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.505 [2024-11-26 21:06:53.341123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:38944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.505 [2024-11-26 21:06:53.341142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.505 [2024-11-26 21:06:53.341159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:38952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.505 [2024-11-26 21:06:53.341175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.505 [2024-11-26 21:06:53.341193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:38960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.505 [2024-11-26 21:06:53.341209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.505 [2024-11-26 21:06:53.341226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.505 [2024-11-26 21:06:53.341243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.505 [2024-11-26 21:06:53.341260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:38976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.505 [2024-11-26 21:06:53.341276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.505 [2024-11-26 21:06:53.341294] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:38984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.505 [2024-11-26 21:06:53.341310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.505 [2024-11-26 21:06:53.341328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:38992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.505 [2024-11-26 21:06:53.341344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.505 [2024-11-26 21:06:53.341360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:39000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.505 [2024-11-26 21:06:53.341376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.505 [2024-11-26 21:06:53.341392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:39008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.505 [2024-11-26 21:06:53.341413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.505 [2024-11-26 21:06:53.341430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:39016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.505 [2024-11-26 21:06:53.341446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.505 [2024-11-26 21:06:53.341462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:39024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.505 [2024-11-26 21:06:53.341478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.505 [2024-11-26 21:06:53.341494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.505 [2024-11-26 21:06:53.341510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.505 [2024-11-26 21:06:53.341526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:39040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.505 [2024-11-26 21:06:53.341541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.505 [2024-11-26 21:06:53.341558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:39048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.505 [2024-11-26 21:06:53.341573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.505 [2024-11-26 21:06:53.341590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:39056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.506 [2024-11-26 21:06:53.341605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.506 [2024-11-26 21:06:53.341622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:39064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.506 [2024-11-26 21:06:53.341637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.506 [2024-11-26 21:06:53.341654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:39072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.506 
[2024-11-26 21:06:53.341669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.506 [2024-11-26 21:06:53.341694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:39080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.506 [2024-11-26 21:06:53.341714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.506 [2024-11-26 21:06:53.341748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:39088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.506 [2024-11-26 21:06:53.341763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.506 [2024-11-26 21:06:53.341779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:39096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.506 [2024-11-26 21:06:53.341793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.506 [2024-11-26 21:06:53.341808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:39104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.506 [2024-11-26 21:06:53.341822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.506 [2024-11-26 21:06:53.341842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:39112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.506 [2024-11-26 21:06:53.341857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.506 [2024-11-26 21:06:53.341873] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:39120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.506 [2024-11-26 21:06:53.341887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.506 [2024-11-26 21:06:53.341902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.506 [2024-11-26 21:06:53.341916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.506 [2024-11-26 21:06:53.341931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:39136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.506 [2024-11-26 21:06:53.341944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.506 [2024-11-26 21:06:53.341978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:39144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.506 [2024-11-26 21:06:53.342000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.506 [2024-11-26 21:06:53.342016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:39152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.506 [2024-11-26 21:06:53.342032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.506 [2024-11-26 21:06:53.342049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:39160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.506 [2024-11-26 21:06:53.342064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:26:02.506 [2024-11-26 21:06:53.342081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:39168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.506 [2024-11-26 21:06:53.342096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.506 [2024-11-26 21:06:53.342113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:39176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.506 [2024-11-26 21:06:53.342128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.506 [2024-11-26 21:06:53.342145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.506 [2024-11-26 21:06:53.342160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.506 [2024-11-26 21:06:53.342177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:39192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.506 [2024-11-26 21:06:53.342193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.506 [2024-11-26 21:06:53.342209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:39200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.506 [2024-11-26 21:06:53.342226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.506 [2024-11-26 21:06:53.342243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:39208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.506 [2024-11-26 21:06:53.342263] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.506 [2024-11-26 21:06:53.342280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.506 [2024-11-26 21:06:53.342297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.506 [2024-11-26 21:06:53.342313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:39224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.506 [2024-11-26 21:06:53.342329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.506 [2024-11-26 21:06:53.342345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:39232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.506 [2024-11-26 21:06:53.342361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.506 [2024-11-26 21:06:53.342378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:39240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.506 [2024-11-26 21:06:53.342394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.506 [2024-11-26 21:06:53.342410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.506 [2024-11-26 21:06:53.342426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.506 [2024-11-26 21:06:53.342442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39256 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.506 [2024-11-26 21:06:53.342458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.506 [2024-11-26 21:06:53.342475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:39264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.506 [2024-11-26 21:06:53.342490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.506 [2024-11-26 21:06:53.342507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:39272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.506 [2024-11-26 21:06:53.342522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.506 [2024-11-26 21:06:53.342539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:39280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.506 [2024-11-26 21:06:53.342555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.506 [2024-11-26 21:06:53.342571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:39288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.506 [2024-11-26 21:06:53.342587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.506 [2024-11-26 21:06:53.342603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:39296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.506 [2024-11-26 21:06:53.342618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.506 [2024-11-26 
21:06:53.342635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:39304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.506 [2024-11-26 21:06:53.342651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.506 [2024-11-26 21:06:53.342667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:39312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.506 [2024-11-26 21:06:53.342702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.506 [2024-11-26 21:06:53.342729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:39320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.506 [2024-11-26 21:06:53.342760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.506 [2024-11-26 21:06:53.342776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.506 [2024-11-26 21:06:53.342790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.506 [2024-11-26 21:06:53.342804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:39336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.506 [2024-11-26 21:06:53.342818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.506 [2024-11-26 21:06:53.342833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:39344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.506 [2024-11-26 21:06:53.342848] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.506 [2024-11-26 21:06:53.342862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:39352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.506 [2024-11-26 21:06:53.342876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.506 [2024-11-26 21:06:53.342890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:39360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.506 [2024-11-26 21:06:53.342904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.506 [2024-11-26 21:06:53.342919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:39368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.506 [2024-11-26 21:06:53.342932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.507 [2024-11-26 21:06:53.342947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:39376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.507 [2024-11-26 21:06:53.342960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.507 [2024-11-26 21:06:53.342994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:38808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.507 [2024-11-26 21:06:53.343010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.507 [2024-11-26 21:06:53.343027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:39384 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:26:02.507 [2024-11-26 21:06:53.343043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.507 [2024-11-26 21:06:53.343066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:39392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.507 [2024-11-26 21:06:53.343082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.507 [2024-11-26 21:06:53.343099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:39400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.507 [2024-11-26 21:06:53.343115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.507 [2024-11-26 21:06:53.343137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:39408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.507 [2024-11-26 21:06:53.343163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.507 [2024-11-26 21:06:53.343179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:39416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.507 [2024-11-26 21:06:53.343195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.507 [2024-11-26 21:06:53.343212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.507 [2024-11-26 21:06:53.343228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.507 [2024-11-26 21:06:53.343244] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:39432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.507 [2024-11-26 21:06:53.343260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.507 [2024-11-26 21:06:53.343277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:39440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.507 [2024-11-26 21:06:53.343293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.507 [2024-11-26 21:06:53.343309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:39448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.507 [2024-11-26 21:06:53.343325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.507 [2024-11-26 21:06:53.343341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:39456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.507 [2024-11-26 21:06:53.343357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.507 [2024-11-26 21:06:53.343375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.507 [2024-11-26 21:06:53.343391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.507 [2024-11-26 21:06:53.343408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:39472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.507 [2024-11-26 21:06:53.343424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.507 [2024-11-26 21:06:53.343440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:39480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.507 [2024-11-26 21:06:53.343455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.507 [2024-11-26 21:06:53.343472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.507 [2024-11-26 21:06:53.343488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.507 [2024-11-26 21:06:53.343504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.507 [2024-11-26 21:06:53.343520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.507 [2024-11-26 21:06:53.343537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:39504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.507 [2024-11-26 21:06:53.343557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.507 [2024-11-26 21:06:53.343575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:39512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.507 [2024-11-26 21:06:53.343591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.507 [2024-11-26 21:06:53.343608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:39520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.507 
[2024-11-26 21:06:53.343623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.507 [2024-11-26 21:06:53.343640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:39528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.507 [2024-11-26 21:06:53.343655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.507 [2024-11-26 21:06:53.343678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.507 [2024-11-26 21:06:53.343705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.507 [2024-11-26 21:06:53.343740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:39544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.507 [2024-11-26 21:06:53.343755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.507 [2024-11-26 21:06:53.343770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:39552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.507 [2024-11-26 21:06:53.343784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.507 [2024-11-26 21:06:53.343798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:39560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.507 [2024-11-26 21:06:53.343812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.507 [2024-11-26 21:06:53.343827] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:39568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.507 [2024-11-26 21:06:53.343841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.507 [2024-11-26 21:06:53.343855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:39576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.507 [2024-11-26 21:06:53.343869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.507 [2024-11-26 21:06:53.343884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:39584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.507 [2024-11-26 21:06:53.343898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.507 [2024-11-26 21:06:53.343913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:39592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.507 [2024-11-26 21:06:53.343927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.507 [2024-11-26 21:06:53.343942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:39600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.507 [2024-11-26 21:06:53.343955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.507 [2024-11-26 21:06:53.343988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:38816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.507 [2024-11-26 21:06:53.344012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:26:02.507 [2024-11-26 21:06:53.344029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:38824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.507 [2024-11-26 21:06:53.344045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.507 [2024-11-26 21:06:53.344062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:38832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.507 [2024-11-26 21:06:53.344080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.507 [2024-11-26 21:06:53.344097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:38840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.507 [2024-11-26 21:06:53.344113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.507 [2024-11-26 21:06:53.344130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:38848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.507 [2024-11-26 21:06:53.344146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.507 [2024-11-26 21:06:53.344163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:38856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.507 [2024-11-26 21:06:53.344179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.507 [2024-11-26 21:06:53.344196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:38864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.507 [2024-11-26 21:06:53.344211] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.507 [2024-11-26 21:06:53.344229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:38872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.507 [2024-11-26 21:06:53.344244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.507 [2024-11-26 21:06:53.344262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:38880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.507 [2024-11-26 21:06:53.344278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.507 [2024-11-26 21:06:53.344294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:38888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.507 [2024-11-26 21:06:53.344309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.508 [2024-11-26 21:06:53.344326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:38896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.508 [2024-11-26 21:06:53.344342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.508 [2024-11-26 21:06:53.344359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:38904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.508 [2024-11-26 21:06:53.344375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.508 [2024-11-26 21:06:53.344392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:115 nsid:1 lba:38912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.508 [2024-11-26 21:06:53.344407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.508 [2024-11-26 21:06:53.344429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:38920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.508 [2024-11-26 21:06:53.344446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.508 [2024-11-26 21:06:53.344463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:38928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.508 [2024-11-26 21:06:53.344479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.508 [2024-11-26 21:06:53.344495] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14cc880 is same with the state(6) to be set 00:26:02.508 [2024-11-26 21:06:53.344515] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:02.508 [2024-11-26 21:06:53.344529] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:02.508 [2024-11-26 21:06:53.344542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:38936 len:8 PRP1 0x0 PRP2 0x0 00:26:02.508 [2024-11-26 21:06:53.344556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.508 [2024-11-26 21:06:53.344708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:02.508 [2024-11-26 21:06:53.344734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:02.508 [2024-11-26 21:06:53.344766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:26:02.508 [2024-11-26 21:06:53.344781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:02.508 [2024-11-26 21:06:53.344794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:26:02.508 [2024-11-26 21:06:53.344808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:02.508 [2024-11-26 21:06:53.344821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:26:02.508 [2024-11-26 21:06:53.344835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:02.508 [2024-11-26 21:06:53.344848] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:02.508 [2024-11-26 21:06:53.348617] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:02.508 [2024-11-26 21:06:53.348671] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:02.508 [2024-11-26 21:06:53.349373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.508 [2024-11-26 21:06:53.349419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:02.508 [2024-11-26 21:06:53.349438] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:02.508 [2024-11-26 21:06:53.349678] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:02.508 [2024-11-26 21:06:53.349930] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:02.508 [2024-11-26 21:06:53.349953] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:02.508 [2024-11-26 21:06:53.349996] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:02.508 [2024-11-26 21:06:53.350016] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:02.508 [2024-11-26 21:06:53.362868] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:02.508 [2024-11-26 21:06:53.363287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.508 [2024-11-26 21:06:53.363321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:02.508 [2024-11-26 21:06:53.363342] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:02.508 [2024-11-26 21:06:53.363580] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:02.508 [2024-11-26 21:06:53.363831] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:02.508 [2024-11-26 21:06:53.363852] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:02.508 [2024-11-26 21:06:53.363866] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:02.508 [2024-11-26 21:06:53.363878] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:02.508 [2024-11-26 21:06:53.376724] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:02.508 [2024-11-26 21:06:53.377157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.508 [2024-11-26 21:06:53.377190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:02.508 [2024-11-26 21:06:53.377209] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:02.508 [2024-11-26 21:06:53.377447] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:02.508 [2024-11-26 21:06:53.377701] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:02.508 [2024-11-26 21:06:53.377727] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:02.508 [2024-11-26 21:06:53.377743] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:02.508 [2024-11-26 21:06:53.377757] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:02.508 [2024-11-26 21:06:53.390586] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:02.508 [2024-11-26 21:06:53.391015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.508 [2024-11-26 21:06:53.391047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:02.508 [2024-11-26 21:06:53.391065] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:02.508 [2024-11-26 21:06:53.391305] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:02.508 [2024-11-26 21:06:53.391548] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:02.508 [2024-11-26 21:06:53.391573] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:02.508 [2024-11-26 21:06:53.391589] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:02.508 [2024-11-26 21:06:53.391603] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:02.508 [2024-11-26 21:06:53.404441] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:02.508 [2024-11-26 21:06:53.404879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.508 [2024-11-26 21:06:53.404912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:02.508 [2024-11-26 21:06:53.404939] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:02.508 [2024-11-26 21:06:53.405177] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:02.508 [2024-11-26 21:06:53.405421] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:02.508 [2024-11-26 21:06:53.405445] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:02.508 [2024-11-26 21:06:53.405461] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:02.508 [2024-11-26 21:06:53.405476] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:02.508 [2024-11-26 21:06:53.418305] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:02.508 [2024-11-26 21:06:53.418698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.508 [2024-11-26 21:06:53.418740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:02.508 [2024-11-26 21:06:53.418758] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:02.508 [2024-11-26 21:06:53.419000] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:02.508 [2024-11-26 21:06:53.419243] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:02.508 [2024-11-26 21:06:53.419267] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:02.508 [2024-11-26 21:06:53.419283] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:02.508 [2024-11-26 21:06:53.419297] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:02.508 [2024-11-26 21:06:53.432146] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:02.508 [2024-11-26 21:06:53.432573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.508 [2024-11-26 21:06:53.432605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:02.508 [2024-11-26 21:06:53.432623] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:02.508 [2024-11-26 21:06:53.432872] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:02.508 [2024-11-26 21:06:53.433116] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:02.508 [2024-11-26 21:06:53.433141] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:02.508 [2024-11-26 21:06:53.433157] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:02.509 [2024-11-26 21:06:53.433171] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:02.768 [2024-11-26 21:06:53.446016] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:02.768 [2024-11-26 21:06:53.446439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.768 [2024-11-26 21:06:53.446472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:02.768 [2024-11-26 21:06:53.446490] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:02.768 [2024-11-26 21:06:53.446745] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:02.768 [2024-11-26 21:06:53.446988] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:02.768 [2024-11-26 21:06:53.447015] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:02.768 [2024-11-26 21:06:53.447031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:02.768 [2024-11-26 21:06:53.447047] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:02.768 [2024-11-26 21:06:53.459903] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:02.768 [2024-11-26 21:06:53.460301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.768 [2024-11-26 21:06:53.460334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:02.768 [2024-11-26 21:06:53.460352] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:02.768 [2024-11-26 21:06:53.460591] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:02.768 [2024-11-26 21:06:53.460860] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:02.768 [2024-11-26 21:06:53.460887] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:02.768 [2024-11-26 21:06:53.460903] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:02.768 [2024-11-26 21:06:53.460918] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:02.768 [2024-11-26 21:06:53.473760] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:02.768 [2024-11-26 21:06:53.474172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.768 [2024-11-26 21:06:53.474205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:02.768 [2024-11-26 21:06:53.474224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:02.768 [2024-11-26 21:06:53.474462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:02.768 [2024-11-26 21:06:53.474719] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:02.768 [2024-11-26 21:06:53.474746] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:02.768 [2024-11-26 21:06:53.474761] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:02.768 [2024-11-26 21:06:53.474778] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:02.768 [2024-11-26 21:06:53.487605] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:02.768 [2024-11-26 21:06:53.488051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.768 [2024-11-26 21:06:53.488084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:02.768 [2024-11-26 21:06:53.488102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:02.768 [2024-11-26 21:06:53.488340] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:02.768 [2024-11-26 21:06:53.488583] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:02.768 [2024-11-26 21:06:53.488614] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:02.768 [2024-11-26 21:06:53.488632] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:02.768 [2024-11-26 21:06:53.488647] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:02.768 [2024-11-26 21:06:53.501497] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:02.768 [2024-11-26 21:06:53.501919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.768 [2024-11-26 21:06:53.501952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:02.768 [2024-11-26 21:06:53.501971] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:02.768 [2024-11-26 21:06:53.502208] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:02.768 [2024-11-26 21:06:53.502452] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:02.768 [2024-11-26 21:06:53.502477] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:02.768 [2024-11-26 21:06:53.502493] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:02.768 [2024-11-26 21:06:53.502508] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:02.768 [2024-11-26 21:06:53.515356] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:02.768 [2024-11-26 21:06:53.515759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.769 [2024-11-26 21:06:53.515793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:02.769 [2024-11-26 21:06:53.515812] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:02.769 [2024-11-26 21:06:53.516051] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:02.769 [2024-11-26 21:06:53.516294] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:02.769 [2024-11-26 21:06:53.516320] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:02.769 [2024-11-26 21:06:53.516336] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:02.769 [2024-11-26 21:06:53.516352] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:02.769 [2024-11-26 21:06:53.529216] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:02.769 [2024-11-26 21:06:53.529629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.769 [2024-11-26 21:06:53.529662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:02.769 [2024-11-26 21:06:53.529680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:02.769 [2024-11-26 21:06:53.529930] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:02.769 [2024-11-26 21:06:53.530174] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:02.769 [2024-11-26 21:06:53.530198] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:02.769 [2024-11-26 21:06:53.530214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:02.769 [2024-11-26 21:06:53.530235] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:02.769 [2024-11-26 21:06:53.543090] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:02.769 [2024-11-26 21:06:53.543503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.769 [2024-11-26 21:06:53.543535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:02.769 [2024-11-26 21:06:53.543554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:02.769 [2024-11-26 21:06:53.543804] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:02.769 [2024-11-26 21:06:53.544049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:02.769 [2024-11-26 21:06:53.544074] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:02.769 [2024-11-26 21:06:53.544090] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:02.769 [2024-11-26 21:06:53.544105] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:02.769 [2024-11-26 21:06:53.556936] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:02.769 [2024-11-26 21:06:53.557349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.769 [2024-11-26 21:06:53.557381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:02.769 [2024-11-26 21:06:53.557399] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:02.769 [2024-11-26 21:06:53.557638] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:02.769 [2024-11-26 21:06:53.557893] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:02.769 [2024-11-26 21:06:53.557919] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:02.769 [2024-11-26 21:06:53.557935] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:02.769 [2024-11-26 21:06:53.557950] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:02.769 [2024-11-26 21:06:53.570779] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:02.769 [2024-11-26 21:06:53.571161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.769 [2024-11-26 21:06:53.571193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:02.769 [2024-11-26 21:06:53.571212] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:02.769 [2024-11-26 21:06:53.571450] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:02.769 [2024-11-26 21:06:53.571704] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:02.769 [2024-11-26 21:06:53.571730] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:02.769 [2024-11-26 21:06:53.571745] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:02.769 [2024-11-26 21:06:53.571760] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:02.769 [2024-11-26 21:06:53.584781] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:02.769 [2024-11-26 21:06:53.585193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.769 [2024-11-26 21:06:53.585227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:02.769 [2024-11-26 21:06:53.585246] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:02.769 [2024-11-26 21:06:53.585491] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:02.769 [2024-11-26 21:06:53.585760] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:02.769 [2024-11-26 21:06:53.585788] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:02.769 [2024-11-26 21:06:53.585805] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:02.769 [2024-11-26 21:06:53.585821] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:02.769 [2024-11-26 21:06:53.598798] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:02.769 [2024-11-26 21:06:53.599198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.769 [2024-11-26 21:06:53.599233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:02.769 [2024-11-26 21:06:53.599253] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:02.769 [2024-11-26 21:06:53.599498] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:02.769 [2024-11-26 21:06:53.599767] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:02.769 [2024-11-26 21:06:53.599805] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:02.769 [2024-11-26 21:06:53.599821] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:02.769 [2024-11-26 21:06:53.599837] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:02.769 [2024-11-26 21:06:53.612859] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:02.769 [2024-11-26 21:06:53.613289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.769 [2024-11-26 21:06:53.613322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:02.769 [2024-11-26 21:06:53.613341] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:02.769 [2024-11-26 21:06:53.613580] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:02.769 [2024-11-26 21:06:53.613840] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:02.769 [2024-11-26 21:06:53.613867] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:02.769 [2024-11-26 21:06:53.613882] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:02.769 [2024-11-26 21:06:53.613897] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:02.769 [2024-11-26 21:06:53.626774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:02.769 [2024-11-26 21:06:53.627184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.769 [2024-11-26 21:06:53.627219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:02.769 [2024-11-26 21:06:53.627238] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:02.769 [2024-11-26 21:06:53.627483] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:02.769 [2024-11-26 21:06:53.627741] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:02.769 [2024-11-26 21:06:53.627767] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:02.769 [2024-11-26 21:06:53.627783] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:02.769 [2024-11-26 21:06:53.627797] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:02.769 [2024-11-26 21:06:53.640645] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:02.769 [2024-11-26 21:06:53.641073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.769 [2024-11-26 21:06:53.641105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:02.769 [2024-11-26 21:06:53.641124] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:02.769 [2024-11-26 21:06:53.641362] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:02.769 [2024-11-26 21:06:53.641606] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:02.769 [2024-11-26 21:06:53.641630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:02.769 [2024-11-26 21:06:53.641646] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:02.769 [2024-11-26 21:06:53.641660] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:02.769 [2024-11-26 21:06:53.654513] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:02.770 [2024-11-26 21:06:53.654946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.770 [2024-11-26 21:06:53.654978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:02.770 [2024-11-26 21:06:53.654996] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:02.770 [2024-11-26 21:06:53.655234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:02.770 [2024-11-26 21:06:53.655478] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:02.770 [2024-11-26 21:06:53.655503] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:02.770 [2024-11-26 21:06:53.655519] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:02.770 [2024-11-26 21:06:53.655534] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:02.770 [2024-11-26 21:06:53.668429] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:02.770 [2024-11-26 21:06:53.668844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.770 [2024-11-26 21:06:53.668877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:02.770 [2024-11-26 21:06:53.668897] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:02.770 [2024-11-26 21:06:53.669149] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:02.770 [2024-11-26 21:06:53.669401] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:02.770 [2024-11-26 21:06:53.669433] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:02.770 [2024-11-26 21:06:53.669451] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:02.770 [2024-11-26 21:06:53.669467] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:02.770 [2024-11-26 21:06:53.682517] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:02.770 [2024-11-26 21:06:53.682967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.770 [2024-11-26 21:06:53.683002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:02.770 [2024-11-26 21:06:53.683021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:02.770 [2024-11-26 21:06:53.683268] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:02.770 [2024-11-26 21:06:53.683518] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:02.770 [2024-11-26 21:06:53.683550] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:02.770 [2024-11-26 21:06:53.683570] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:02.770 [2024-11-26 21:06:53.683586] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:02.770 [2024-11-26 21:06:53.696422] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:02.770 [2024-11-26 21:06:53.696866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.770 [2024-11-26 21:06:53.696901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:02.770 [2024-11-26 21:06:53.696920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:02.770 [2024-11-26 21:06:53.697170] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:02.770 [2024-11-26 21:06:53.697415] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:02.770 [2024-11-26 21:06:53.697441] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:02.770 [2024-11-26 21:06:53.697457] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:02.770 [2024-11-26 21:06:53.697472] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:03.030 [2024-11-26 21:06:53.710331] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:03.030 [2024-11-26 21:06:53.710747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.030 [2024-11-26 21:06:53.710780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:03.030 [2024-11-26 21:06:53.710798] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:03.030 [2024-11-26 21:06:53.711036] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:03.030 [2024-11-26 21:06:53.711278] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:03.030 [2024-11-26 21:06:53.711304] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:03.030 [2024-11-26 21:06:53.711320] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:03.030 [2024-11-26 21:06:53.711342] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:03.030 [2024-11-26 21:06:53.724199] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:03.030 [2024-11-26 21:06:53.724587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.030 [2024-11-26 21:06:53.724620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:03.030 [2024-11-26 21:06:53.724638] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:03.030 [2024-11-26 21:06:53.724886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:03.030 [2024-11-26 21:06:53.725131] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:03.030 [2024-11-26 21:06:53.725157] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:03.030 [2024-11-26 21:06:53.725174] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:03.030 [2024-11-26 21:06:53.725189] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:03.030 [2024-11-26 21:06:53.738239] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:03.030 [2024-11-26 21:06:53.738648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.030 [2024-11-26 21:06:53.738681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:03.030 [2024-11-26 21:06:53.738712] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:03.030 [2024-11-26 21:06:53.738952] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:03.030 [2024-11-26 21:06:53.739195] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:03.030 [2024-11-26 21:06:53.739220] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:03.030 [2024-11-26 21:06:53.739236] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:03.030 [2024-11-26 21:06:53.739252] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:03.030 [2024-11-26 21:06:53.752086] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:03.030 [2024-11-26 21:06:53.752494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.030 [2024-11-26 21:06:53.752526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:03.030 [2024-11-26 21:06:53.752545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:03.030 [2024-11-26 21:06:53.752798] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:03.030 [2024-11-26 21:06:53.753042] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:03.030 [2024-11-26 21:06:53.753067] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:03.030 [2024-11-26 21:06:53.753084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:03.030 [2024-11-26 21:06:53.753099] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:03.030 [2024-11-26 21:06:53.765961] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:03.030 [2024-11-26 21:06:53.766379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.030 [2024-11-26 21:06:53.766411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:03.030 [2024-11-26 21:06:53.766429] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:03.030 [2024-11-26 21:06:53.766667] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:03.030 [2024-11-26 21:06:53.766926] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:03.030 [2024-11-26 21:06:53.766953] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:03.030 [2024-11-26 21:06:53.766968] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:03.030 [2024-11-26 21:06:53.766983] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:03.030 [2024-11-26 21:06:53.779819] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:03.030 [2024-11-26 21:06:53.780228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.030 [2024-11-26 21:06:53.780260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:03.030 [2024-11-26 21:06:53.780278] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:03.030 [2024-11-26 21:06:53.780516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:03.030 [2024-11-26 21:06:53.780773] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:03.030 [2024-11-26 21:06:53.780799] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:03.030 [2024-11-26 21:06:53.780816] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:03.030 [2024-11-26 21:06:53.780831] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:03.030 [2024-11-26 21:06:53.793660] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:03.030 [2024-11-26 21:06:53.794069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.030 [2024-11-26 21:06:53.794102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:03.030 [2024-11-26 21:06:53.794121] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:03.030 [2024-11-26 21:06:53.794360] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:03.030 [2024-11-26 21:06:53.794605] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:03.030 [2024-11-26 21:06:53.794630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:03.030 [2024-11-26 21:06:53.794646] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:03.030 [2024-11-26 21:06:53.794662] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:03.030 [2024-11-26 21:06:53.807501] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:03.030 [2024-11-26 21:06:53.807894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.030 [2024-11-26 21:06:53.807927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:03.030 [2024-11-26 21:06:53.807945] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:03.030 [2024-11-26 21:06:53.808190] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:03.030 [2024-11-26 21:06:53.808433] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:03.030 [2024-11-26 21:06:53.808458] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:03.030 [2024-11-26 21:06:53.808474] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:03.030 [2024-11-26 21:06:53.808490] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:03.030 [2024-11-26 21:06:53.821533] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:03.030 [2024-11-26 21:06:53.821937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.030 [2024-11-26 21:06:53.821970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:03.030 [2024-11-26 21:06:53.821988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:03.030 [2024-11-26 21:06:53.822225] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:03.030 [2024-11-26 21:06:53.822469] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:03.030 [2024-11-26 21:06:53.822494] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:03.030 [2024-11-26 21:06:53.822510] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:03.030 [2024-11-26 21:06:53.822526] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:03.030 7069.00 IOPS, 27.61 MiB/s [2024-11-26T20:06:53.968Z] [2024-11-26 21:06:53.835461] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:03.030 [2024-11-26 21:06:53.835893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.030 [2024-11-26 21:06:53.835927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:03.030 [2024-11-26 21:06:53.835945] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:03.030 [2024-11-26 21:06:53.836183] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:03.030 [2024-11-26 21:06:53.836426] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:03.031 [2024-11-26 21:06:53.836452] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:03.031 [2024-11-26 21:06:53.836469] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:03.031 [2024-11-26 21:06:53.836484] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:03.031 [2024-11-26 21:06:53.849326] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:03.031 [2024-11-26 21:06:53.849720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.031 [2024-11-26 21:06:53.849752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:03.031 [2024-11-26 21:06:53.849770] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:03.031 [2024-11-26 21:06:53.850009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:03.031 [2024-11-26 21:06:53.850251] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:03.031 [2024-11-26 21:06:53.850286] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:03.031 [2024-11-26 21:06:53.850303] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:03.031 [2024-11-26 21:06:53.850319] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:03.031 [2024-11-26 21:06:53.863369] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:03.031 [2024-11-26 21:06:53.863791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.031 [2024-11-26 21:06:53.863824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:03.031 [2024-11-26 21:06:53.863842] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:03.031 [2024-11-26 21:06:53.864082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:03.031 [2024-11-26 21:06:53.864325] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:03.031 [2024-11-26 21:06:53.864350] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:03.031 [2024-11-26 21:06:53.864367] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:03.031 [2024-11-26 21:06:53.864382] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:03.031 [2024-11-26 21:06:53.877226] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:03.031 [2024-11-26 21:06:53.877606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.031 [2024-11-26 21:06:53.877638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:03.031 [2024-11-26 21:06:53.877657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:03.031 [2024-11-26 21:06:53.877909] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:03.031 [2024-11-26 21:06:53.878152] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:03.031 [2024-11-26 21:06:53.878178] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:03.031 [2024-11-26 21:06:53.878194] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:03.031 [2024-11-26 21:06:53.878210] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:03.031 [2024-11-26 21:06:53.891262] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:03.031 [2024-11-26 21:06:53.891676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.031 [2024-11-26 21:06:53.891718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:03.031 [2024-11-26 21:06:53.891737] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:03.031 [2024-11-26 21:06:53.891976] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:03.031 [2024-11-26 21:06:53.892219] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:03.031 [2024-11-26 21:06:53.892245] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:03.031 [2024-11-26 21:06:53.892261] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:03.031 [2024-11-26 21:06:53.892282] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:03.031 [2024-11-26 21:06:53.905118] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:03.031 [2024-11-26 21:06:53.905525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.031 [2024-11-26 21:06:53.905557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:03.031 [2024-11-26 21:06:53.905575] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:03.031 [2024-11-26 21:06:53.905828] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:03.031 [2024-11-26 21:06:53.906073] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:03.031 [2024-11-26 21:06:53.906098] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:03.031 [2024-11-26 21:06:53.906115] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:03.031 [2024-11-26 21:06:53.906130] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:03.031 [2024-11-26 21:06:53.918985] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:03.031 [2024-11-26 21:06:53.919382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.031 [2024-11-26 21:06:53.919415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:03.031 [2024-11-26 21:06:53.919433] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:03.031 [2024-11-26 21:06:53.919672] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:03.031 [2024-11-26 21:06:53.919929] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:03.031 [2024-11-26 21:06:53.919954] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:03.031 [2024-11-26 21:06:53.919969] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:03.031 [2024-11-26 21:06:53.919984] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:03.031 [2024-11-26 21:06:53.932860] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:03.031 [2024-11-26 21:06:53.933281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.031 [2024-11-26 21:06:53.933314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:03.031 [2024-11-26 21:06:53.933333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:03.031 [2024-11-26 21:06:53.933570] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:03.031 [2024-11-26 21:06:53.933838] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:03.031 [2024-11-26 21:06:53.933864] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:03.031 [2024-11-26 21:06:53.933880] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:03.031 [2024-11-26 21:06:53.933895] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:03.031 [2024-11-26 21:06:53.946752] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:03.031 [2024-11-26 21:06:53.947170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.031 [2024-11-26 21:06:53.947202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:03.031 [2024-11-26 21:06:53.947220] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:03.031 [2024-11-26 21:06:53.947458] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:03.031 [2024-11-26 21:06:53.947714] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:03.031 [2024-11-26 21:06:53.947751] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:03.031 [2024-11-26 21:06:53.947767] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:03.031 [2024-11-26 21:06:53.947782] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:03.031 [2024-11-26 21:06:53.960623] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:03.031 [2024-11-26 21:06:53.961082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.031 [2024-11-26 21:06:53.961114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:03.031 [2024-11-26 21:06:53.961132] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:03.031 [2024-11-26 21:06:53.961370] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:03.031 [2024-11-26 21:06:53.961614] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:03.031 [2024-11-26 21:06:53.961639] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:03.031 [2024-11-26 21:06:53.961656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:03.031 [2024-11-26 21:06:53.961671] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:03.291 [2024-11-26 21:06:53.974529] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:03.291 [2024-11-26 21:06:53.974969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.291 [2024-11-26 21:06:53.975000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:03.291 [2024-11-26 21:06:53.975019] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:03.291 [2024-11-26 21:06:53.975257] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:03.291 [2024-11-26 21:06:53.975502] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:03.291 [2024-11-26 21:06:53.975527] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:03.291 [2024-11-26 21:06:53.975543] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:03.291 [2024-11-26 21:06:53.975558] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:03.291 [2024-11-26 21:06:53.988428] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:03.291 [2024-11-26 21:06:53.988857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.291 [2024-11-26 21:06:53.988889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:03.291 [2024-11-26 21:06:53.988914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:03.291 [2024-11-26 21:06:53.989153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:03.291 [2024-11-26 21:06:53.989396] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:03.291 [2024-11-26 21:06:53.989420] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:03.291 [2024-11-26 21:06:53.989436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:03.291 [2024-11-26 21:06:53.989451] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:03.291 [2024-11-26 21:06:54.002313] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:03.291 [2024-11-26 21:06:54.002702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.291 [2024-11-26 21:06:54.002735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:03.291 [2024-11-26 21:06:54.002754] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:03.291 [2024-11-26 21:06:54.002992] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:03.291 [2024-11-26 21:06:54.003236] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:03.291 [2024-11-26 21:06:54.003261] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:03.291 [2024-11-26 21:06:54.003277] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:03.291 [2024-11-26 21:06:54.003292] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:03.292 [2024-11-26 21:06:54.016180] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:03.292 [2024-11-26 21:06:54.016578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.292 [2024-11-26 21:06:54.016610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:03.292 [2024-11-26 21:06:54.016628] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:03.292 [2024-11-26 21:06:54.016877] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:03.292 [2024-11-26 21:06:54.017122] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:03.292 [2024-11-26 21:06:54.017147] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:03.292 [2024-11-26 21:06:54.017163] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:03.292 [2024-11-26 21:06:54.017177] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:03.292 [2024-11-26 21:06:54.030066] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:03.292 [2024-11-26 21:06:54.030476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.292 [2024-11-26 21:06:54.030508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:03.292 [2024-11-26 21:06:54.030526] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:03.292 [2024-11-26 21:06:54.030776] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:03.292 [2024-11-26 21:06:54.031020] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:03.292 [2024-11-26 21:06:54.031051] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:03.292 [2024-11-26 21:06:54.031068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:03.292 [2024-11-26 21:06:54.031083] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:03.292 [2024-11-26 21:06:54.043954] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:03.292 [2024-11-26 21:06:54.044363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.292 [2024-11-26 21:06:54.044395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:03.292 [2024-11-26 21:06:54.044413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:03.292 [2024-11-26 21:06:54.044650] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:03.292 [2024-11-26 21:06:54.044906] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:03.292 [2024-11-26 21:06:54.044932] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:03.292 [2024-11-26 21:06:54.044948] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:03.292 [2024-11-26 21:06:54.044963] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:03.292 [2024-11-26 21:06:54.057829] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:03.292 [2024-11-26 21:06:54.058210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.292 [2024-11-26 21:06:54.058242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:03.292 [2024-11-26 21:06:54.058260] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:03.292 [2024-11-26 21:06:54.058499] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:03.292 [2024-11-26 21:06:54.058768] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:03.292 [2024-11-26 21:06:54.058793] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:03.292 [2024-11-26 21:06:54.058810] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:03.292 [2024-11-26 21:06:54.058824] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:03.292 [2024-11-26 21:06:54.071680] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:03.292 [2024-11-26 21:06:54.072097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.292 [2024-11-26 21:06:54.072130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:03.292 [2024-11-26 21:06:54.072148] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:03.292 [2024-11-26 21:06:54.072387] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:03.292 [2024-11-26 21:06:54.072630] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:03.292 [2024-11-26 21:06:54.072655] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:03.292 [2024-11-26 21:06:54.072671] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:03.292 [2024-11-26 21:06:54.072703] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:03.292 [2024-11-26 21:06:54.085608] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:03.292 [2024-11-26 21:06:54.086023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.292 [2024-11-26 21:06:54.086056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:03.292 [2024-11-26 21:06:54.086075] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:03.292 [2024-11-26 21:06:54.086313] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:03.292 [2024-11-26 21:06:54.086557] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:03.292 [2024-11-26 21:06:54.086582] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:03.292 [2024-11-26 21:06:54.086598] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:03.292 [2024-11-26 21:06:54.086612] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:03.292 [2024-11-26 21:06:54.099467] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:03.292 [2024-11-26 21:06:54.099868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.292 [2024-11-26 21:06:54.099900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:03.292 [2024-11-26 21:06:54.099919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:03.292 [2024-11-26 21:06:54.100157] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:03.292 [2024-11-26 21:06:54.100401] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:03.292 [2024-11-26 21:06:54.100426] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:03.292 [2024-11-26 21:06:54.100443] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:03.292 [2024-11-26 21:06:54.100458] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:03.292 [2024-11-26 21:06:54.113330] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:03.292 [2024-11-26 21:06:54.113742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.292 [2024-11-26 21:06:54.113775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:03.292 [2024-11-26 21:06:54.113793] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:03.292 [2024-11-26 21:06:54.114031] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:03.292 [2024-11-26 21:06:54.114275] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:03.292 [2024-11-26 21:06:54.114299] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:03.292 [2024-11-26 21:06:54.114315] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:03.292 [2024-11-26 21:06:54.114330] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:03.292 [2024-11-26 21:06:54.127215] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:03.292 [2024-11-26 21:06:54.127651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.292 [2024-11-26 21:06:54.127683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:03.292 [2024-11-26 21:06:54.127714] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:03.292 [2024-11-26 21:06:54.127953] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:03.292 [2024-11-26 21:06:54.128197] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:03.292 [2024-11-26 21:06:54.128221] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:03.292 [2024-11-26 21:06:54.128237] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:03.292 [2024-11-26 21:06:54.128252] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:03.292 [2024-11-26 21:06:54.141115] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:03.292 [2024-11-26 21:06:54.141497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.292 [2024-11-26 21:06:54.141530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:03.292 [2024-11-26 21:06:54.141549] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:03.292 [2024-11-26 21:06:54.141802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:03.292 [2024-11-26 21:06:54.142046] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:03.292 [2024-11-26 21:06:54.142071] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:03.292 [2024-11-26 21:06:54.142087] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:03.293 [2024-11-26 21:06:54.142101] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:03.293 [2024-11-26 21:06:54.154978] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:03.293 [2024-11-26 21:06:54.155399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.293 [2024-11-26 21:06:54.155441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:03.293 [2024-11-26 21:06:54.155459] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:03.293 [2024-11-26 21:06:54.155710] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:03.293 [2024-11-26 21:06:54.155953] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:03.293 [2024-11-26 21:06:54.155980] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:03.293 [2024-11-26 21:06:54.155996] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:03.293 [2024-11-26 21:06:54.156011] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:03.293 [2024-11-26 21:06:54.168865] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:03.293 [2024-11-26 21:06:54.169388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.293 [2024-11-26 21:06:54.169448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:03.293 [2024-11-26 21:06:54.169466] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:03.293 [2024-11-26 21:06:54.169724] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:03.293 [2024-11-26 21:06:54.169968] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:03.293 [2024-11-26 21:06:54.169994] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:03.293 [2024-11-26 21:06:54.170010] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:03.293 [2024-11-26 21:06:54.170024] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:03.293 [2024-11-26 21:06:54.182880] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:03.293 [2024-11-26 21:06:54.183268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.293 [2024-11-26 21:06:54.183301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:03.293 [2024-11-26 21:06:54.183320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:03.293 [2024-11-26 21:06:54.183559] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:03.293 [2024-11-26 21:06:54.183818] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:03.293 [2024-11-26 21:06:54.183845] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:03.293 [2024-11-26 21:06:54.183862] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:03.293 [2024-11-26 21:06:54.183877] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:03.293 [2024-11-26 21:06:54.196730] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:03.293 [2024-11-26 21:06:54.197150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.293 [2024-11-26 21:06:54.197183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:03.293 [2024-11-26 21:06:54.197202] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:03.293 [2024-11-26 21:06:54.197442] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:03.293 [2024-11-26 21:06:54.197699] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:03.293 [2024-11-26 21:06:54.197725] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:03.293 [2024-11-26 21:06:54.197741] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:03.293 [2024-11-26 21:06:54.197757] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:03.293 [2024-11-26 21:06:54.210619] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:03.293 [2024-11-26 21:06:54.211061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.293 [2024-11-26 21:06:54.211094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:03.293 [2024-11-26 21:06:54.211113] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:03.293 [2024-11-26 21:06:54.211352] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:03.293 [2024-11-26 21:06:54.211598] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:03.293 [2024-11-26 21:06:54.211628] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:03.293 [2024-11-26 21:06:54.211645] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:03.293 [2024-11-26 21:06:54.211661] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:03.293 [2024-11-26 21:06:54.224520] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:03.293 [2024-11-26 21:06:54.224942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.293 [2024-11-26 21:06:54.224975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:03.293 [2024-11-26 21:06:54.224993] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:03.293 [2024-11-26 21:06:54.225230] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:03.293 [2024-11-26 21:06:54.225474] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:03.293 [2024-11-26 21:06:54.225499] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:03.293 [2024-11-26 21:06:54.225515] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:03.293 [2024-11-26 21:06:54.225531] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:03.552 [2024-11-26 21:06:54.238380] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:03.552 [2024-11-26 21:06:54.238790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.552 [2024-11-26 21:06:54.238822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:03.552 [2024-11-26 21:06:54.238840] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:03.552 [2024-11-26 21:06:54.239079] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:03.552 [2024-11-26 21:06:54.239322] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:03.552 [2024-11-26 21:06:54.239348] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:03.552 [2024-11-26 21:06:54.239364] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:03.552 [2024-11-26 21:06:54.239380] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:03.552 [2024-11-26 21:06:54.252218] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:03.552 [2024-11-26 21:06:54.252628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.552 [2024-11-26 21:06:54.252660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:03.552 [2024-11-26 21:06:54.252678] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:03.552 [2024-11-26 21:06:54.252931] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:03.552 [2024-11-26 21:06:54.253174] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:03.552 [2024-11-26 21:06:54.253199] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:03.553 [2024-11-26 21:06:54.253215] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:03.553 [2024-11-26 21:06:54.253235] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:03.553 [2024-11-26 21:06:54.266071] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:03.553 [2024-11-26 21:06:54.266459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.553 [2024-11-26 21:06:54.266493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:03.553 [2024-11-26 21:06:54.266511] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:03.553 [2024-11-26 21:06:54.266764] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:03.553 [2024-11-26 21:06:54.267008] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:03.553 [2024-11-26 21:06:54.267033] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:03.553 [2024-11-26 21:06:54.267050] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:03.553 [2024-11-26 21:06:54.267065] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:03.553 [2024-11-26 21:06:54.279907] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:03.553 [2024-11-26 21:06:54.280324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.553 [2024-11-26 21:06:54.280357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:03.553 [2024-11-26 21:06:54.280376] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:03.553 [2024-11-26 21:06:54.280615] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:03.553 [2024-11-26 21:06:54.280872] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:03.553 [2024-11-26 21:06:54.280899] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:03.553 [2024-11-26 21:06:54.280916] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:03.553 [2024-11-26 21:06:54.280931] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:03.553 [2024-11-26 21:06:54.293779] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:03.553 [2024-11-26 21:06:54.294176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.553 [2024-11-26 21:06:54.294209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:03.553 [2024-11-26 21:06:54.294227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:03.553 [2024-11-26 21:06:54.294465] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:03.553 [2024-11-26 21:06:54.294721] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:03.553 [2024-11-26 21:06:54.294747] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:03.553 [2024-11-26 21:06:54.294763] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:03.553 [2024-11-26 21:06:54.294778] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:03.553 [2024-11-26 21:06:54.307610] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:03.553 [2024-11-26 21:06:54.308010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.553 [2024-11-26 21:06:54.308042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:03.553 [2024-11-26 21:06:54.308061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:03.553 [2024-11-26 21:06:54.308298] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:03.553 [2024-11-26 21:06:54.308541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:03.553 [2024-11-26 21:06:54.308566] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:03.553 [2024-11-26 21:06:54.308583] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:03.553 [2024-11-26 21:06:54.308598] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:03.553 [2024-11-26 21:06:54.321654] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:03.553 [2024-11-26 21:06:54.322092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.553 [2024-11-26 21:06:54.322125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:03.553 [2024-11-26 21:06:54.322143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:03.553 [2024-11-26 21:06:54.322381] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:03.553 [2024-11-26 21:06:54.322624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:03.553 [2024-11-26 21:06:54.322650] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:03.553 [2024-11-26 21:06:54.322666] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:03.553 [2024-11-26 21:06:54.322681] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:03.553 [2024-11-26 21:06:54.335542] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:03.553 [2024-11-26 21:06:54.335967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.553 [2024-11-26 21:06:54.336000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:03.553 [2024-11-26 21:06:54.336018] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:03.553 [2024-11-26 21:06:54.336256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:03.553 [2024-11-26 21:06:54.336499] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:03.553 [2024-11-26 21:06:54.336525] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:03.553 [2024-11-26 21:06:54.336541] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:03.553 [2024-11-26 21:06:54.336556] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:03.553 [2024-11-26 21:06:54.349401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:03.553 [2024-11-26 21:06:54.349821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.553 [2024-11-26 21:06:54.349854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:03.553 [2024-11-26 21:06:54.349874] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:03.553 [2024-11-26 21:06:54.350119] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:03.553 [2024-11-26 21:06:54.350361] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:03.553 [2024-11-26 21:06:54.350385] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:03.553 [2024-11-26 21:06:54.350401] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:03.553 [2024-11-26 21:06:54.350415] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:03.553 [2024-11-26 21:06:54.363270] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:03.553 [2024-11-26 21:06:54.363678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.553 [2024-11-26 21:06:54.363787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:03.553 [2024-11-26 21:06:54.363807] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:03.553 [2024-11-26 21:06:54.364047] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:03.553 [2024-11-26 21:06:54.364290] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:03.553 [2024-11-26 21:06:54.364315] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:03.554 [2024-11-26 21:06:54.364332] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:03.554 [2024-11-26 21:06:54.364347] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:03.554 [2024-11-26 21:06:54.377192] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:03.554 [2024-11-26 21:06:54.377604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.554 [2024-11-26 21:06:54.377637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:03.554 [2024-11-26 21:06:54.377655] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:03.554 [2024-11-26 21:06:54.377902] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:03.554 [2024-11-26 21:06:54.378145] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:03.554 [2024-11-26 21:06:54.378171] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:03.554 [2024-11-26 21:06:54.378188] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:03.554 [2024-11-26 21:06:54.378203] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:03.554 [2024-11-26 21:06:54.391097] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:03.554 [2024-11-26 21:06:54.391480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.554 [2024-11-26 21:06:54.391513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:03.554 [2024-11-26 21:06:54.391532] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:03.554 [2024-11-26 21:06:54.391783] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:03.554 [2024-11-26 21:06:54.392028] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:03.554 [2024-11-26 21:06:54.392059] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:03.554 [2024-11-26 21:06:54.392076] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:03.554 [2024-11-26 21:06:54.392092] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:03.554 [2024-11-26 21:06:54.404944] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:03.554 [2024-11-26 21:06:54.405342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.554 [2024-11-26 21:06:54.405374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:03.554 [2024-11-26 21:06:54.405392] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:03.554 [2024-11-26 21:06:54.405630] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:03.554 [2024-11-26 21:06:54.405886] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:03.554 [2024-11-26 21:06:54.405911] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:03.554 [2024-11-26 21:06:54.405927] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:03.554 [2024-11-26 21:06:54.405941] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:03.554 [2024-11-26 21:06:54.418801] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:03.554 [2024-11-26 21:06:54.419326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.554 [2024-11-26 21:06:54.419384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:03.554 [2024-11-26 21:06:54.419402] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:03.554 [2024-11-26 21:06:54.419639] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:03.554 [2024-11-26 21:06:54.419894] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:03.554 [2024-11-26 21:06:54.419921] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:03.554 [2024-11-26 21:06:54.419937] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:03.554 [2024-11-26 21:06:54.419952] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:03.554 [2024-11-26 21:06:54.432818] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:03.554 [2024-11-26 21:06:54.433230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.554 [2024-11-26 21:06:54.433262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:03.554 [2024-11-26 21:06:54.433280] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:03.554 [2024-11-26 21:06:54.433518] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:03.554 [2024-11-26 21:06:54.433775] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:03.554 [2024-11-26 21:06:54.433801] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:03.554 [2024-11-26 21:06:54.433818] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:03.554 [2024-11-26 21:06:54.433838] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:03.554 [2024-11-26 21:06:54.446670] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:03.554 [2024-11-26 21:06:54.447096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.554 [2024-11-26 21:06:54.447128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:03.554 [2024-11-26 21:06:54.447147] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:03.554 [2024-11-26 21:06:54.447385] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:03.554 [2024-11-26 21:06:54.447629] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:03.554 [2024-11-26 21:06:54.447653] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:03.554 [2024-11-26 21:06:54.447669] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:03.554 [2024-11-26 21:06:54.447694] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:03.554 [2024-11-26 21:06:54.460549] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:03.554 [2024-11-26 21:06:54.460975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.554 [2024-11-26 21:06:54.461008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:03.554 [2024-11-26 21:06:54.461026] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:03.554 [2024-11-26 21:06:54.461264] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:03.554 [2024-11-26 21:06:54.461506] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:03.554 [2024-11-26 21:06:54.461531] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:03.554 [2024-11-26 21:06:54.461547] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:03.554 [2024-11-26 21:06:54.461562] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:03.554 [2024-11-26 21:06:54.474399] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:03.554 [2024-11-26 21:06:54.474824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.554 [2024-11-26 21:06:54.474856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:03.554 [2024-11-26 21:06:54.474875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:03.554 [2024-11-26 21:06:54.475114] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:03.554 [2024-11-26 21:06:54.475358] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:03.554 [2024-11-26 21:06:54.475384] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:03.554 [2024-11-26 21:06:54.475399] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:03.554 [2024-11-26 21:06:54.475415] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:03.555 [2024-11-26 21:06:54.488269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:03.555 [2024-11-26 21:06:54.488679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.555 [2024-11-26 21:06:54.488719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:03.555 [2024-11-26 21:06:54.488738] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:03.555 [2024-11-26 21:06:54.488976] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:03.814 [2024-11-26 21:06:54.489221] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:03.814 [2024-11-26 21:06:54.489247] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:03.814 [2024-11-26 21:06:54.489263] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:03.814 [2024-11-26 21:06:54.489278] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:03.814 [2024-11-26 21:06:54.502128] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:03.814 [2024-11-26 21:06:54.502575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.814 [2024-11-26 21:06:54.502626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:03.814 [2024-11-26 21:06:54.502645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:03.814 [2024-11-26 21:06:54.502891] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:03.814 [2024-11-26 21:06:54.503135] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:03.814 [2024-11-26 21:06:54.503160] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:03.814 [2024-11-26 21:06:54.503176] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:03.814 [2024-11-26 21:06:54.503191] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:03.814 [2024-11-26 21:06:54.516054] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:03.814 [2024-11-26 21:06:54.516477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.814 [2024-11-26 21:06:54.516510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:03.814 [2024-11-26 21:06:54.516528] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:03.814 [2024-11-26 21:06:54.516780] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:03.814 [2024-11-26 21:06:54.517023] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:03.814 [2024-11-26 21:06:54.517049] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:03.815 [2024-11-26 21:06:54.517065] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:03.815 [2024-11-26 21:06:54.517080] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:03.815 [2024-11-26 21:06:54.529945] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:03.815 [2024-11-26 21:06:54.530329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.815 [2024-11-26 21:06:54.530362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:03.815 [2024-11-26 21:06:54.530380] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:03.815 [2024-11-26 21:06:54.530628] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:03.815 [2024-11-26 21:06:54.530886] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:03.815 [2024-11-26 21:06:54.530912] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:03.815 [2024-11-26 21:06:54.530929] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:03.815 [2024-11-26 21:06:54.530944] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:03.815 [2024-11-26 21:06:54.543995] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:03.815 [2024-11-26 21:06:54.544412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.815 [2024-11-26 21:06:54.544445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:03.815 [2024-11-26 21:06:54.544463] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:03.815 [2024-11-26 21:06:54.544716] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:03.815 [2024-11-26 21:06:54.544971] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:03.815 [2024-11-26 21:06:54.545002] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:03.815 [2024-11-26 21:06:54.545019] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:03.815 [2024-11-26 21:06:54.545034] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:03.815 [2024-11-26 21:06:54.557877] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:03.815 [2024-11-26 21:06:54.558259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.815 [2024-11-26 21:06:54.558291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:03.815 [2024-11-26 21:06:54.558309] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:03.815 [2024-11-26 21:06:54.558547] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:03.815 [2024-11-26 21:06:54.558806] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:03.815 [2024-11-26 21:06:54.558832] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:03.815 [2024-11-26 21:06:54.558849] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:03.815 [2024-11-26 21:06:54.558864] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:03.815 [2024-11-26 21:06:54.571909] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:03.815 [2024-11-26 21:06:54.572307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.815 [2024-11-26 21:06:54.572339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:03.815 [2024-11-26 21:06:54.572357] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:03.815 [2024-11-26 21:06:54.572595] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:03.815 [2024-11-26 21:06:54.572850] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:03.815 [2024-11-26 21:06:54.572882] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:03.815 [2024-11-26 21:06:54.572900] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:03.815 [2024-11-26 21:06:54.572917] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:03.815 [2024-11-26 21:06:54.585745] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:03.815 [2024-11-26 21:06:54.586155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.815 [2024-11-26 21:06:54.586188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:03.815 [2024-11-26 21:06:54.586207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:03.815 [2024-11-26 21:06:54.586446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:03.815 [2024-11-26 21:06:54.586702] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:03.815 [2024-11-26 21:06:54.586728] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:03.815 [2024-11-26 21:06:54.586745] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:03.815 [2024-11-26 21:06:54.586760] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:03.815 [2024-11-26 21:06:54.599586] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:03.815 [2024-11-26 21:06:54.599992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.815 [2024-11-26 21:06:54.600025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:03.815 [2024-11-26 21:06:54.600043] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:03.815 [2024-11-26 21:06:54.600283] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:03.815 [2024-11-26 21:06:54.600527] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:03.815 [2024-11-26 21:06:54.600552] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:03.815 [2024-11-26 21:06:54.600568] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:03.815 [2024-11-26 21:06:54.600583] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:03.815 [2024-11-26 21:06:54.613175] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:03.815 [2024-11-26 21:06:54.613534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.815 [2024-11-26 21:06:54.613564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:03.815 [2024-11-26 21:06:54.613580] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:03.815 [2024-11-26 21:06:54.613816] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:03.815 [2024-11-26 21:06:54.614050] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:03.815 [2024-11-26 21:06:54.614072] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:03.815 [2024-11-26 21:06:54.614084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:03.815 [2024-11-26 21:06:54.614101] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:03.815 [2024-11-26 21:06:54.626511] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:03.815 [2024-11-26 21:06:54.626900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.815 [2024-11-26 21:06:54.626930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:03.815 [2024-11-26 21:06:54.626946] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:03.815 [2024-11-26 21:06:54.627187] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:03.815 [2024-11-26 21:06:54.627397] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:03.815 [2024-11-26 21:06:54.627418] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:03.815 [2024-11-26 21:06:54.627432] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:03.815 [2024-11-26 21:06:54.627444] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:03.815 [2024-11-26 21:06:54.639792] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:03.815 [2024-11-26 21:06:54.640150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.815 [2024-11-26 21:06:54.640179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:03.815 [2024-11-26 21:06:54.640196] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:03.815 [2024-11-26 21:06:54.640433] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:03.815 [2024-11-26 21:06:54.640646] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:03.815 [2024-11-26 21:06:54.640682] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:03.815 [2024-11-26 21:06:54.640706] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:03.815 [2024-11-26 21:06:54.640721] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:03.815 [2024-11-26 21:06:54.652959] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:03.815 [2024-11-26 21:06:54.653412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.815 [2024-11-26 21:06:54.653442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:03.815 [2024-11-26 21:06:54.653458] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:03.815 [2024-11-26 21:06:54.653722] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:03.815 [2024-11-26 21:06:54.653929] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:03.816 [2024-11-26 21:06:54.653950] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:03.816 [2024-11-26 21:06:54.653979] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:03.816 [2024-11-26 21:06:54.653993] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:03.816 [2024-11-26 21:06:54.666166] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:03.816 [2024-11-26 21:06:54.666583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.816 [2024-11-26 21:06:54.666612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:03.816 [2024-11-26 21:06:54.666628] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:03.816 [2024-11-26 21:06:54.666893] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:03.816 [2024-11-26 21:06:54.667124] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:03.816 [2024-11-26 21:06:54.667145] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:03.816 [2024-11-26 21:06:54.667158] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:03.816 [2024-11-26 21:06:54.667170] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:03.816 [2024-11-26 21:06:54.679373] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:03.816 [2024-11-26 21:06:54.679807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.816 [2024-11-26 21:06:54.679837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:03.816 [2024-11-26 21:06:54.679853] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:03.816 [2024-11-26 21:06:54.680094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:03.816 [2024-11-26 21:06:54.680303] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:03.816 [2024-11-26 21:06:54.680324] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:03.816 [2024-11-26 21:06:54.680337] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:03.816 [2024-11-26 21:06:54.680349] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:03.816 [2024-11-26 21:06:54.692654] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:03.816 [2024-11-26 21:06:54.693028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.816 [2024-11-26 21:06:54.693071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:03.816 [2024-11-26 21:06:54.693087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:03.816 [2024-11-26 21:06:54.693288] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:03.816 [2024-11-26 21:06:54.693512] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:03.816 [2024-11-26 21:06:54.693533] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:03.816 [2024-11-26 21:06:54.693547] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:03.816 [2024-11-26 21:06:54.693559] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:03.816 [2024-11-26 21:06:54.706046] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:03.816 [2024-11-26 21:06:54.706448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.816 [2024-11-26 21:06:54.706476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:03.816 [2024-11-26 21:06:54.706492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:03.816 [2024-11-26 21:06:54.706745] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:03.816 [2024-11-26 21:06:54.706946] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:03.816 [2024-11-26 21:06:54.706966] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:03.816 [2024-11-26 21:06:54.706979] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:03.816 [2024-11-26 21:06:54.706991] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:03.816 [2024-11-26 21:06:54.719462] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:03.816 [2024-11-26 21:06:54.719899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.816 [2024-11-26 21:06:54.719929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:03.816 [2024-11-26 21:06:54.719946] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:03.816 [2024-11-26 21:06:54.720187] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:03.816 [2024-11-26 21:06:54.720402] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:03.816 [2024-11-26 21:06:54.720423] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:03.816 [2024-11-26 21:06:54.720436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:03.816 [2024-11-26 21:06:54.720449] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:03.816 [2024-11-26 21:06:54.732913] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:03.816 [2024-11-26 21:06:54.733315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.816 [2024-11-26 21:06:54.733344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:03.816 [2024-11-26 21:06:54.733361] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:03.816 [2024-11-26 21:06:54.733606] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:03.816 [2024-11-26 21:06:54.733855] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:03.816 [2024-11-26 21:06:54.733879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:03.816 [2024-11-26 21:06:54.733894] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:03.816 [2024-11-26 21:06:54.733907] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:03.816 [2024-11-26 21:06:54.746411] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:03.816 [2024-11-26 21:06:54.746833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.816 [2024-11-26 21:06:54.746866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:03.816 [2024-11-26 21:06:54.746888] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:03.816 [2024-11-26 21:06:54.747124] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:03.816 [2024-11-26 21:06:54.747366] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:03.816 [2024-11-26 21:06:54.747394] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:03.816 [2024-11-26 21:06:54.747424] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:03.816 [2024-11-26 21:06:54.747439] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:04.076 [2024-11-26 21:06:54.759956] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:04.076 [2024-11-26 21:06:54.760367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.076 [2024-11-26 21:06:54.760398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:04.076 [2024-11-26 21:06:54.760415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:04.076 [2024-11-26 21:06:54.760660] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:04.076 [2024-11-26 21:06:54.760906] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:04.076 [2024-11-26 21:06:54.760928] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:04.076 [2024-11-26 21:06:54.760942] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:04.076 [2024-11-26 21:06:54.760956] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:04.076 [2024-11-26 21:06:54.773411] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:04.076 [2024-11-26 21:06:54.773737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.076 [2024-11-26 21:06:54.773767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:04.076 [2024-11-26 21:06:54.773784] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:04.076 [2024-11-26 21:06:54.774014] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:04.076 [2024-11-26 21:06:54.774230] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:04.076 [2024-11-26 21:06:54.774251] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:04.076 [2024-11-26 21:06:54.774264] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:04.076 [2024-11-26 21:06:54.774276] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:04.076 [2024-11-26 21:06:54.786752] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:04.076 [2024-11-26 21:06:54.787181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.076 [2024-11-26 21:06:54.787209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:04.076 [2024-11-26 21:06:54.787225] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:04.076 [2024-11-26 21:06:54.787460] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:04.076 [2024-11-26 21:06:54.787671] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:04.076 [2024-11-26 21:06:54.787727] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:04.076 [2024-11-26 21:06:54.787741] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:04.076 [2024-11-26 21:06:54.787760] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:04.076 [2024-11-26 21:06:54.800147] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:04.076 [2024-11-26 21:06:54.800518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.076 [2024-11-26 21:06:54.800546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:04.076 [2024-11-26 21:06:54.800562] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:04.076 [2024-11-26 21:06:54.800831] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:04.076 [2024-11-26 21:06:54.801050] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:04.076 [2024-11-26 21:06:54.801070] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:04.076 [2024-11-26 21:06:54.801083] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:04.076 [2024-11-26 21:06:54.801095] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:04.076 [2024-11-26 21:06:54.813507] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:04.076 [2024-11-26 21:06:54.813890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.076 [2024-11-26 21:06:54.813924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:04.076 [2024-11-26 21:06:54.813942] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:04.076 [2024-11-26 21:06:54.814195] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:04.076 [2024-11-26 21:06:54.814396] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:04.076 [2024-11-26 21:06:54.814418] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:04.076 [2024-11-26 21:06:54.814432] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:04.076 [2024-11-26 21:06:54.814444] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:04.076 [2024-11-26 21:06:54.826904] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:04.076 [2024-11-26 21:06:54.827312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.076 [2024-11-26 21:06:54.827348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:04.076 [2024-11-26 21:06:54.827369] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:04.076 [2024-11-26 21:06:54.827629] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:04.076 [2024-11-26 21:06:54.827877] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:04.076 [2024-11-26 21:06:54.827904] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:04.076 [2024-11-26 21:06:54.827922] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:04.076 [2024-11-26 21:06:54.827937] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:04.076 5301.75 IOPS, 20.71 MiB/s [2024-11-26T20:06:55.014Z] [2024-11-26 21:06:54.840278] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:04.076 [2024-11-26 21:06:54.840662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.076 [2024-11-26 21:06:54.840702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:04.076 [2024-11-26 21:06:54.840733] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:04.076 [2024-11-26 21:06:54.840980] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:04.076 [2024-11-26 21:06:54.841195] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:04.076 [2024-11-26 21:06:54.841216] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:04.077 [2024-11-26 21:06:54.841230] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:04.077 [2024-11-26 21:06:54.841242] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:04.077 [2024-11-26 21:06:54.853412] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:04.077 [2024-11-26 21:06:54.853804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.077 [2024-11-26 21:06:54.853833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:04.077 [2024-11-26 21:06:54.853850] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:04.077 [2024-11-26 21:06:54.854091] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:04.077 [2024-11-26 21:06:54.854302] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:04.077 [2024-11-26 21:06:54.854323] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:04.077 [2024-11-26 21:06:54.854336] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:04.077 [2024-11-26 21:06:54.854348] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:04.077 [2024-11-26 21:06:54.866994] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:04.077 [2024-11-26 21:06:54.867383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.077 [2024-11-26 21:06:54.867411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:04.077 [2024-11-26 21:06:54.867427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:04.077 [2024-11-26 21:06:54.867640] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:04.077 [2024-11-26 21:06:54.867888] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:04.077 [2024-11-26 21:06:54.867911] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:04.077 [2024-11-26 21:06:54.867926] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:04.077 [2024-11-26 21:06:54.867939] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:04.077 [2024-11-26 21:06:54.880344] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:04.077 [2024-11-26 21:06:54.880647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.077 [2024-11-26 21:06:54.880696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:04.077 [2024-11-26 21:06:54.880723] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:04.077 [2024-11-26 21:06:54.880947] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:04.077 [2024-11-26 21:06:54.881177] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:04.077 [2024-11-26 21:06:54.881198] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:04.077 [2024-11-26 21:06:54.881211] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:04.077 [2024-11-26 21:06:54.881223] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:04.077 [2024-11-26 21:06:54.893654] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:04.077 [2024-11-26 21:06:54.894155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.077 [2024-11-26 21:06:54.894185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:04.077 [2024-11-26 21:06:54.894202] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:04.077 [2024-11-26 21:06:54.894455] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:04.077 [2024-11-26 21:06:54.894664] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:04.077 [2024-11-26 21:06:54.894708] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:04.077 [2024-11-26 21:06:54.894722] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:04.077 [2024-11-26 21:06:54.894751] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:04.077 [2024-11-26 21:06:54.906995] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:04.077 [2024-11-26 21:06:54.907371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.077 [2024-11-26 21:06:54.907400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:04.077 [2024-11-26 21:06:54.907417] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:04.077 [2024-11-26 21:06:54.907657] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:04.077 [2024-11-26 21:06:54.907900] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:04.077 [2024-11-26 21:06:54.907922] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:04.077 [2024-11-26 21:06:54.907936] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:04.077 [2024-11-26 21:06:54.907949] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:04.077 [2024-11-26 21:06:54.920172] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:04.077 [2024-11-26 21:06:54.920544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.077 [2024-11-26 21:06:54.920573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:04.077 [2024-11-26 21:06:54.920589] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:04.077 [2024-11-26 21:06:54.920855] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:04.077 [2024-11-26 21:06:54.921094] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:04.077 [2024-11-26 21:06:54.921115] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:04.077 [2024-11-26 21:06:54.921128] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:04.077 [2024-11-26 21:06:54.921140] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:04.077 [2024-11-26 21:06:54.933408] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:04.077 [2024-11-26 21:06:54.933730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.077 [2024-11-26 21:06:54.933758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:04.077 [2024-11-26 21:06:54.933775] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:04.077 [2024-11-26 21:06:54.933983] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:04.077 [2024-11-26 21:06:54.934212] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:04.077 [2024-11-26 21:06:54.934233] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:04.077 [2024-11-26 21:06:54.934246] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:04.077 [2024-11-26 21:06:54.934258] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:04.077 [2024-11-26 21:06:54.946733] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:04.077 [2024-11-26 21:06:54.947185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.077 [2024-11-26 21:06:54.947214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:04.077 [2024-11-26 21:06:54.947230] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:04.077 [2024-11-26 21:06:54.947469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:04.077 [2024-11-26 21:06:54.947701] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:04.077 [2024-11-26 21:06:54.947724] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:04.077 [2024-11-26 21:06:54.947740] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:04.077 [2024-11-26 21:06:54.947752] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:04.077 [2024-11-26 21:06:54.960065] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:04.077 [2024-11-26 21:06:54.960486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.077 [2024-11-26 21:06:54.960516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:04.077 [2024-11-26 21:06:54.960532] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:04.077 [2024-11-26 21:06:54.960771] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:04.077 [2024-11-26 21:06:54.961007] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:04.077 [2024-11-26 21:06:54.961039] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:04.077 [2024-11-26 21:06:54.961052] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:04.077 [2024-11-26 21:06:54.961069] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:04.077 [2024-11-26 21:06:54.973389] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:04.077 [2024-11-26 21:06:54.973787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.077 [2024-11-26 21:06:54.973817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:04.077 [2024-11-26 21:06:54.973832] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:04.077 [2024-11-26 21:06:54.974087] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:04.077 [2024-11-26 21:06:54.974281] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:04.078 [2024-11-26 21:06:54.974302] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:04.078 [2024-11-26 21:06:54.974314] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:04.078 [2024-11-26 21:06:54.974326] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:04.078 [2024-11-26 21:06:54.986737] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:04.078 [2024-11-26 21:06:54.987177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.078 [2024-11-26 21:06:54.987207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:04.078 [2024-11-26 21:06:54.987224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:04.078 [2024-11-26 21:06:54.987465] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:04.078 [2024-11-26 21:06:54.987699] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:04.078 [2024-11-26 21:06:54.987721] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:04.078 [2024-11-26 21:06:54.987750] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:04.078 [2024-11-26 21:06:54.987764] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:04.078 [2024-11-26 21:06:54.999978] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.078 [2024-11-26 21:06:55.000387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.078 [2024-11-26 21:06:55.000416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:04.078 [2024-11-26 21:06:55.000431] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:04.078 [2024-11-26 21:06:55.000667] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:04.078 [2024-11-26 21:06:55.000911] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.078 [2024-11-26 21:06:55.000936] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.078 [2024-11-26 21:06:55.000950] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.078 [2024-11-26 21:06:55.000964] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.337 [2024-11-26 21:06:55.013628] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.337 [2024-11-26 21:06:55.014071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.337 [2024-11-26 21:06:55.014101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:04.337 [2024-11-26 21:06:55.014118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:04.337 [2024-11-26 21:06:55.014359] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:04.337 [2024-11-26 21:06:55.014578] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.337 [2024-11-26 21:06:55.014598] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.337 [2024-11-26 21:06:55.014611] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.337 [2024-11-26 21:06:55.014623] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.337 [2024-11-26 21:06:55.026955] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.337 [2024-11-26 21:06:55.027407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.337 [2024-11-26 21:06:55.027435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:04.337 [2024-11-26 21:06:55.027451] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:04.337 [2024-11-26 21:06:55.027698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:04.337 [2024-11-26 21:06:55.027926] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.337 [2024-11-26 21:06:55.027958] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.338 [2024-11-26 21:06:55.027972] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.338 [2024-11-26 21:06:55.027986] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.338 [2024-11-26 21:06:55.040200] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.338 [2024-11-26 21:06:55.040574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.338 [2024-11-26 21:06:55.040602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:04.338 [2024-11-26 21:06:55.040619] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:04.338 [2024-11-26 21:06:55.040858] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:04.338 [2024-11-26 21:06:55.041099] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.338 [2024-11-26 21:06:55.041123] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.338 [2024-11-26 21:06:55.041136] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.338 [2024-11-26 21:06:55.041148] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.338 [2024-11-26 21:06:55.053506] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.338 [2024-11-26 21:06:55.053965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.338 [2024-11-26 21:06:55.053996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:04.338 [2024-11-26 21:06:55.054018] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:04.338 [2024-11-26 21:06:55.054272] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:04.338 [2024-11-26 21:06:55.054466] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.338 [2024-11-26 21:06:55.054488] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.338 [2024-11-26 21:06:55.054500] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.338 [2024-11-26 21:06:55.054511] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.338 [2024-11-26 21:06:55.066766] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.338 [2024-11-26 21:06:55.067120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.338 [2024-11-26 21:06:55.067148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:04.338 [2024-11-26 21:06:55.067163] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:04.338 [2024-11-26 21:06:55.067379] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:04.338 [2024-11-26 21:06:55.067588] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.338 [2024-11-26 21:06:55.067609] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.338 [2024-11-26 21:06:55.067623] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.338 [2024-11-26 21:06:55.067635] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.338 [2024-11-26 21:06:55.079936] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.338 [2024-11-26 21:06:55.080391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.338 [2024-11-26 21:06:55.080419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:04.338 [2024-11-26 21:06:55.080446] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:04.338 [2024-11-26 21:06:55.080683] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:04.338 [2024-11-26 21:06:55.080915] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.338 [2024-11-26 21:06:55.080938] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.338 [2024-11-26 21:06:55.080951] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.338 [2024-11-26 21:06:55.080964] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.338 [2024-11-26 21:06:55.093205] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.338 [2024-11-26 21:06:55.093618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.338 [2024-11-26 21:06:55.093647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:04.338 [2024-11-26 21:06:55.093662] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:04.338 [2024-11-26 21:06:55.093929] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:04.338 [2024-11-26 21:06:55.094158] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.338 [2024-11-26 21:06:55.094184] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.338 [2024-11-26 21:06:55.094198] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.338 [2024-11-26 21:06:55.094210] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.338 [2024-11-26 21:06:55.106495] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.338 [2024-11-26 21:06:55.106911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.338 [2024-11-26 21:06:55.106942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:04.338 [2024-11-26 21:06:55.106958] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:04.338 [2024-11-26 21:06:55.107210] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:04.338 [2024-11-26 21:06:55.107420] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.338 [2024-11-26 21:06:55.107442] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.338 [2024-11-26 21:06:55.107455] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.338 [2024-11-26 21:06:55.107467] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.338 [2024-11-26 21:06:55.120107] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.338 [2024-11-26 21:06:55.120504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.338 [2024-11-26 21:06:55.120534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:04.338 [2024-11-26 21:06:55.120550] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:04.338 [2024-11-26 21:06:55.120805] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:04.338 [2024-11-26 21:06:55.121072] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.338 [2024-11-26 21:06:55.121093] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.338 [2024-11-26 21:06:55.121106] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.338 [2024-11-26 21:06:55.121118] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.338 [2024-11-26 21:06:55.133442] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.338 [2024-11-26 21:06:55.133826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.338 [2024-11-26 21:06:55.133856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:04.338 [2024-11-26 21:06:55.133873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:04.338 [2024-11-26 21:06:55.134101] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:04.339 [2024-11-26 21:06:55.134312] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.339 [2024-11-26 21:06:55.134333] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.339 [2024-11-26 21:06:55.134346] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.339 [2024-11-26 21:06:55.134363] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.339 [2024-11-26 21:06:55.146813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.339 [2024-11-26 21:06:55.147219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.339 [2024-11-26 21:06:55.147247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:04.339 [2024-11-26 21:06:55.147262] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:04.339 [2024-11-26 21:06:55.147479] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:04.339 [2024-11-26 21:06:55.147715] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.339 [2024-11-26 21:06:55.147746] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.339 [2024-11-26 21:06:55.147759] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.339 [2024-11-26 21:06:55.147772] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.339 [2024-11-26 21:06:55.160203] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.339 [2024-11-26 21:06:55.160624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.339 [2024-11-26 21:06:55.160653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:04.339 [2024-11-26 21:06:55.160669] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:04.339 [2024-11-26 21:06:55.160905] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:04.339 [2024-11-26 21:06:55.161123] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.339 [2024-11-26 21:06:55.161145] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.339 [2024-11-26 21:06:55.161158] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.339 [2024-11-26 21:06:55.161171] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.339 [2024-11-26 21:06:55.173817] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.339 [2024-11-26 21:06:55.174258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.339 [2024-11-26 21:06:55.174287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:04.339 [2024-11-26 21:06:55.174304] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:04.339 [2024-11-26 21:06:55.174545] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:04.339 [2024-11-26 21:06:55.174786] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.339 [2024-11-26 21:06:55.174808] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.339 [2024-11-26 21:06:55.174821] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.339 [2024-11-26 21:06:55.174834] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.339 [2024-11-26 21:06:55.187156] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.339 [2024-11-26 21:06:55.187490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.339 [2024-11-26 21:06:55.187518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:04.339 [2024-11-26 21:06:55.187542] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:04.339 [2024-11-26 21:06:55.187767] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:04.339 [2024-11-26 21:06:55.187966] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.339 [2024-11-26 21:06:55.188002] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.339 [2024-11-26 21:06:55.188014] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.339 [2024-11-26 21:06:55.188027] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.339 [2024-11-26 21:06:55.200511] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.339 [2024-11-26 21:06:55.200910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.339 [2024-11-26 21:06:55.200939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:04.339 [2024-11-26 21:06:55.200955] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:04.339 [2024-11-26 21:06:55.201203] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:04.339 [2024-11-26 21:06:55.201398] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.339 [2024-11-26 21:06:55.201418] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.339 [2024-11-26 21:06:55.201431] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.339 [2024-11-26 21:06:55.201443] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.339 [2024-11-26 21:06:55.213838] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.339 [2024-11-26 21:06:55.214228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.339 [2024-11-26 21:06:55.214255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:04.339 [2024-11-26 21:06:55.214271] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:04.339 [2024-11-26 21:06:55.214506] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:04.339 [2024-11-26 21:06:55.214758] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.339 [2024-11-26 21:06:55.214781] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.339 [2024-11-26 21:06:55.214795] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.339 [2024-11-26 21:06:55.214809] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.339 [2024-11-26 21:06:55.227107] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.339 [2024-11-26 21:06:55.227542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.339 [2024-11-26 21:06:55.227571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:04.339 [2024-11-26 21:06:55.227592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:04.339 [2024-11-26 21:06:55.227844] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:04.339 [2024-11-26 21:06:55.228073] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.339 [2024-11-26 21:06:55.228094] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.339 [2024-11-26 21:06:55.228107] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.339 [2024-11-26 21:06:55.228119] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.339 [2024-11-26 21:06:55.240418] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.340 [2024-11-26 21:06:55.240810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.340 [2024-11-26 21:06:55.240840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:04.340 [2024-11-26 21:06:55.240856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:04.340 [2024-11-26 21:06:55.241079] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:04.340 [2024-11-26 21:06:55.241295] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.340 [2024-11-26 21:06:55.241316] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.340 [2024-11-26 21:06:55.241328] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.340 [2024-11-26 21:06:55.241341] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.340 [2024-11-26 21:06:55.253636] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.340 [2024-11-26 21:06:55.254067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.340 [2024-11-26 21:06:55.254096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:04.340 [2024-11-26 21:06:55.254112] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:04.340 [2024-11-26 21:06:55.254347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:04.340 [2024-11-26 21:06:55.254542] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.340 [2024-11-26 21:06:55.254562] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.340 [2024-11-26 21:06:55.254574] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.340 [2024-11-26 21:06:55.254586] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.340 [2024-11-26 21:06:55.266844] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.340 [2024-11-26 21:06:55.267257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.340 [2024-11-26 21:06:55.267286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:04.340 [2024-11-26 21:06:55.267302] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:04.340 [2024-11-26 21:06:55.267541] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:04.340 [2024-11-26 21:06:55.267781] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.340 [2024-11-26 21:06:55.267809] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.340 [2024-11-26 21:06:55.267823] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.340 [2024-11-26 21:06:55.267837] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.599 [2024-11-26 21:06:55.280452] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.599 [2024-11-26 21:06:55.280905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.599 [2024-11-26 21:06:55.280934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:04.599 [2024-11-26 21:06:55.280951] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:04.599 [2024-11-26 21:06:55.281190] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:04.599 [2024-11-26 21:06:55.281384] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.599 [2024-11-26 21:06:55.281404] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.599 [2024-11-26 21:06:55.281416] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.599 [2024-11-26 21:06:55.281429] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.599 [2024-11-26 21:06:55.293753] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.599 [2024-11-26 21:06:55.294117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.599 [2024-11-26 21:06:55.294145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:04.599 [2024-11-26 21:06:55.294161] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:04.599 [2024-11-26 21:06:55.294395] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:04.600 [2024-11-26 21:06:55.294605] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.600 [2024-11-26 21:06:55.294626] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.600 [2024-11-26 21:06:55.294639] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.600 [2024-11-26 21:06:55.294651] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.600 [2024-11-26 21:06:55.306994] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.600 [2024-11-26 21:06:55.307337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.600 [2024-11-26 21:06:55.307365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:04.600 [2024-11-26 21:06:55.307381] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:04.600 [2024-11-26 21:06:55.307598] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:04.600 [2024-11-26 21:06:55.307844] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.600 [2024-11-26 21:06:55.307866] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.600 [2024-11-26 21:06:55.307879] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.600 [2024-11-26 21:06:55.307895] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.600 [2024-11-26 21:06:55.320313] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.600 [2024-11-26 21:06:55.320683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.600 [2024-11-26 21:06:55.320738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:04.600 [2024-11-26 21:06:55.320755] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:04.600 [2024-11-26 21:06:55.321008] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:04.600 [2024-11-26 21:06:55.321204] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.600 [2024-11-26 21:06:55.321225] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.600 [2024-11-26 21:06:55.321238] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.600 [2024-11-26 21:06:55.321251] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.600 [2024-11-26 21:06:55.333472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.600 [2024-11-26 21:06:55.333931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.600 [2024-11-26 21:06:55.333962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:04.600 [2024-11-26 21:06:55.333979] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:04.600 [2024-11-26 21:06:55.334248] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:04.600 [2024-11-26 21:06:55.334443] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.600 [2024-11-26 21:06:55.334465] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.600 [2024-11-26 21:06:55.334477] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.600 [2024-11-26 21:06:55.334490] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.600 [2024-11-26 21:06:55.346747] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.600 [2024-11-26 21:06:55.347183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.600 [2024-11-26 21:06:55.347212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:04.600 [2024-11-26 21:06:55.347229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:04.600 [2024-11-26 21:06:55.347468] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:04.600 [2024-11-26 21:06:55.347677] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.600 [2024-11-26 21:06:55.347722] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.600 [2024-11-26 21:06:55.347737] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.600 [2024-11-26 21:06:55.347749] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.600 [2024-11-26 21:06:55.360089] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.600 [2024-11-26 21:06:55.360492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.600 [2024-11-26 21:06:55.360521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:04.600 [2024-11-26 21:06:55.360538] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:04.600 [2024-11-26 21:06:55.360764] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:04.600 [2024-11-26 21:06:55.360998] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.600 [2024-11-26 21:06:55.361019] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.600 [2024-11-26 21:06:55.361033] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.600 [2024-11-26 21:06:55.361046] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.600 [2024-11-26 21:06:55.373693] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.600 [2024-11-26 21:06:55.374099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.600 [2024-11-26 21:06:55.374129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:04.600 [2024-11-26 21:06:55.374146] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:04.600 [2024-11-26 21:06:55.374404] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:04.600 [2024-11-26 21:06:55.374600] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.600 [2024-11-26 21:06:55.374621] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.600 [2024-11-26 21:06:55.374634] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.600 [2024-11-26 21:06:55.374647] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.600 [2024-11-26 21:06:55.387066] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.600 [2024-11-26 21:06:55.387441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.600 [2024-11-26 21:06:55.387470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:04.600 [2024-11-26 21:06:55.387486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:04.600 [2024-11-26 21:06:55.387732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:04.600 [2024-11-26 21:06:55.387959] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.600 [2024-11-26 21:06:55.387981] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.600 [2024-11-26 21:06:55.388011] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.600 [2024-11-26 21:06:55.388025] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.600 [2024-11-26 21:06:55.400422] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.600 [2024-11-26 21:06:55.400798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.600 [2024-11-26 21:06:55.400828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:04.601 [2024-11-26 21:06:55.400850] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:04.601 [2024-11-26 21:06:55.401091] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:04.601 [2024-11-26 21:06:55.401300] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.601 [2024-11-26 21:06:55.401321] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.601 [2024-11-26 21:06:55.401333] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.601 [2024-11-26 21:06:55.401346] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.601 [2024-11-26 21:06:55.413672] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.601 [2024-11-26 21:06:55.414095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.601 [2024-11-26 21:06:55.414125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:04.601 [2024-11-26 21:06:55.414141] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:04.601 [2024-11-26 21:06:55.414378] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:04.601 [2024-11-26 21:06:55.414590] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.601 [2024-11-26 21:06:55.414611] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.601 [2024-11-26 21:06:55.414624] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.601 [2024-11-26 21:06:55.414636] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.601 [2024-11-26 21:06:55.426853] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.601 [2024-11-26 21:06:55.427261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.601 [2024-11-26 21:06:55.427290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:04.601 [2024-11-26 21:06:55.427305] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:04.601 [2024-11-26 21:06:55.427526] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:04.601 [2024-11-26 21:06:55.427764] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.601 [2024-11-26 21:06:55.427788] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.601 [2024-11-26 21:06:55.427802] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.601 [2024-11-26 21:06:55.427815] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.601 [2024-11-26 21:06:55.440034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.601 [2024-11-26 21:06:55.440386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.601 [2024-11-26 21:06:55.440414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:04.601 [2024-11-26 21:06:55.440430] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:04.601 [2024-11-26 21:06:55.440650] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:04.601 [2024-11-26 21:06:55.440893] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.601 [2024-11-26 21:06:55.440919] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.601 [2024-11-26 21:06:55.440934] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.601 [2024-11-26 21:06:55.440946] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.601 [2024-11-26 21:06:55.453951] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.601 [2024-11-26 21:06:55.454378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.601 [2024-11-26 21:06:55.454411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:04.601 [2024-11-26 21:06:55.454429] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:04.601 [2024-11-26 21:06:55.454668] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:04.601 [2024-11-26 21:06:55.454922] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.601 [2024-11-26 21:06:55.454949] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.601 [2024-11-26 21:06:55.454965] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.601 [2024-11-26 21:06:55.454982] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.601 [2024-11-26 21:06:55.467806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.601 [2024-11-26 21:06:55.468249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.601 [2024-11-26 21:06:55.468281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:04.601 [2024-11-26 21:06:55.468299] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:04.601 [2024-11-26 21:06:55.468538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:04.601 [2024-11-26 21:06:55.468793] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.601 [2024-11-26 21:06:55.468820] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.601 [2024-11-26 21:06:55.468835] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.601 [2024-11-26 21:06:55.468850] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.601 [2024-11-26 21:06:55.481678] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.601 [2024-11-26 21:06:55.482075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.601 [2024-11-26 21:06:55.482101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:04.601 [2024-11-26 21:06:55.482116] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:04.601 [2024-11-26 21:06:55.482325] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:04.601 [2024-11-26 21:06:55.482582] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.601 [2024-11-26 21:06:55.482607] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.601 [2024-11-26 21:06:55.482623] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.601 [2024-11-26 21:06:55.482644] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.601 [2024-11-26 21:06:55.495714] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.601 [2024-11-26 21:06:55.496134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.601 [2024-11-26 21:06:55.496167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:04.601 [2024-11-26 21:06:55.496185] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:04.601 [2024-11-26 21:06:55.496423] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:04.601 [2024-11-26 21:06:55.496666] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.601 [2024-11-26 21:06:55.496704] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.602 [2024-11-26 21:06:55.496724] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.602 [2024-11-26 21:06:55.496740] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.602 [2024-11-26 21:06:55.509564] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.602 [2024-11-26 21:06:55.509961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.602 [2024-11-26 21:06:55.509993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:04.602 [2024-11-26 21:06:55.510010] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:04.602 [2024-11-26 21:06:55.510248] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:04.602 [2024-11-26 21:06:55.510490] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.602 [2024-11-26 21:06:55.510516] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.602 [2024-11-26 21:06:55.510532] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.602 [2024-11-26 21:06:55.510548] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.602 [2024-11-26 21:06:55.523398] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.602 [2024-11-26 21:06:55.523822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.602 [2024-11-26 21:06:55.523855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:04.602 [2024-11-26 21:06:55.523873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:04.602 [2024-11-26 21:06:55.524112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:04.602 [2024-11-26 21:06:55.524354] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.602 [2024-11-26 21:06:55.524380] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.602 [2024-11-26 21:06:55.524397] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.602 [2024-11-26 21:06:55.524412] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.861 [2024-11-26 21:06:55.537273] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.861 [2024-11-26 21:06:55.537698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.861 [2024-11-26 21:06:55.537732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:04.861 [2024-11-26 21:06:55.537750] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:04.862 [2024-11-26 21:06:55.537989] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:04.862 [2024-11-26 21:06:55.538232] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.862 [2024-11-26 21:06:55.538258] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.862 [2024-11-26 21:06:55.538273] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.862 [2024-11-26 21:06:55.538288] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.862 [2024-11-26 21:06:55.551127] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.862 [2024-11-26 21:06:55.551554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.862 [2024-11-26 21:06:55.551587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:04.862 [2024-11-26 21:06:55.551605] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:04.862 [2024-11-26 21:06:55.551854] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:04.862 [2024-11-26 21:06:55.552098] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.862 [2024-11-26 21:06:55.552124] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.862 [2024-11-26 21:06:55.552141] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.862 [2024-11-26 21:06:55.552156] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.862 [2024-11-26 21:06:55.564999] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.862 [2024-11-26 21:06:55.565473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.862 [2024-11-26 21:06:55.565506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:04.862 [2024-11-26 21:06:55.565524] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:04.862 [2024-11-26 21:06:55.565773] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:04.862 [2024-11-26 21:06:55.566016] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.862 [2024-11-26 21:06:55.566042] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.862 [2024-11-26 21:06:55.566059] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.862 [2024-11-26 21:06:55.566074] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.862 [2024-11-26 21:06:55.578910] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.862 [2024-11-26 21:06:55.579434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.862 [2024-11-26 21:06:55.579488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:04.862 [2024-11-26 21:06:55.579506] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:04.862 [2024-11-26 21:06:55.579764] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:04.862 [2024-11-26 21:06:55.580008] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.862 [2024-11-26 21:06:55.580033] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.862 [2024-11-26 21:06:55.580050] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.862 [2024-11-26 21:06:55.580065] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.862 [2024-11-26 21:06:55.592899] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.862 [2024-11-26 21:06:55.593361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.862 [2024-11-26 21:06:55.593394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:04.862 [2024-11-26 21:06:55.593412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:04.862 [2024-11-26 21:06:55.593650] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:04.862 [2024-11-26 21:06:55.593907] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.862 [2024-11-26 21:06:55.593934] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.862 [2024-11-26 21:06:55.593951] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.862 [2024-11-26 21:06:55.593966] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.862 [2024-11-26 21:06:55.606812] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.862 [2024-11-26 21:06:55.607326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.862 [2024-11-26 21:06:55.607379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:04.862 [2024-11-26 21:06:55.607397] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:04.862 [2024-11-26 21:06:55.607634] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:04.862 [2024-11-26 21:06:55.607892] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.862 [2024-11-26 21:06:55.607919] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.862 [2024-11-26 21:06:55.607934] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.862 [2024-11-26 21:06:55.607949] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.862 [2024-11-26 21:06:55.620823] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:04.862 [2024-11-26 21:06:55.621234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.862 [2024-11-26 21:06:55.621267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:04.862 [2024-11-26 21:06:55.621285] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:04.862 [2024-11-26 21:06:55.621523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:04.862 [2024-11-26 21:06:55.621779] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:04.862 [2024-11-26 21:06:55.621811] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:04.862 [2024-11-26 21:06:55.621828] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:04.862 [2024-11-26 21:06:55.621844] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:04.862 [2024-11-26 21:06:55.634699] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:04.862 [2024-11-26 21:06:55.635125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.862 [2024-11-26 21:06:55.635158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:04.862 [2024-11-26 21:06:55.635176] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:04.862 [2024-11-26 21:06:55.635414] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:04.862 [2024-11-26 21:06:55.635656] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:04.862 [2024-11-26 21:06:55.635682] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:04.862 [2024-11-26 21:06:55.635713] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:04.862 [2024-11-26 21:06:55.635729] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:04.862 [2024-11-26 21:06:55.648554] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:04.862 [2024-11-26 21:06:55.648982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.862 [2024-11-26 21:06:55.649016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:04.863 [2024-11-26 21:06:55.649035] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:04.863 [2024-11-26 21:06:55.649274] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:04.863 [2024-11-26 21:06:55.649517] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:04.863 [2024-11-26 21:06:55.649542] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:04.863 [2024-11-26 21:06:55.649558] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:04.863 [2024-11-26 21:06:55.649573] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:04.863 [2024-11-26 21:06:55.662415] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:04.863 [2024-11-26 21:06:55.662838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.863 [2024-11-26 21:06:55.662871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:04.863 [2024-11-26 21:06:55.662890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:04.863 [2024-11-26 21:06:55.663130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:04.863 [2024-11-26 21:06:55.663376] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:04.863 [2024-11-26 21:06:55.663401] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:04.863 [2024-11-26 21:06:55.663418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:04.863 [2024-11-26 21:06:55.663439] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:04.863 [2024-11-26 21:06:55.676279] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:04.863 [2024-11-26 21:06:55.676696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.863 [2024-11-26 21:06:55.676730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:04.863 [2024-11-26 21:06:55.676748] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:04.863 [2024-11-26 21:06:55.676987] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:04.863 [2024-11-26 21:06:55.677231] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:04.863 [2024-11-26 21:06:55.677257] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:04.863 [2024-11-26 21:06:55.677273] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:04.863 [2024-11-26 21:06:55.677288] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:04.863 [2024-11-26 21:06:55.690126] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:04.863 [2024-11-26 21:06:55.690545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.863 [2024-11-26 21:06:55.690577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:04.863 [2024-11-26 21:06:55.690595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:04.863 [2024-11-26 21:06:55.690847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:04.863 [2024-11-26 21:06:55.691091] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:04.863 [2024-11-26 21:06:55.691117] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:04.863 [2024-11-26 21:06:55.691133] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:04.863 [2024-11-26 21:06:55.691149] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:04.863 [2024-11-26 21:06:55.703986] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:04.863 [2024-11-26 21:06:55.704364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.863 [2024-11-26 21:06:55.704396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:04.863 [2024-11-26 21:06:55.704414] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:04.863 [2024-11-26 21:06:55.704652] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:04.863 [2024-11-26 21:06:55.704909] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:04.863 [2024-11-26 21:06:55.704936] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:04.863 [2024-11-26 21:06:55.704953] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:04.863 [2024-11-26 21:06:55.704968] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:04.863 [2024-11-26 21:06:55.718018] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:04.863 [2024-11-26 21:06:55.718435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.863 [2024-11-26 21:06:55.718467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:04.863 [2024-11-26 21:06:55.718486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:04.863 [2024-11-26 21:06:55.718739] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:04.863 [2024-11-26 21:06:55.718982] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:04.863 [2024-11-26 21:06:55.719008] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:04.863 [2024-11-26 21:06:55.719024] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:04.863 [2024-11-26 21:06:55.719039] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:04.863 [2024-11-26 21:06:55.731896] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:04.863 [2024-11-26 21:06:55.732306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.863 [2024-11-26 21:06:55.732338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:04.863 [2024-11-26 21:06:55.732357] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:04.863 [2024-11-26 21:06:55.732595] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:04.863 [2024-11-26 21:06:55.732852] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:04.863 [2024-11-26 21:06:55.732878] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:04.863 [2024-11-26 21:06:55.732894] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:04.863 [2024-11-26 21:06:55.732909] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:04.863 [2024-11-26 21:06:55.745747] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:04.863 [2024-11-26 21:06:55.746164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.863 [2024-11-26 21:06:55.746197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:04.863 [2024-11-26 21:06:55.746216] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:04.863 [2024-11-26 21:06:55.746455] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:04.863 [2024-11-26 21:06:55.746713] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:04.863 [2024-11-26 21:06:55.746741] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:04.863 [2024-11-26 21:06:55.746757] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:04.863 [2024-11-26 21:06:55.746772] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:04.863 [2024-11-26 21:06:55.759604] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:04.863 [2024-11-26 21:06:55.760001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.864 [2024-11-26 21:06:55.760034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:04.864 [2024-11-26 21:06:55.760053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:04.864 [2024-11-26 21:06:55.760299] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:04.864 [2024-11-26 21:06:55.760543] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:04.864 [2024-11-26 21:06:55.760569] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:04.864 [2024-11-26 21:06:55.760586] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:04.864 [2024-11-26 21:06:55.760601] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:04.864 [2024-11-26 21:06:55.773447] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:04.864 [2024-11-26 21:06:55.773877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.864 [2024-11-26 21:06:55.773910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:04.864 [2024-11-26 21:06:55.773928] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:04.864 [2024-11-26 21:06:55.774166] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:04.864 [2024-11-26 21:06:55.774409] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:04.864 [2024-11-26 21:06:55.774434] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:04.864 [2024-11-26 21:06:55.774450] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:04.864 [2024-11-26 21:06:55.774466] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:04.864 [2024-11-26 21:06:55.787313] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:04.864 [2024-11-26 21:06:55.787787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.864 [2024-11-26 21:06:55.787820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:04.864 [2024-11-26 21:06:55.787839] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:04.864 [2024-11-26 21:06:55.788078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:04.864 [2024-11-26 21:06:55.788321] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:04.864 [2024-11-26 21:06:55.788346] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:04.864 [2024-11-26 21:06:55.788362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:04.864 [2024-11-26 21:06:55.788377] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:05.123 [2024-11-26 21:06:55.801217] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:05.123 [2024-11-26 21:06:55.801599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.123 [2024-11-26 21:06:55.801632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:05.123 [2024-11-26 21:06:55.801650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:05.123 [2024-11-26 21:06:55.801901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:05.123 [2024-11-26 21:06:55.802145] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:05.123 [2024-11-26 21:06:55.802177] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:05.123 [2024-11-26 21:06:55.802193] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:05.123 [2024-11-26 21:06:55.802209] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:05.123 [2024-11-26 21:06:55.815248] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:05.123 [2024-11-26 21:06:55.815660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.123 [2024-11-26 21:06:55.815701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:05.123 [2024-11-26 21:06:55.815722] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:05.123 [2024-11-26 21:06:55.815960] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:05.123 [2024-11-26 21:06:55.816203] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:05.123 [2024-11-26 21:06:55.816229] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:05.123 [2024-11-26 21:06:55.816246] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:05.123 [2024-11-26 21:06:55.816261] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:05.123 [2024-11-26 21:06:55.829104] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:05.123 [2024-11-26 21:06:55.829594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.123 [2024-11-26 21:06:55.829644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:05.123 [2024-11-26 21:06:55.829662] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:05.123 [2024-11-26 21:06:55.829919] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:05.123 [2024-11-26 21:06:55.830172] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:05.123 [2024-11-26 21:06:55.830199] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:05.123 [2024-11-26 21:06:55.830215] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:05.123 [2024-11-26 21:06:55.830231] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:05.123 4241.40 IOPS, 16.57 MiB/s [2024-11-26T20:06:56.061Z] [2024-11-26 21:06:55.843111] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:05.123 [2024-11-26 21:06:55.843628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.123 [2024-11-26 21:06:55.843681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:05.124 [2024-11-26 21:06:55.843712] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:05.124 [2024-11-26 21:06:55.843959] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:05.124 [2024-11-26 21:06:55.844204] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:05.124 [2024-11-26 21:06:55.844230] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:05.124 [2024-11-26 21:06:55.844247] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:05.124 [2024-11-26 21:06:55.844270] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:05.124 [2024-11-26 21:06:55.857127] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:05.124 [2024-11-26 21:06:55.857595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.124 [2024-11-26 21:06:55.857627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:05.124 [2024-11-26 21:06:55.857645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:05.124 [2024-11-26 21:06:55.857894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:05.124 [2024-11-26 21:06:55.858140] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:05.124 [2024-11-26 21:06:55.858165] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:05.124 [2024-11-26 21:06:55.858181] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:05.124 [2024-11-26 21:06:55.858195] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:05.124 [2024-11-26 21:06:55.871031] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:05.124 [2024-11-26 21:06:55.871543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.124 [2024-11-26 21:06:55.871595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:05.124 [2024-11-26 21:06:55.871612] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:05.124 [2024-11-26 21:06:55.871885] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:05.124 [2024-11-26 21:06:55.872130] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:05.124 [2024-11-26 21:06:55.872155] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:05.124 [2024-11-26 21:06:55.872171] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:05.124 [2024-11-26 21:06:55.872186] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:05.124 [2024-11-26 21:06:55.885106] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:05.124 [2024-11-26 21:06:55.885633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.124 [2024-11-26 21:06:55.885668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:05.124 [2024-11-26 21:06:55.885698] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:05.124 [2024-11-26 21:06:55.885957] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:05.124 [2024-11-26 21:06:55.886210] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:05.124 [2024-11-26 21:06:55.886237] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:05.124 [2024-11-26 21:06:55.886260] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:05.124 [2024-11-26 21:06:55.886282] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:05.124 [2024-11-26 21:06:55.899106] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:05.124 [2024-11-26 21:06:55.899617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.124 [2024-11-26 21:06:55.899652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:05.124 [2024-11-26 21:06:55.899671] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:05.124 [2024-11-26 21:06:55.899929] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:05.124 [2024-11-26 21:06:55.900181] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:05.124 [2024-11-26 21:06:55.900209] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:05.124 [2024-11-26 21:06:55.900225] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:05.124 [2024-11-26 21:06:55.900241] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:05.124 [2024-11-26 21:06:55.913016] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:05.124 [2024-11-26 21:06:55.913439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.124 [2024-11-26 21:06:55.913471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:05.124 [2024-11-26 21:06:55.913490] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:05.124 [2024-11-26 21:06:55.913743] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:05.124 [2024-11-26 21:06:55.913987] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:05.124 [2024-11-26 21:06:55.914012] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:05.124 [2024-11-26 21:06:55.914028] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:05.124 [2024-11-26 21:06:55.914042] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:05.124 [2024-11-26 21:06:55.926899] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:05.124 [2024-11-26 21:06:55.927377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.124 [2024-11-26 21:06:55.927426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:05.124 [2024-11-26 21:06:55.927445] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:05.124 [2024-11-26 21:06:55.927682] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:05.124 [2024-11-26 21:06:55.927940] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:05.124 [2024-11-26 21:06:55.927965] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:05.124 [2024-11-26 21:06:55.927980] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:05.124 [2024-11-26 21:06:55.927995] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:05.124 [2024-11-26 21:06:55.940851] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:05.124 [2024-11-26 21:06:55.941299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.124 [2024-11-26 21:06:55.941330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:05.124 [2024-11-26 21:06:55.941354] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:05.124 [2024-11-26 21:06:55.941594] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:05.124 [2024-11-26 21:06:55.941850] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:05.124 [2024-11-26 21:06:55.941876] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:05.124 [2024-11-26 21:06:55.941892] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:05.124 [2024-11-26 21:06:55.941907] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:05.124 [2024-11-26 21:06:55.954838] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:05.125 [2024-11-26 21:06:55.955289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.125 [2024-11-26 21:06:55.955342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:05.125 [2024-11-26 21:06:55.955367] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:05.125 [2024-11-26 21:06:55.955612] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:05.125 [2024-11-26 21:06:55.955874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:05.125 [2024-11-26 21:06:55.955901] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:05.125 [2024-11-26 21:06:55.955926] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:05.125 [2024-11-26 21:06:55.955944] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:05.125 [2024-11-26 21:06:55.968989] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:05.125 [2024-11-26 21:06:55.969524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.125 [2024-11-26 21:06:55.969584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:05.125 [2024-11-26 21:06:55.969605] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:05.125 [2024-11-26 21:06:55.969871] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:05.125 [2024-11-26 21:06:55.970148] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:05.125 [2024-11-26 21:06:55.970174] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:05.125 [2024-11-26 21:06:55.970191] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:05.125 [2024-11-26 21:06:55.970206] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:05.125 [2024-11-26 21:06:55.982874] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:05.125 [2024-11-26 21:06:55.983298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.125 [2024-11-26 21:06:55.983331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:05.125 [2024-11-26 21:06:55.983349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:05.125 [2024-11-26 21:06:55.983587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:05.125 [2024-11-26 21:06:55.983850] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:05.125 [2024-11-26 21:06:55.983881] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:05.125 [2024-11-26 21:06:55.983897] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:05.125 [2024-11-26 21:06:55.983912] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:05.125 [2024-11-26 21:06:55.996771] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:05.125 [2024-11-26 21:06:55.997196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.125 [2024-11-26 21:06:55.997229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:05.125 [2024-11-26 21:06:55.997248] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:05.125 [2024-11-26 21:06:55.997486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:05.125 [2024-11-26 21:06:55.997743] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:05.125 [2024-11-26 21:06:55.997769] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:05.125 [2024-11-26 21:06:55.997784] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:05.125 [2024-11-26 21:06:55.997798] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:05.125 [2024-11-26 21:06:56.010638] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.125 [2024-11-26 21:06:56.011029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.125 [2024-11-26 21:06:56.011062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:05.125 [2024-11-26 21:06:56.011081] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:05.125 [2024-11-26 21:06:56.011319] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:05.125 [2024-11-26 21:06:56.011562] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.125 [2024-11-26 21:06:56.011587] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.125 [2024-11-26 21:06:56.011603] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.125 [2024-11-26 21:06:56.011619] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.125 [2024-11-26 21:06:56.024486] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.125 [2024-11-26 21:06:56.024907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.125 [2024-11-26 21:06:56.024940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:05.125 [2024-11-26 21:06:56.024958] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:05.125 [2024-11-26 21:06:56.025196] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:05.125 [2024-11-26 21:06:56.025439] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.125 [2024-11-26 21:06:56.025465] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.125 [2024-11-26 21:06:56.025481] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.125 [2024-11-26 21:06:56.025502] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.125 [2024-11-26 21:06:56.038366] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.125 [2024-11-26 21:06:56.038758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.125 [2024-11-26 21:06:56.038791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:05.125 [2024-11-26 21:06:56.038809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:05.125 [2024-11-26 21:06:56.039049] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:05.125 [2024-11-26 21:06:56.039291] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.125 [2024-11-26 21:06:56.039317] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.125 [2024-11-26 21:06:56.039333] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.125 [2024-11-26 21:06:56.039348] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.125 [2024-11-26 21:06:56.052406] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.125 [2024-11-26 21:06:56.052829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.125 [2024-11-26 21:06:56.052862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:05.125 [2024-11-26 21:06:56.052881] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:05.125 [2024-11-26 21:06:56.053119] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:05.125 [2024-11-26 21:06:56.053362] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.125 [2024-11-26 21:06:56.053388] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.125 [2024-11-26 21:06:56.053404] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.125 [2024-11-26 21:06:56.053419] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.385 [2024-11-26 21:06:56.066287] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.385 [2024-11-26 21:06:56.066716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.385 [2024-11-26 21:06:56.066750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:05.385 [2024-11-26 21:06:56.066768] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:05.385 [2024-11-26 21:06:56.067007] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:05.385 [2024-11-26 21:06:56.067252] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.385 [2024-11-26 21:06:56.067279] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.385 [2024-11-26 21:06:56.067295] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.385 [2024-11-26 21:06:56.067310] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.385 [2024-11-26 21:06:56.080140] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.385 [2024-11-26 21:06:56.080570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.385 [2024-11-26 21:06:56.080602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:05.385 [2024-11-26 21:06:56.080620] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:05.385 [2024-11-26 21:06:56.080872] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:05.385 [2024-11-26 21:06:56.081116] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.385 [2024-11-26 21:06:56.081141] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.385 [2024-11-26 21:06:56.081158] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.385 [2024-11-26 21:06:56.081173] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.385 [2024-11-26 21:06:56.094014] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.385 [2024-11-26 21:06:56.094423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.385 [2024-11-26 21:06:56.094456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:05.385 [2024-11-26 21:06:56.094474] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:05.385 [2024-11-26 21:06:56.094727] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:05.385 [2024-11-26 21:06:56.094970] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.385 [2024-11-26 21:06:56.094996] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.385 [2024-11-26 21:06:56.095012] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.385 [2024-11-26 21:06:56.095029] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.385 [2024-11-26 21:06:56.107865] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.385 [2024-11-26 21:06:56.108282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.385 [2024-11-26 21:06:56.108314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:05.385 [2024-11-26 21:06:56.108332] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:05.385 [2024-11-26 21:06:56.108571] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:05.385 [2024-11-26 21:06:56.108828] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.385 [2024-11-26 21:06:56.108855] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.385 [2024-11-26 21:06:56.108871] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.385 [2024-11-26 21:06:56.108886] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.385 [2024-11-26 21:06:56.121720] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.385 [2024-11-26 21:06:56.122137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.385 [2024-11-26 21:06:56.122170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:05.385 [2024-11-26 21:06:56.122194] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:05.385 [2024-11-26 21:06:56.122433] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:05.385 [2024-11-26 21:06:56.122677] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.385 [2024-11-26 21:06:56.122714] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.385 [2024-11-26 21:06:56.122731] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.385 [2024-11-26 21:06:56.122746] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.385 [2024-11-26 21:06:56.135679] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.385 [2024-11-26 21:06:56.136100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.385 [2024-11-26 21:06:56.136132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:05.386 [2024-11-26 21:06:56.136150] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:05.386 [2024-11-26 21:06:56.136388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:05.386 [2024-11-26 21:06:56.136630] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.386 [2024-11-26 21:06:56.136656] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.386 [2024-11-26 21:06:56.136672] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.386 [2024-11-26 21:06:56.136699] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.386 [2024-11-26 21:06:56.149542] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.386 [2024-11-26 21:06:56.149939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.386 [2024-11-26 21:06:56.149971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:05.386 [2024-11-26 21:06:56.149989] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:05.386 [2024-11-26 21:06:56.150227] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:05.386 [2024-11-26 21:06:56.150469] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.386 [2024-11-26 21:06:56.150495] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.386 [2024-11-26 21:06:56.150510] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.386 [2024-11-26 21:06:56.150526] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.386 [2024-11-26 21:06:56.163379] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.386 [2024-11-26 21:06:56.163795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.386 [2024-11-26 21:06:56.163828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:05.386 [2024-11-26 21:06:56.163847] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:05.386 [2024-11-26 21:06:56.164085] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:05.386 [2024-11-26 21:06:56.164334] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.386 [2024-11-26 21:06:56.164360] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.386 [2024-11-26 21:06:56.164376] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.386 [2024-11-26 21:06:56.164391] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.386 [2024-11-26 21:06:56.177235] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.386 [2024-11-26 21:06:56.177647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.386 [2024-11-26 21:06:56.177680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:05.386 [2024-11-26 21:06:56.177710] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:05.386 [2024-11-26 21:06:56.177950] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:05.386 [2024-11-26 21:06:56.178194] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.386 [2024-11-26 21:06:56.178219] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.386 [2024-11-26 21:06:56.178234] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.386 [2024-11-26 21:06:56.178249] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.386 [2024-11-26 21:06:56.191094] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.386 [2024-11-26 21:06:56.191522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.386 [2024-11-26 21:06:56.191554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:05.386 [2024-11-26 21:06:56.191572] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:05.386 [2024-11-26 21:06:56.191826] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:05.386 [2024-11-26 21:06:56.192070] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.386 [2024-11-26 21:06:56.192096] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.386 [2024-11-26 21:06:56.192112] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.386 [2024-11-26 21:06:56.192127] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.386 [2024-11-26 21:06:56.204967] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.386 [2024-11-26 21:06:56.205350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.386 [2024-11-26 21:06:56.205383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:05.386 [2024-11-26 21:06:56.205401] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:05.386 [2024-11-26 21:06:56.205639] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:05.386 [2024-11-26 21:06:56.205898] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.386 [2024-11-26 21:06:56.205925] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.386 [2024-11-26 21:06:56.205941] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.386 [2024-11-26 21:06:56.205963] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.386 [2024-11-26 21:06:56.218794] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.386 [2024-11-26 21:06:56.219303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.386 [2024-11-26 21:06:56.219356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:05.386 [2024-11-26 21:06:56.219374] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:05.386 [2024-11-26 21:06:56.219612] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:05.386 [2024-11-26 21:06:56.219871] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.386 [2024-11-26 21:06:56.219899] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.386 [2024-11-26 21:06:56.219915] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.386 [2024-11-26 21:06:56.219931] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.386 [2024-11-26 21:06:56.232783] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.386 [2024-11-26 21:06:56.233311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.386 [2024-11-26 21:06:56.233365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:05.386 [2024-11-26 21:06:56.233383] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:05.386 [2024-11-26 21:06:56.233621] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:05.386 [2024-11-26 21:06:56.233878] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.386 [2024-11-26 21:06:56.233904] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.386 [2024-11-26 21:06:56.233920] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.386 [2024-11-26 21:06:56.233935] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.386 [2024-11-26 21:06:56.246774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.386 [2024-11-26 21:06:56.247186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.386 [2024-11-26 21:06:56.247220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:05.386 [2024-11-26 21:06:56.247239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:05.386 [2024-11-26 21:06:56.247478] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:05.386 [2024-11-26 21:06:56.247737] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.386 [2024-11-26 21:06:56.247764] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.386 [2024-11-26 21:06:56.247780] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.386 [2024-11-26 21:06:56.247796] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.386 [2024-11-26 21:06:56.260671] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.386 [2024-11-26 21:06:56.261110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.386 [2024-11-26 21:06:56.261143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:05.386 [2024-11-26 21:06:56.261161] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:05.386 [2024-11-26 21:06:56.261400] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:05.386 [2024-11-26 21:06:56.261644] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.386 [2024-11-26 21:06:56.261670] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.386 [2024-11-26 21:06:56.261699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.386 [2024-11-26 21:06:56.261718] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.386 [2024-11-26 21:06:56.274557] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.386 [2024-11-26 21:06:56.274978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.387 [2024-11-26 21:06:56.275011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:05.387 [2024-11-26 21:06:56.275030] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:05.387 [2024-11-26 21:06:56.275269] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:05.387 [2024-11-26 21:06:56.275515] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.387 [2024-11-26 21:06:56.275540] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.387 [2024-11-26 21:06:56.275557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.387 [2024-11-26 21:06:56.275573] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.387 [2024-11-26 21:06:56.288419] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.387 [2024-11-26 21:06:56.288823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.387 [2024-11-26 21:06:56.288856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:05.387 [2024-11-26 21:06:56.288874] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:05.387 [2024-11-26 21:06:56.289112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:05.387 [2024-11-26 21:06:56.289356] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.387 [2024-11-26 21:06:56.289381] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.387 [2024-11-26 21:06:56.289397] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.387 [2024-11-26 21:06:56.289413] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.387 [2024-11-26 21:06:56.302289] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.387 [2024-11-26 21:06:56.302699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.387 [2024-11-26 21:06:56.302734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:05.387 [2024-11-26 21:06:56.302758] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:05.387 [2024-11-26 21:06:56.302998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:05.387 [2024-11-26 21:06:56.303242] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.387 [2024-11-26 21:06:56.303266] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.387 [2024-11-26 21:06:56.303282] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.387 [2024-11-26 21:06:56.303296] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.387 [2024-11-26 21:06:56.316137] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:05.387 [2024-11-26 21:06:56.316531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.387 [2024-11-26 21:06:56.316571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:05.387 [2024-11-26 21:06:56.316589] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:05.387 [2024-11-26 21:06:56.316837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:05.387 [2024-11-26 21:06:56.317081] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:05.387 [2024-11-26 21:06:56.317106] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:05.387 [2024-11-26 21:06:56.317122] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:05.387 [2024-11-26 21:06:56.317136] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:05.647 [2024-11-26 21:06:56.329972] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:05.647 [2024-11-26 21:06:56.330390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.647 [2024-11-26 21:06:56.330422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:05.647 [2024-11-26 21:06:56.330440] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:05.647 [2024-11-26 21:06:56.330678] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:05.647 [2024-11-26 21:06:56.330933] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:05.647 [2024-11-26 21:06:56.330958] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:05.647 [2024-11-26 21:06:56.330974] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:05.647 [2024-11-26 21:06:56.330988] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:05.647 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 4082008 Killed "${NVMF_APP[@]}" "$@"
00:26:05.647 21:06:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:26:05.647 21:06:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:26:05.647 21:06:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:26:05.647 21:06:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:26:05.647 21:06:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:05.647 21:06:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=4082967
00:26:05.647 21:06:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:26:05.647 21:06:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 4082967
00:26:05.647 21:06:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 4082967 ']'
00:26:05.647 21:06:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:05.647 21:06:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:05.647 21:06:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:05.647 21:06:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:26:05.647 21:06:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:05.647 [2024-11-26 21:06:56.344138] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:05.647 [2024-11-26 21:06:56.344578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.647 [2024-11-26 21:06:56.344612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:05.647 [2024-11-26 21:06:56.344631] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:05.647 [2024-11-26 21:06:56.344886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:05.647 [2024-11-26 21:06:56.345132] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:05.647 [2024-11-26 21:06:56.345157] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:05.647 [2024-11-26 21:06:56.345174] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:05.647 [2024-11-26 21:06:56.345189] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:05.647 [2024-11-26 21:06:56.358089] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:05.647 [2024-11-26 21:06:56.358512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.647 [2024-11-26 21:06:56.358545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:05.647 [2024-11-26 21:06:56.358563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:05.647 [2024-11-26 21:06:56.358815] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:05.647 [2024-11-26 21:06:56.359060] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:05.647 [2024-11-26 21:06:56.359085] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:05.647 [2024-11-26 21:06:56.359101] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:05.647 [2024-11-26 21:06:56.359116] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:05.647 [2024-11-26 21:06:56.372006] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:05.647 [2024-11-26 21:06:56.372413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.647 [2024-11-26 21:06:56.372444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:05.647 [2024-11-26 21:06:56.372462] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:05.647 [2024-11-26 21:06:56.372719] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:05.647 [2024-11-26 21:06:56.372962] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:05.647 [2024-11-26 21:06:56.372986] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:05.647 [2024-11-26 21:06:56.373002] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:05.647 [2024-11-26 21:06:56.373016] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:05.647 [2024-11-26 21:06:56.385935] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:05.647 [2024-11-26 21:06:56.386334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.647 [2024-11-26 21:06:56.386367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:05.647 [2024-11-26 21:06:56.386385] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:05.647 [2024-11-26 21:06:56.386624] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:05.647 [2024-11-26 21:06:56.386878] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:05.647 [2024-11-26 21:06:56.386903] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:05.647 [2024-11-26 21:06:56.386919] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:05.647 [2024-11-26 21:06:56.386933] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:05.647 [2024-11-26 21:06:56.393660] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization...
00:26:05.647 [2024-11-26 21:06:56.393742] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:05.647 [2024-11-26 21:06:56.399981] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:05.647 [2024-11-26 21:06:56.400394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.647 [2024-11-26 21:06:56.400425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:05.647 [2024-11-26 21:06:56.400443] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:05.647 [2024-11-26 21:06:56.400680] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:05.647 [2024-11-26 21:06:56.401111] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:05.648 [2024-11-26 21:06:56.401135] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:05.648 [2024-11-26 21:06:56.401151] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:05.648 [2024-11-26 21:06:56.401165] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:05.648 [2024-11-26 21:06:56.413825] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:05.648 [2024-11-26 21:06:56.414241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.648 [2024-11-26 21:06:56.414273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:05.648 [2024-11-26 21:06:56.414291] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:05.648 [2024-11-26 21:06:56.414535] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:05.648 [2024-11-26 21:06:56.414789] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:05.648 [2024-11-26 21:06:56.414814] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:05.648 [2024-11-26 21:06:56.414829] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:05.648 [2024-11-26 21:06:56.414843] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:05.648 [2024-11-26 21:06:56.427697] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:05.648 [2024-11-26 21:06:56.428113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.648 [2024-11-26 21:06:56.428144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:05.648 [2024-11-26 21:06:56.428163] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:05.648 [2024-11-26 21:06:56.428400] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:05.648 [2024-11-26 21:06:56.428643] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:05.648 [2024-11-26 21:06:56.428666] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:05.648 [2024-11-26 21:06:56.428681] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:05.648 [2024-11-26 21:06:56.428707] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:05.648 [2024-11-26 21:06:56.441573] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:05.648 [2024-11-26 21:06:56.442008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.648 [2024-11-26 21:06:56.442040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:05.648 [2024-11-26 21:06:56.442058] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:05.648 [2024-11-26 21:06:56.442295] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:05.648 [2024-11-26 21:06:56.442538] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:05.648 [2024-11-26 21:06:56.442562] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:05.648 [2024-11-26 21:06:56.442577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:05.648 [2024-11-26 21:06:56.442590] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:05.648 [2024-11-26 21:06:56.455363] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:05.648 [2024-11-26 21:06:56.455741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.648 [2024-11-26 21:06:56.455770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:05.648 [2024-11-26 21:06:56.455786] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:05.648 [2024-11-26 21:06:56.455999] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:05.648 [2024-11-26 21:06:56.456238] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:05.648 [2024-11-26 21:06:56.456263] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:05.648 [2024-11-26 21:06:56.456276] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:05.648 [2024-11-26 21:06:56.456288] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:05.648 [2024-11-26 21:06:56.468892] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:05.648 [2024-11-26 21:06:56.469327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.648 [2024-11-26 21:06:56.469356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:05.648 [2024-11-26 21:06:56.469372] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:05.648 [2024-11-26 21:06:56.469613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:05.648 [2024-11-26 21:06:56.469854] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:05.648 [2024-11-26 21:06:56.469877] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:05.648 [2024-11-26 21:06:56.469891] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:05.648 [2024-11-26 21:06:56.469904] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:05.648 [2024-11-26 21:06:56.478794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:26:05.648 [2024-11-26 21:06:56.482841] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:05.648 [2024-11-26 21:06:56.483209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.648 [2024-11-26 21:06:56.483238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:05.648 [2024-11-26 21:06:56.483270] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:05.648 [2024-11-26 21:06:56.483531] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:05.648 [2024-11-26 21:06:56.483776] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:05.648 [2024-11-26 21:06:56.483798] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:05.648 [2024-11-26 21:06:56.483813] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:05.648 [2024-11-26 21:06:56.483827] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:05.648 [2024-11-26 21:06:56.496808] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:05.648 [2024-11-26 21:06:56.497491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.648 [2024-11-26 21:06:56.497534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:05.648 [2024-11-26 21:06:56.497556] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:05.648 [2024-11-26 21:06:56.497836] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:05.648 [2024-11-26 21:06:56.498077] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:05.648 [2024-11-26 21:06:56.498098] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:05.648 [2024-11-26 21:06:56.498115] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:05.648 [2024-11-26 21:06:56.498141] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:05.648 [2024-11-26 21:06:56.510827] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:05.648 [2024-11-26 21:06:56.511303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.648 [2024-11-26 21:06:56.511332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:05.649 [2024-11-26 21:06:56.511348] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:05.649 [2024-11-26 21:06:56.511590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:05.649 [2024-11-26 21:06:56.511827] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:05.649 [2024-11-26 21:06:56.511848] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:05.649 [2024-11-26 21:06:56.511862] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:05.649 [2024-11-26 21:06:56.511874] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:05.649 [2024-11-26 21:06:56.524508] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:05.649 [2024-11-26 21:06:56.524904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.649 [2024-11-26 21:06:56.524933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:05.649 [2024-11-26 21:06:56.524950] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:05.649 [2024-11-26 21:06:56.525192] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:05.649 [2024-11-26 21:06:56.525399] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:05.649 [2024-11-26 21:06:56.525419] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:05.649 [2024-11-26 21:06:56.525431] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:05.649 [2024-11-26 21:06:56.525444] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:05.649 [2024-11-26 21:06:56.537869] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:05.649 [2024-11-26 21:06:56.538271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.649 [2024-11-26 21:06:56.538299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:05.649 [2024-11-26 21:06:56.538315] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:05.649 [2024-11-26 21:06:56.538557] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:05.649 [2024-11-26 21:06:56.538791] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:05.649 [2024-11-26 21:06:56.538812] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:05.649 [2024-11-26 21:06:56.538826] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:05.649 [2024-11-26 21:06:56.538838] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:05.649 [2024-11-26 21:06:56.541333] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:05.649 [2024-11-26 21:06:56.541370] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:05.649 [2024-11-26 21:06:56.541399] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:26:05.649 [2024-11-26 21:06:56.541410] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:26:05.649 [2024-11-26 21:06:56.541420] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:26:05.649 [2024-11-26 21:06:56.542837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:26:05.649 [2024-11-26 21:06:56.542901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:26:05.649 [2024-11-26 21:06:56.542905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:26:05.649 [2024-11-26 21:06:56.551548] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:05.649 [2024-11-26 21:06:56.552122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.649 [2024-11-26 21:06:56.552163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:05.649 [2024-11-26 21:06:56.552184] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:05.649 [2024-11-26 21:06:56.552407] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:05.649 [2024-11-26 21:06:56.552632] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:05.649 [2024-11-26 21:06:56.552655] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:05.649 [2024-11-26 21:06:56.552672] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:05.649 [2024-11-26 21:06:56.552694] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:05.649 [2024-11-26 21:06:56.565201] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:05.649 [2024-11-26 21:06:56.565779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.649 [2024-11-26 21:06:56.565821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:05.649 [2024-11-26 21:06:56.565842] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:05.649 [2024-11-26 21:06:56.566066] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:05.649 [2024-11-26 21:06:56.566292] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:05.649 [2024-11-26 21:06:56.566315] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:05.649 [2024-11-26 21:06:56.566332] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:05.649 [2024-11-26 21:06:56.566347] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:05.649 [2024-11-26 21:06:56.578819] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:05.649 [2024-11-26 21:06:56.579357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.649 [2024-11-26 21:06:56.579398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:05.649 [2024-11-26 21:06:56.579420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:05.649 [2024-11-26 21:06:56.579645] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:05.649 [2024-11-26 21:06:56.579879] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:05.649 [2024-11-26 21:06:56.579918] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:05.649 [2024-11-26 21:06:56.579936] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:05.649 [2024-11-26 21:06:56.579951] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:05.908 [2024-11-26 21:06:56.592511] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:05.908 [2024-11-26 21:06:56.593069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.909 [2024-11-26 21:06:56.593108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420
00:26:05.909 [2024-11-26 21:06:56.593129] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set
00:26:05.909 [2024-11-26 21:06:56.593352] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor
00:26:05.909 [2024-11-26 21:06:56.593577] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:05.909 [2024-11-26 21:06:56.593599] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:05.909 [2024-11-26 21:06:56.593617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:05.909 [2024-11-26 21:06:56.593633] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:05.909 [2024-11-26 21:06:56.606267] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.909 [2024-11-26 21:06:56.606794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.909 [2024-11-26 21:06:56.606832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:05.909 [2024-11-26 21:06:56.606852] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:05.909 [2024-11-26 21:06:56.607075] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:05.909 [2024-11-26 21:06:56.607299] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.909 [2024-11-26 21:06:56.607321] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.909 [2024-11-26 21:06:56.607339] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.909 [2024-11-26 21:06:56.607355] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.909 [2024-11-26 21:06:56.619962] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.909 [2024-11-26 21:06:56.620547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.909 [2024-11-26 21:06:56.620588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:05.909 [2024-11-26 21:06:56.620610] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:05.909 [2024-11-26 21:06:56.620843] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:05.909 [2024-11-26 21:06:56.621068] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.909 [2024-11-26 21:06:56.621091] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.909 [2024-11-26 21:06:56.621107] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.909 [2024-11-26 21:06:56.621134] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.909 [2024-11-26 21:06:56.633606] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.909 [2024-11-26 21:06:56.634033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.909 [2024-11-26 21:06:56.634063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:05.909 [2024-11-26 21:06:56.634079] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:05.909 [2024-11-26 21:06:56.634294] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:05.909 [2024-11-26 21:06:56.634515] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.909 [2024-11-26 21:06:56.634536] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.909 [2024-11-26 21:06:56.634549] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.909 [2024-11-26 21:06:56.634562] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.909 [2024-11-26 21:06:56.647295] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.909 [2024-11-26 21:06:56.647675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.909 [2024-11-26 21:06:56.647711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:05.909 [2024-11-26 21:06:56.647728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:05.909 [2024-11-26 21:06:56.647942] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:05.909 [2024-11-26 21:06:56.648160] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.909 [2024-11-26 21:06:56.648182] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.909 [2024-11-26 21:06:56.648196] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.909 [2024-11-26 21:06:56.648208] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.909 [2024-11-26 21:06:56.662027] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.909 [2024-11-26 21:06:56.662509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.909 [2024-11-26 21:06:56.662550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:05.909 [2024-11-26 21:06:56.662577] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:05.909 [2024-11-26 21:06:56.662862] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:05.909 [2024-11-26 21:06:56.663136] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.909 [2024-11-26 21:06:56.663167] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.909 [2024-11-26 21:06:56.663190] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.909 [2024-11-26 21:06:56.663211] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.909 21:06:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:05.909 21:06:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:26:05.909 21:06:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:05.909 21:06:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:05.909 21:06:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:05.909 [2024-11-26 21:06:56.675906] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.909 [2024-11-26 21:06:56.676275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.909 [2024-11-26 21:06:56.676306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:05.909 [2024-11-26 21:06:56.676323] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:05.909 [2024-11-26 21:06:56.676538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:05.909 [2024-11-26 21:06:56.676773] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.909 [2024-11-26 21:06:56.676796] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.909 [2024-11-26 21:06:56.676809] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.909 [2024-11-26 21:06:56.676823] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.909 [2024-11-26 21:06:56.689452] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.909 [2024-11-26 21:06:56.689854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.909 [2024-11-26 21:06:56.689884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:05.910 [2024-11-26 21:06:56.689900] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:05.910 [2024-11-26 21:06:56.690114] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:05.910 [2024-11-26 21:06:56.690333] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.910 [2024-11-26 21:06:56.690355] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.910 [2024-11-26 21:06:56.690368] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.910 [2024-11-26 21:06:56.690382] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.910 21:06:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:05.910 21:06:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:05.910 21:06:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.910 21:06:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:05.910 [2024-11-26 21:06:56.703013] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.910 [2024-11-26 21:06:56.703373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.910 [2024-11-26 21:06:56.703402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:05.910 [2024-11-26 21:06:56.703418] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:05.910 [2024-11-26 21:06:56.703632] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:05.910 [2024-11-26 21:06:56.703861] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.910 [2024-11-26 21:06:56.703907] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.910 [2024-11-26 21:06:56.703922] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.910 [2024-11-26 21:06:56.703935] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.910 [2024-11-26 21:06:56.705257] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:05.910 21:06:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.910 21:06:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:05.910 21:06:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.910 21:06:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:05.910 [2024-11-26 21:06:56.716590] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.910 [2024-11-26 21:06:56.717001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.910 [2024-11-26 21:06:56.717033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:05.910 [2024-11-26 21:06:56.717050] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:05.910 [2024-11-26 21:06:56.717267] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:05.910 [2024-11-26 21:06:56.717488] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.910 [2024-11-26 21:06:56.717509] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.910 [2024-11-26 21:06:56.717524] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.910 [2024-11-26 21:06:56.717537] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.910 [2024-11-26 21:06:56.730072] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.910 [2024-11-26 21:06:56.730471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.910 [2024-11-26 21:06:56.730501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:05.910 [2024-11-26 21:06:56.730518] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:05.910 [2024-11-26 21:06:56.730741] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:05.910 [2024-11-26 21:06:56.730962] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.910 [2024-11-26 21:06:56.730984] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.910 [2024-11-26 21:06:56.730997] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.910 [2024-11-26 21:06:56.731025] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.910 [2024-11-26 21:06:56.743608] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.910 [2024-11-26 21:06:56.743973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.910 [2024-11-26 21:06:56.744002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:05.910 [2024-11-26 21:06:56.744018] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:05.910 [2024-11-26 21:06:56.744232] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:05.910 [2024-11-26 21:06:56.744464] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.910 [2024-11-26 21:06:56.744486] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.910 [2024-11-26 21:06:56.744500] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.910 [2024-11-26 21:06:56.744513] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.910 Malloc0 00:26:05.910 21:06:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.910 21:06:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:05.910 21:06:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.910 21:06:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:05.910 [2024-11-26 21:06:56.757155] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.910 [2024-11-26 21:06:56.757639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.910 [2024-11-26 21:06:56.757672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:05.910 [2024-11-26 21:06:56.757702] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:05.910 [2024-11-26 21:06:56.757925] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:05.910 [2024-11-26 21:06:56.758148] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.910 [2024-11-26 21:06:56.758170] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.910 [2024-11-26 21:06:56.758187] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.910 [2024-11-26 21:06:56.758202] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.910 21:06:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.910 21:06:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:05.910 21:06:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.910 21:06:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:05.910 [2024-11-26 21:06:56.770861] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.910 [2024-11-26 21:06:56.771230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.910 [2024-11-26 21:06:56.771259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9a50 with addr=10.0.0.2, port=4420 00:26:05.911 [2024-11-26 21:06:56.771275] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9a50 is same with the state(6) to be set 00:26:05.911 21:06:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.911 [2024-11-26 21:06:56.771490] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9a50 (9): Bad file descriptor 00:26:05.911 21:06:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:05.911 [2024-11-26 21:06:56.771718] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in err 21:06:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.911 or state 00:26:05.911 [2024-11-26 21:06:56.771744] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.911 [2024-11-26 21:06:56.771758] 
nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.911 [2024-11-26 21:06:56.771780] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:05.911 21:06:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:05.911 [2024-11-26 21:06:56.775432] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:05.911 21:06:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.911 21:06:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 4082281 00:26:05.911 [2024-11-26 21:06:56.784405] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:06.174 3534.50 IOPS, 13.81 MiB/s [2024-11-26T20:06:57.112Z] [2024-11-26 21:06:56.846544] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
00:26:08.124 4213.29 IOPS, 16.46 MiB/s [2024-11-26T20:07:00.004Z] 4729.88 IOPS, 18.48 MiB/s [2024-11-26T20:07:00.938Z] 5145.11 IOPS, 20.10 MiB/s [2024-11-26T20:07:01.871Z] 5454.20 IOPS, 21.31 MiB/s [2024-11-26T20:07:03.244Z] 5734.00 IOPS, 22.40 MiB/s [2024-11-26T20:07:04.177Z] 5906.92 IOPS, 23.07 MiB/s [2024-11-26T20:07:05.111Z] 6105.69 IOPS, 23.85 MiB/s [2024-11-26T20:07:06.044Z] 6273.43 IOPS, 24.51 MiB/s [2024-11-26T20:07:06.044Z] 6416.93 IOPS, 25.07 MiB/s 00:26:15.106 Latency(us) 00:26:15.106 [2024-11-26T20:07:06.044Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:15.106 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:15.106 Verification LBA range: start 0x0 length 0x4000 00:26:15.106 Nvme1n1 : 15.01 6415.37 25.06 9001.20 0.00 8277.27 813.13 22816.24 00:26:15.106 [2024-11-26T20:07:06.044Z] =================================================================================================================== 00:26:15.106 [2024-11-26T20:07:06.044Z] Total : 6415.37 25.06 9001.20 0.00 8277.27 813.13 22816.24 00:26:15.363 21:07:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:26:15.363 21:07:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:15.363 21:07:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.363 21:07:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:15.363 21:07:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.363 21:07:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:26:15.363 21:07:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:26:15.363 21:07:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:15.363 21:07:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 
-- # sync 00:26:15.363 21:07:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:15.363 21:07:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:26:15.363 21:07:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:15.363 21:07:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:15.363 rmmod nvme_tcp 00:26:15.363 rmmod nvme_fabrics 00:26:15.363 rmmod nvme_keyring 00:26:15.363 21:07:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:15.363 21:07:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:26:15.363 21:07:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:26:15.363 21:07:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 4082967 ']' 00:26:15.363 21:07:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 4082967 00:26:15.363 21:07:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 4082967 ']' 00:26:15.363 21:07:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 4082967 00:26:15.363 21:07:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:26:15.363 21:07:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:15.363 21:07:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4082967 00:26:15.363 21:07:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:15.363 21:07:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:15.363 21:07:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4082967' 00:26:15.363 killing process with pid 4082967 00:26:15.363 21:07:06 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 4082967 00:26:15.363 21:07:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 4082967 00:26:15.621 21:07:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:15.621 21:07:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:15.621 21:07:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:15.621 21:07:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:26:15.621 21:07:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:26:15.621 21:07:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:15.621 21:07:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:26:15.621 21:07:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:15.621 21:07:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:15.621 21:07:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:15.621 21:07:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:15.621 21:07:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:18.154 21:07:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:18.154 00:26:18.154 real 0m22.614s 00:26:18.154 user 1m0.537s 00:26:18.154 sys 0m4.124s 00:26:18.154 21:07:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:18.154 21:07:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:18.154 ************************************ 00:26:18.154 END TEST nvmf_bdevperf 00:26:18.154 
************************************ 00:26:18.154 21:07:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:18.154 21:07:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:18.154 21:07:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:18.154 21:07:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.154 ************************************ 00:26:18.154 START TEST nvmf_target_disconnect 00:26:18.154 ************************************ 00:26:18.154 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:18.154 * Looking for test storage... 00:26:18.154 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:18.154 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:18.154 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:26:18.154 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:18.154 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:18.154 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:18.154 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:18.154 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:18.154 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:26:18.154 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@336 -- # read -ra ver1 00:26:18.154 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:26:18.155 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:26:18.155 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:26:18.155 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:26:18.155 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:26:18.155 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:18.155 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:26:18.155 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:26:18.155 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:18.155 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:18.155 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:26:18.155 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:26:18.155 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:18.155 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:26:18.155 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:26:18.155 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:26:18.155 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:26:18.155 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:18.155 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:26:18.155 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:26:18.155 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:18.155 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:18.155 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:26:18.155 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:18.155 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:18.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:18.155 --rc genhtml_branch_coverage=1 00:26:18.155 --rc genhtml_function_coverage=1 00:26:18.155 --rc genhtml_legend=1 00:26:18.155 --rc geninfo_all_blocks=1 00:26:18.155 --rc geninfo_unexecuted_blocks=1 
00:26:18.155 00:26:18.155 ' 00:26:18.155 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:18.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:18.155 --rc genhtml_branch_coverage=1 00:26:18.155 --rc genhtml_function_coverage=1 00:26:18.155 --rc genhtml_legend=1 00:26:18.155 --rc geninfo_all_blocks=1 00:26:18.155 --rc geninfo_unexecuted_blocks=1 00:26:18.155 00:26:18.155 ' 00:26:18.155 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:18.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:18.155 --rc genhtml_branch_coverage=1 00:26:18.155 --rc genhtml_function_coverage=1 00:26:18.155 --rc genhtml_legend=1 00:26:18.155 --rc geninfo_all_blocks=1 00:26:18.155 --rc geninfo_unexecuted_blocks=1 00:26:18.155 00:26:18.155 ' 00:26:18.155 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:18.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:18.155 --rc genhtml_branch_coverage=1 00:26:18.155 --rc genhtml_function_coverage=1 00:26:18.155 --rc genhtml_legend=1 00:26:18.155 --rc geninfo_all_blocks=1 00:26:18.155 --rc geninfo_unexecuted_blocks=1 00:26:18.155 00:26:18.155 ' 00:26:18.155 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:18.155 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:26:18.155 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:18.155 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:18.155 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:18.155 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:26:18.155 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:18.155 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:18.155 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:18.155 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:18.155 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:18.155 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:18.155 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:18.155 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:18.155 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:18.155 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:18.155 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:18.155 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:18.155 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:18.155 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:26:18.155 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:18.155 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:18.155 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:18.155 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:18.155 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:18.155 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:18.155 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:26:18.155 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:18.155 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:26:18.155 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:18.155 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:18.155 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:18.155 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:18.155 21:07:08 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:18.155 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:18.155 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:18.155 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:18.155 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:18.155 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:18.155 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:26:18.155 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:26:18.155 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:26:18.155 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:26:18.155 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:18.155 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:18.155 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:18.155 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:18.155 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:18.155 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:18.156 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:26:18.156 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:18.156 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:18.156 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:18.156 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:26:18.156 21:07:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:20.059 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:20.059 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:26:20.059 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:20.059 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:20.059 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:20.059 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:20.059 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:20.059 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:26:20.059 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:20.059 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:26:20.059 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:26:20.059 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:26:20.059 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:26:20.059 
21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:26:20.059 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:26:20.059 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:20.059 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:20.059 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:20.059 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:20.059 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:20.059 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:20.059 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:20.059 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:20.059 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:20.059 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:20.059 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:20.059 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:20.059 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:20.059 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:20.059 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:20.059 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:20.059 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:20.059 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:20.059 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:20.059 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:20.059 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:20.059 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:20.059 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:20.059 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:20.059 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:20.059 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:20.059 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:20.059 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:20.059 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:20.059 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:20.059 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:20.059 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:26:20.059 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:20.059 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:20.059 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:20.059 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:20.059 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:20.059 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:20.059 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:20.059 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:20.059 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:20.059 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:20.059 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:20.059 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:20.059 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:20.059 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:20.059 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:20.059 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:20.059 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:26:20.059 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:20.059 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:20.059 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:20.059 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:20.060 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:20.060 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:20.060 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:20.060 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:20.060 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:20.060 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:26:20.060 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:20.060 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:20.060 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:20.060 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:20.060 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:20.060 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:20.060 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:20.060 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:20.060 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:20.060 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:20.060 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:20.060 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:20.060 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:20.060 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:20.060 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:20.060 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:20.060 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:20.060 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:20.060 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:20.060 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:20.060 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:20.060 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:20.060 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:20.060 21:07:10 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:20.060 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:20.060 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:20.060 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:20.060 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.373 ms 00:26:20.060 00:26:20.060 --- 10.0.0.2 ping statistics --- 00:26:20.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:20.060 rtt min/avg/max/mdev = 0.373/0.373/0.373/0.000 ms 00:26:20.060 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:20.060 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:20.060 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:26:20.060 00:26:20.060 --- 10.0.0.1 ping statistics --- 00:26:20.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:20.060 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:26:20.060 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:20.060 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:26:20.060 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:20.060 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:20.060 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:20.060 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:20.060 21:07:10 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:20.060 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:20.060 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:20.060 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:26:20.060 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:20.060 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:20.060 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:20.060 ************************************ 00:26:20.060 START TEST nvmf_target_disconnect_tc1 00:26:20.060 ************************************ 00:26:20.060 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:26:20.060 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:20.060 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:26:20.060 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:20.060 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:20.060 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:20.060 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:20.060 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:20.060 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:20.060 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:20.060 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:20.060 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:26:20.060 21:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:20.320 [2024-11-26 21:07:11.034841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.320 [2024-11-26 21:07:11.034923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe12f40 with 
addr=10.0.0.2, port=4420 00:26:20.320 [2024-11-26 21:07:11.034975] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:20.320 [2024-11-26 21:07:11.035002] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:20.320 [2024-11-26 21:07:11.035016] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:26:20.320 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:26:20.320 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:26:20.320 Initializing NVMe Controllers 00:26:20.320 21:07:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:26:20.320 21:07:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:20.320 21:07:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:20.320 21:07:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:20.320 00:26:20.320 real 0m0.099s 00:26:20.320 user 0m0.043s 00:26:20.320 sys 0m0.055s 00:26:20.320 21:07:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:20.320 21:07:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:20.320 ************************************ 00:26:20.320 END TEST nvmf_target_disconnect_tc1 00:26:20.320 ************************************ 00:26:20.320 21:07:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:26:20.320 21:07:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:20.320 21:07:11 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:20.320 21:07:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:20.320 ************************************ 00:26:20.320 START TEST nvmf_target_disconnect_tc2 00:26:20.320 ************************************ 00:26:20.320 21:07:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:26:20.320 21:07:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:26:20.320 21:07:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:26:20.320 21:07:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:20.320 21:07:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:20.320 21:07:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:20.320 21:07:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=4086109 00:26:20.320 21:07:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:26:20.320 21:07:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 4086109 00:26:20.320 21:07:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 4086109 ']' 00:26:20.320 21:07:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:20.320 21:07:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:20.320 21:07:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:20.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:20.320 21:07:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:20.320 21:07:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:20.320 [2024-11-26 21:07:11.153879] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:26:20.320 [2024-11-26 21:07:11.153956] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:20.320 [2024-11-26 21:07:11.229376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:20.579 [2024-11-26 21:07:11.290439] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:20.579 [2024-11-26 21:07:11.290498] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:20.579 [2024-11-26 21:07:11.290525] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:20.579 [2024-11-26 21:07:11.290535] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:20.579 [2024-11-26 21:07:11.290544] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:20.579 [2024-11-26 21:07:11.292140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:26:20.579 [2024-11-26 21:07:11.292203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:26:20.579 [2024-11-26 21:07:11.292308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:26:20.579 [2024-11-26 21:07:11.292317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:26:20.579 21:07:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:20.579 21:07:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:26:20.579 21:07:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:20.579 21:07:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:20.579 21:07:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:20.579 21:07:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:20.579 21:07:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:20.579 21:07:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.579 21:07:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:20.579 Malloc0 00:26:20.579 21:07:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.579 21:07:11 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:20.579 21:07:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.579 21:07:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:20.579 [2024-11-26 21:07:11.475656] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:20.579 21:07:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.579 21:07:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:20.579 21:07:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.579 21:07:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:20.579 21:07:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.579 21:07:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:20.579 21:07:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.579 21:07:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:20.579 21:07:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.579 21:07:11 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:20.579 21:07:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.579 21:07:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:20.579 [2024-11-26 21:07:11.504007] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:20.579 21:07:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.579 21:07:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:20.579 21:07:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.579 21:07:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:20.579 21:07:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.837 21:07:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=4086254 00:26:20.837 21:07:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:26:20.837 21:07:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:22.747 21:07:13 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 4086109 00:26:22.747 21:07:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:26:22.747 Read completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Read completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Read completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Read completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Read completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Read completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Read completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Read completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Read completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Write completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Write completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Write completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Write completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Write completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Read completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Write completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Write completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Write completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Write completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Read completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 
Read completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Read completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Read completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Write completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Read completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Read completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Read completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Read completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Read completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Read completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Write completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Write completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Read completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Read completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 [2024-11-26 21:07:13.530529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:22.747 Read completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Read completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Read completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Read completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Read completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Read completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Write completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 
00:26:22.747 Write completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Write completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Read completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Read completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Read completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Write completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Write completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Write completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Write completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Read completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Read completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Read completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Read completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Read completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Write completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Write completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Write completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Read completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Read completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Read completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Read completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Read completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Read completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 
[2024-11-26 21:07:13.530884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.747 Read completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Read completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Read completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Read completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Read completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Read completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Write completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Write completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Write completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Read completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Read completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Read completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Read completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Write completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Write completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Read completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Write completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Write completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Read completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Write completed with error (sct=0, sc=8) 00:26:22.747 starting I/O failed 00:26:22.747 Write completed with error (sct=0, sc=8) 00:26:22.747 starting I/O 
failed 00:26:22.747 Read completed with error (sct=0, sc=8) 00:26:22.748 starting I/O failed 00:26:22.748 Write completed with error (sct=0, sc=8) 00:26:22.748 starting I/O failed 00:26:22.748 Read completed with error (sct=0, sc=8) 00:26:22.748 starting I/O failed 00:26:22.748 Write completed with error (sct=0, sc=8) 00:26:22.748 starting I/O failed 00:26:22.748 Read completed with error (sct=0, sc=8) 00:26:22.748 starting I/O failed 00:26:22.748 Read completed with error (sct=0, sc=8) 00:26:22.748 starting I/O failed 00:26:22.748 Read completed with error (sct=0, sc=8) 00:26:22.748 starting I/O failed 00:26:22.748 Read completed with error (sct=0, sc=8) 00:26:22.748 starting I/O failed 00:26:22.748 Write completed with error (sct=0, sc=8) 00:26:22.748 starting I/O failed 00:26:22.748 Write completed with error (sct=0, sc=8) 00:26:22.748 starting I/O failed 00:26:22.748 Write completed with error (sct=0, sc=8) 00:26:22.748 starting I/O failed 00:26:22.748 [2024-11-26 21:07:13.531241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.748 [2024-11-26 21:07:13.531487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.748 [2024-11-26 21:07:13.531526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.748 qpair failed and we were unable to recover it. 00:26:22.748 [2024-11-26 21:07:13.531660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.748 [2024-11-26 21:07:13.531704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.748 qpair failed and we were unable to recover it. 
00:26:22.748 [2024-11-26 21:07:13.531857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.748 [2024-11-26 21:07:13.531884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.748 qpair failed and we were unable to recover it. 00:26:22.748 [2024-11-26 21:07:13.532025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.748 [2024-11-26 21:07:13.532053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.748 qpair failed and we were unable to recover it. 00:26:22.748 [2024-11-26 21:07:13.532170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.748 [2024-11-26 21:07:13.532213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.748 qpair failed and we were unable to recover it. 00:26:22.748 [2024-11-26 21:07:13.532400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.748 [2024-11-26 21:07:13.532429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.748 qpair failed and we were unable to recover it. 00:26:22.748 [2024-11-26 21:07:13.532612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.748 [2024-11-26 21:07:13.532641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.748 qpair failed and we were unable to recover it. 
00:26:22.748 [2024-11-26 21:07:13.532787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.748 [2024-11-26 21:07:13.532814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.748 qpair failed and we were unable to recover it. 00:26:22.748 [2024-11-26 21:07:13.532932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.748 [2024-11-26 21:07:13.532958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.748 qpair failed and we were unable to recover it. 00:26:22.748 [2024-11-26 21:07:13.533100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.748 [2024-11-26 21:07:13.533127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.748 qpair failed and we were unable to recover it. 00:26:22.748 [2024-11-26 21:07:13.533332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.748 [2024-11-26 21:07:13.533373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.748 qpair failed and we were unable to recover it. 00:26:22.748 [2024-11-26 21:07:13.533549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.748 [2024-11-26 21:07:13.533593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.748 qpair failed and we were unable to recover it. 
00:26:22.748 [2024-11-26 21:07:13.533732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.748 [2024-11-26 21:07:13.533759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.748 qpair failed and we were unable to recover it. 00:26:22.748 [2024-11-26 21:07:13.533913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.748 [2024-11-26 21:07:13.533939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.748 qpair failed and we were unable to recover it. 00:26:22.748 [2024-11-26 21:07:13.534143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.748 [2024-11-26 21:07:13.534170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.748 qpair failed and we were unable to recover it. 00:26:22.748 [2024-11-26 21:07:13.534317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.748 [2024-11-26 21:07:13.534344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.748 qpair failed and we were unable to recover it. 00:26:22.748 [2024-11-26 21:07:13.534515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.748 [2024-11-26 21:07:13.534544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.748 qpair failed and we were unable to recover it. 
00:26:22.748 [2024-11-26 21:07:13.534735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.748 [2024-11-26 21:07:13.534763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.748 qpair failed and we were unable to recover it. 00:26:22.748 [2024-11-26 21:07:13.534883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.748 [2024-11-26 21:07:13.534909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.748 qpair failed and we were unable to recover it. 00:26:22.748 [2024-11-26 21:07:13.535045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.748 [2024-11-26 21:07:13.535071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.748 qpair failed and we were unable to recover it. 00:26:22.748 [2024-11-26 21:07:13.535197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.748 [2024-11-26 21:07:13.535223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.748 qpair failed and we were unable to recover it. 00:26:22.748 [2024-11-26 21:07:13.535416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.748 [2024-11-26 21:07:13.535459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.748 qpair failed and we were unable to recover it. 
00:26:22.748 [2024-11-26 21:07:13.535616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.748 [2024-11-26 21:07:13.535642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.748 qpair failed and we were unable to recover it. 00:26:22.748 [2024-11-26 21:07:13.535779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.748 [2024-11-26 21:07:13.535806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.748 qpair failed and we were unable to recover it. 00:26:22.748 [2024-11-26 21:07:13.535924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.748 [2024-11-26 21:07:13.535951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.748 qpair failed and we were unable to recover it. 00:26:22.748 [2024-11-26 21:07:13.536095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.748 [2024-11-26 21:07:13.536123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.748 qpair failed and we were unable to recover it. 00:26:22.748 [2024-11-26 21:07:13.536259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.748 [2024-11-26 21:07:13.536288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.748 qpair failed and we were unable to recover it. 
00:26:22.748 [2024-11-26 21:07:13.536468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.748 [2024-11-26 21:07:13.536497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.748 qpair failed and we were unable to recover it. 00:26:22.748 [2024-11-26 21:07:13.536658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.748 [2024-11-26 21:07:13.536690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.748 qpair failed and we were unable to recover it. 00:26:22.748 [2024-11-26 21:07:13.536811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.748 [2024-11-26 21:07:13.536838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.748 qpair failed and we were unable to recover it. 00:26:22.748 [2024-11-26 21:07:13.536954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.748 [2024-11-26 21:07:13.536981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.748 qpair failed and we were unable to recover it. 00:26:22.748 [2024-11-26 21:07:13.537118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.748 [2024-11-26 21:07:13.537144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.748 qpair failed and we were unable to recover it. 
00:26:22.748 [2024-11-26 21:07:13.537311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.748 [2024-11-26 21:07:13.537355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.748 qpair failed and we were unable to recover it. 00:26:22.748 [2024-11-26 21:07:13.537466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.748 [2024-11-26 21:07:13.537495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.748 qpair failed and we were unable to recover it. 00:26:22.749 [2024-11-26 21:07:13.537674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.749 [2024-11-26 21:07:13.537711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.749 qpair failed and we were unable to recover it. 00:26:22.749 [2024-11-26 21:07:13.537833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.749 [2024-11-26 21:07:13.537859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.749 qpair failed and we were unable to recover it. 00:26:22.749 [2024-11-26 21:07:13.537964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.749 [2024-11-26 21:07:13.537990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.749 qpair failed and we were unable to recover it. 
00:26:22.749 [2024-11-26 21:07:13.538160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.749 [2024-11-26 21:07:13.538226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.749 qpair failed and we were unable to recover it. 00:26:22.749 [2024-11-26 21:07:13.538389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.749 [2024-11-26 21:07:13.538435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.749 qpair failed and we were unable to recover it. 00:26:22.749 [2024-11-26 21:07:13.538545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.749 [2024-11-26 21:07:13.538572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.749 qpair failed and we were unable to recover it. 00:26:22.749 [2024-11-26 21:07:13.538708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.749 [2024-11-26 21:07:13.538736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.749 qpair failed and we were unable to recover it. 00:26:22.749 [2024-11-26 21:07:13.538879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.749 [2024-11-26 21:07:13.538911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.749 qpair failed and we were unable to recover it. 
00:26:22.749 [2024-11-26 21:07:13.539044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.749 [2024-11-26 21:07:13.539070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.749 qpair failed and we were unable to recover it. 00:26:22.749 [2024-11-26 21:07:13.539194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.749 [2024-11-26 21:07:13.539237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.749 qpair failed and we were unable to recover it. 00:26:22.749 [2024-11-26 21:07:13.539372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.749 [2024-11-26 21:07:13.539398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.749 qpair failed and we were unable to recover it. 00:26:22.749 [2024-11-26 21:07:13.539560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.749 [2024-11-26 21:07:13.539586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.749 qpair failed and we were unable to recover it. 00:26:22.749 [2024-11-26 21:07:13.539730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.749 [2024-11-26 21:07:13.539758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.749 qpair failed and we were unable to recover it. 
00:26:22.749 [2024-11-26 21:07:13.539870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.749 [2024-11-26 21:07:13.539897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.749 qpair failed and we were unable to recover it. 00:26:22.749 [2024-11-26 21:07:13.540002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.749 [2024-11-26 21:07:13.540028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.749 qpair failed and we were unable to recover it. 00:26:22.749 [2024-11-26 21:07:13.540136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.749 [2024-11-26 21:07:13.540163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.749 qpair failed and we were unable to recover it. 00:26:22.749 [2024-11-26 21:07:13.540320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.749 [2024-11-26 21:07:13.540347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.749 qpair failed and we were unable to recover it. 00:26:22.749 [2024-11-26 21:07:13.540480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.749 [2024-11-26 21:07:13.540507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.749 qpair failed and we were unable to recover it. 
00:26:22.749 [2024-11-26 21:07:13.540647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.749 [2024-11-26 21:07:13.540676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.749 qpair failed and we were unable to recover it. 00:26:22.749 [2024-11-26 21:07:13.540819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.749 [2024-11-26 21:07:13.540847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.749 qpair failed and we were unable to recover it. 00:26:22.749 [2024-11-26 21:07:13.540974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.749 [2024-11-26 21:07:13.541000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.749 qpair failed and we were unable to recover it. 00:26:22.749 [2024-11-26 21:07:13.541137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.749 [2024-11-26 21:07:13.541182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.749 qpair failed and we were unable to recover it. 00:26:22.749 [2024-11-26 21:07:13.541408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.749 [2024-11-26 21:07:13.541437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.749 qpair failed and we were unable to recover it. 
00:26:22.749 [2024-11-26 21:07:13.541583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.749 [2024-11-26 21:07:13.541609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.749 qpair failed and we were unable to recover it. 00:26:22.749 [2024-11-26 21:07:13.541743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.749 [2024-11-26 21:07:13.541769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.749 qpair failed and we were unable to recover it. 00:26:22.749 [2024-11-26 21:07:13.541873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.749 [2024-11-26 21:07:13.541899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.749 qpair failed and we were unable to recover it. 00:26:22.749 [2024-11-26 21:07:13.542033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.749 [2024-11-26 21:07:13.542060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.749 qpair failed and we were unable to recover it. 00:26:22.749 [2024-11-26 21:07:13.542197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.749 [2024-11-26 21:07:13.542223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.749 qpair failed and we were unable to recover it. 
00:26:22.749 [2024-11-26 21:07:13.542329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.749 [2024-11-26 21:07:13.542355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.749 qpair failed and we were unable to recover it. 00:26:22.749 [2024-11-26 21:07:13.542465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.749 [2024-11-26 21:07:13.542491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.749 qpair failed and we were unable to recover it. 00:26:22.749 [2024-11-26 21:07:13.542609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.749 [2024-11-26 21:07:13.542650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.749 qpair failed and we were unable to recover it. 00:26:22.749 [2024-11-26 21:07:13.542785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.749 [2024-11-26 21:07:13.542825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:22.749 qpair failed and we were unable to recover it. 00:26:22.749 [2024-11-26 21:07:13.542971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.749 [2024-11-26 21:07:13.542999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:22.749 qpair failed and we were unable to recover it. 
00:26:22.749 [2024-11-26 21:07:13.543161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.749 [2024-11-26 21:07:13.543188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:22.749 qpair failed and we were unable to recover it. 00:26:22.749 [2024-11-26 21:07:13.543332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.749 [2024-11-26 21:07:13.543380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.749 qpair failed and we were unable to recover it. 00:26:22.749 [2024-11-26 21:07:13.543542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.749 [2024-11-26 21:07:13.543569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.749 qpair failed and we were unable to recover it. 00:26:22.749 [2024-11-26 21:07:13.543742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.749 [2024-11-26 21:07:13.543770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.749 qpair failed and we were unable to recover it. 00:26:22.749 [2024-11-26 21:07:13.543880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.749 [2024-11-26 21:07:13.543906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.749 qpair failed and we were unable to recover it. 
00:26:22.750 [2024-11-26 21:07:13.544038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.750 [2024-11-26 21:07:13.544075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.750 qpair failed and we were unable to recover it. 00:26:22.750 [2024-11-26 21:07:13.544266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.750 [2024-11-26 21:07:13.544293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.750 qpair failed and we were unable to recover it. 00:26:22.750 [2024-11-26 21:07:13.544465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.750 [2024-11-26 21:07:13.544491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.750 qpair failed and we were unable to recover it. 00:26:22.750 [2024-11-26 21:07:13.544597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.750 [2024-11-26 21:07:13.544624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.750 qpair failed and we were unable to recover it. 00:26:22.750 [2024-11-26 21:07:13.544767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.750 [2024-11-26 21:07:13.544795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.750 qpair failed and we were unable to recover it. 
00:26:22.750 [2024-11-26 21:07:13.544934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.750 [2024-11-26 21:07:13.544961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.750 qpair failed and we were unable to recover it. 00:26:22.750 [2024-11-26 21:07:13.545092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.750 [2024-11-26 21:07:13.545119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.750 qpair failed and we were unable to recover it. 00:26:22.750 [2024-11-26 21:07:13.545274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.750 [2024-11-26 21:07:13.545304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.750 qpair failed and we were unable to recover it. 00:26:22.750 [2024-11-26 21:07:13.545465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.750 [2024-11-26 21:07:13.545490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.750 qpair failed and we were unable to recover it. 00:26:22.750 [2024-11-26 21:07:13.545622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.750 [2024-11-26 21:07:13.545654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.750 qpair failed and we were unable to recover it. 
00:26:22.750 [2024-11-26 21:07:13.545885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.750 [2024-11-26 21:07:13.545912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.750 qpair failed and we were unable to recover it. 00:26:22.750 [2024-11-26 21:07:13.546050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.750 [2024-11-26 21:07:13.546077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.750 qpair failed and we were unable to recover it. 00:26:22.750 [2024-11-26 21:07:13.546214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.750 [2024-11-26 21:07:13.546241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.750 qpair failed and we were unable to recover it. 00:26:22.750 [2024-11-26 21:07:13.546397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.750 [2024-11-26 21:07:13.546427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.750 qpair failed and we were unable to recover it. 00:26:22.750 [2024-11-26 21:07:13.546581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.750 [2024-11-26 21:07:13.546626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.750 qpair failed and we were unable to recover it. 
00:26:22.750 [2024-11-26 21:07:13.546791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.750 [2024-11-26 21:07:13.546818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.750 qpair failed and we were unable to recover it. 00:26:22.750 [2024-11-26 21:07:13.546932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.750 [2024-11-26 21:07:13.546958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.750 qpair failed and we were unable to recover it. 00:26:22.750 [2024-11-26 21:07:13.547093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.750 [2024-11-26 21:07:13.547119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.750 qpair failed and we were unable to recover it. 00:26:22.750 [2024-11-26 21:07:13.547245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.750 [2024-11-26 21:07:13.547271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.750 qpair failed and we were unable to recover it. 00:26:22.750 [2024-11-26 21:07:13.547461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.750 [2024-11-26 21:07:13.547488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.750 qpair failed and we were unable to recover it. 
00:26:22.750 [2024-11-26 21:07:13.547602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.750 [2024-11-26 21:07:13.547628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.750 qpair failed and we were unable to recover it. 00:26:22.750 [2024-11-26 21:07:13.547763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.750 [2024-11-26 21:07:13.547790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.750 qpair failed and we were unable to recover it. 00:26:22.750 [2024-11-26 21:07:13.547900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.750 [2024-11-26 21:07:13.547929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.750 qpair failed and we were unable to recover it. 00:26:22.750 [2024-11-26 21:07:13.548073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.750 [2024-11-26 21:07:13.548100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.750 qpair failed and we were unable to recover it. 00:26:22.750 [2024-11-26 21:07:13.548229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.750 [2024-11-26 21:07:13.548255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.750 qpair failed and we were unable to recover it. 
00:26:22.750 [2024-11-26 21:07:13.548390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.750 [2024-11-26 21:07:13.548416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.750 qpair failed and we were unable to recover it. 00:26:22.750 [2024-11-26 21:07:13.548522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.750 [2024-11-26 21:07:13.548548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.750 qpair failed and we were unable to recover it. 00:26:22.750 [2024-11-26 21:07:13.548691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.750 [2024-11-26 21:07:13.548718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.750 qpair failed and we were unable to recover it. 00:26:22.750 [2024-11-26 21:07:13.548860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.750 [2024-11-26 21:07:13.548887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.750 qpair failed and we were unable to recover it. 00:26:22.750 [2024-11-26 21:07:13.549027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.750 [2024-11-26 21:07:13.549053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.750 qpair failed and we were unable to recover it. 
00:26:22.750 [2024-11-26 21:07:13.549187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.750 [2024-11-26 21:07:13.549213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.750 qpair failed and we were unable to recover it. 00:26:22.750 [2024-11-26 21:07:13.549327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.750 [2024-11-26 21:07:13.549353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.750 qpair failed and we were unable to recover it. 00:26:22.750 [2024-11-26 21:07:13.549487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.750 [2024-11-26 21:07:13.549522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.750 qpair failed and we were unable to recover it. 00:26:22.750 [2024-11-26 21:07:13.549657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.750 [2024-11-26 21:07:13.549683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.750 qpair failed and we were unable to recover it. 00:26:22.750 [2024-11-26 21:07:13.549800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.750 [2024-11-26 21:07:13.549826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.750 qpair failed and we were unable to recover it. 
00:26:22.750 [2024-11-26 21:07:13.549963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.750 [2024-11-26 21:07:13.549989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.750 qpair failed and we were unable to recover it. 00:26:22.750 Read completed with error (sct=0, sc=8) 00:26:22.750 starting I/O failed 00:26:22.750 Read completed with error (sct=0, sc=8) 00:26:22.750 starting I/O failed 00:26:22.750 Read completed with error (sct=0, sc=8) 00:26:22.750 starting I/O failed 00:26:22.750 Read completed with error (sct=0, sc=8) 00:26:22.750 starting I/O failed 00:26:22.750 Read completed with error (sct=0, sc=8) 00:26:22.750 starting I/O failed 00:26:22.750 Write completed with error (sct=0, sc=8) 00:26:22.750 starting I/O failed 00:26:22.750 Write completed with error (sct=0, sc=8) 00:26:22.750 starting I/O failed 00:26:22.751 Write completed with error (sct=0, sc=8) 00:26:22.751 starting I/O failed 00:26:22.751 Read completed with error (sct=0, sc=8) 00:26:22.751 starting I/O failed 00:26:22.751 Read completed with error (sct=0, sc=8) 00:26:22.751 starting I/O failed 00:26:22.751 Write completed with error (sct=0, sc=8) 00:26:22.751 starting I/O failed 00:26:22.751 Write completed with error (sct=0, sc=8) 00:26:22.751 starting I/O failed 00:26:22.751 Read completed with error (sct=0, sc=8) 00:26:22.751 starting I/O failed 00:26:22.751 Read completed with error (sct=0, sc=8) 00:26:22.751 starting I/O failed 00:26:22.751 Write completed with error (sct=0, sc=8) 00:26:22.751 starting I/O failed 00:26:22.751 Read completed with error (sct=0, sc=8) 00:26:22.751 starting I/O failed 00:26:22.751 Read completed with error (sct=0, sc=8) 00:26:22.751 starting I/O failed 00:26:22.751 Write completed with error (sct=0, sc=8) 00:26:22.751 starting I/O failed 00:26:22.751 Write completed with error (sct=0, sc=8) 00:26:22.751 starting I/O failed 
00:26:22.751 Write completed with error (sct=0, sc=8) 00:26:22.751 starting I/O failed 00:26:22.751 Read completed with error (sct=0, sc=8) 00:26:22.751 starting I/O failed 00:26:22.751 Write completed with error (sct=0, sc=8) 00:26:22.751 starting I/O failed 00:26:22.751 Write completed with error (sct=0, sc=8) 00:26:22.751 starting I/O failed 00:26:22.751 Write completed with error (sct=0, sc=8) 00:26:22.751 starting I/O failed 00:26:22.751 Write completed with error (sct=0, sc=8) 00:26:22.751 starting I/O failed 00:26:22.751 Read completed with error (sct=0, sc=8) 00:26:22.751 starting I/O failed 00:26:22.751 Read completed with error (sct=0, sc=8) 00:26:22.751 starting I/O failed 00:26:22.751 Read completed with error (sct=0, sc=8) 00:26:22.751 starting I/O failed 00:26:22.751 Read completed with error (sct=0, sc=8) 00:26:22.751 starting I/O failed 00:26:22.751 Read completed with error (sct=0, sc=8) 00:26:22.751 starting I/O failed 00:26:22.751 Read completed with error (sct=0, sc=8) 00:26:22.751 starting I/O failed 00:26:22.751 Write completed with error (sct=0, sc=8) 00:26:22.751 starting I/O failed 00:26:22.751 [2024-11-26 21:07:13.550300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:22.751 [2024-11-26 21:07:13.550442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.751 [2024-11-26 21:07:13.550472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.751 qpair failed and we were unable to recover it. 
00:26:22.751 [2024-11-26 21:07:13.550604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.751 [2024-11-26 21:07:13.550631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.751 qpair failed and we were unable to recover it. 00:26:22.751 [2024-11-26 21:07:13.550768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.751 [2024-11-26 21:07:13.550795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.751 qpair failed and we were unable to recover it. 00:26:22.751 [2024-11-26 21:07:13.550912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.751 [2024-11-26 21:07:13.550938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.751 qpair failed and we were unable to recover it. 00:26:22.751 [2024-11-26 21:07:13.551066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.751 [2024-11-26 21:07:13.551092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.751 qpair failed and we were unable to recover it. 00:26:22.751 [2024-11-26 21:07:13.551197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.751 [2024-11-26 21:07:13.551223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.751 qpair failed and we were unable to recover it. 
00:26:22.751 [2024-11-26 21:07:13.551333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.751 [2024-11-26 21:07:13.551365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.751 qpair failed and we were unable to recover it. 00:26:22.751 [2024-11-26 21:07:13.551518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.751 [2024-11-26 21:07:13.551548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.751 qpair failed and we were unable to recover it. 00:26:22.751 [2024-11-26 21:07:13.551666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.751 [2024-11-26 21:07:13.551701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.751 qpair failed and we were unable to recover it. 00:26:22.751 [2024-11-26 21:07:13.551852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.751 [2024-11-26 21:07:13.551879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.751 qpair failed and we were unable to recover it. 00:26:22.751 [2024-11-26 21:07:13.552014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.751 [2024-11-26 21:07:13.552040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.751 qpair failed and we were unable to recover it. 
00:26:22.751 [2024-11-26 21:07:13.552163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.751 [2024-11-26 21:07:13.552189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.751 qpair failed and we were unable to recover it. 00:26:22.751 [2024-11-26 21:07:13.552320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.751 [2024-11-26 21:07:13.552346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.751 qpair failed and we were unable to recover it. 00:26:22.751 [2024-11-26 21:07:13.552483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.751 [2024-11-26 21:07:13.552510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.751 qpair failed and we were unable to recover it. 00:26:22.751 [2024-11-26 21:07:13.552637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.751 [2024-11-26 21:07:13.552663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.751 qpair failed and we were unable to recover it. 00:26:22.751 [2024-11-26 21:07:13.552792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.751 [2024-11-26 21:07:13.552840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:22.751 qpair failed and we were unable to recover it. 
00:26:22.751 [2024-11-26 21:07:13.552978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.751 [2024-11-26 21:07:13.553008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:22.751 qpair failed and we were unable to recover it. 00:26:22.751 [2024-11-26 21:07:13.553155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.751 [2024-11-26 21:07:13.553185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:22.751 qpair failed and we were unable to recover it. 00:26:22.751 [2024-11-26 21:07:13.553350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.751 [2024-11-26 21:07:13.553393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:22.751 qpair failed and we were unable to recover it. 00:26:22.751 [2024-11-26 21:07:13.553505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.751 [2024-11-26 21:07:13.553534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:22.751 qpair failed and we were unable to recover it. 00:26:22.751 [2024-11-26 21:07:13.553699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.751 [2024-11-26 21:07:13.553726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:22.751 qpair failed and we were unable to recover it. 
00:26:22.751 [2024-11-26 21:07:13.553835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.751 [2024-11-26 21:07:13.553863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.751 qpair failed and we were unable to recover it. 00:26:22.751 [2024-11-26 21:07:13.553965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.751 [2024-11-26 21:07:13.553992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.751 qpair failed and we were unable to recover it. 00:26:22.752 [2024-11-26 21:07:13.554125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.752 [2024-11-26 21:07:13.554151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.752 qpair failed and we were unable to recover it. 00:26:22.752 [2024-11-26 21:07:13.554259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.752 [2024-11-26 21:07:13.554285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.752 qpair failed and we were unable to recover it. 00:26:22.752 [2024-11-26 21:07:13.554420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.752 [2024-11-26 21:07:13.554447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.752 qpair failed and we were unable to recover it. 
00:26:22.754 [2024-11-26 21:07:13.573147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.754 [2024-11-26 21:07:13.573173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.754 qpair failed and we were unable to recover it. 00:26:22.754 [2024-11-26 21:07:13.573407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.755 [2024-11-26 21:07:13.573466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.755 qpair failed and we were unable to recover it. 00:26:22.755 [2024-11-26 21:07:13.573624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.755 [2024-11-26 21:07:13.573650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.755 qpair failed and we were unable to recover it. 00:26:22.755 [2024-11-26 21:07:13.573771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.755 [2024-11-26 21:07:13.573799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:22.755 qpair failed and we were unable to recover it. 00:26:22.755 [2024-11-26 21:07:13.573941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.755 [2024-11-26 21:07:13.573968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:22.755 qpair failed and we were unable to recover it. 
00:26:22.755 [2024-11-26 21:07:13.574081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.755 [2024-11-26 21:07:13.574107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:22.755 qpair failed and we were unable to recover it. 00:26:22.755 [2024-11-26 21:07:13.574204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.755 [2024-11-26 21:07:13.574230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:22.755 qpair failed and we were unable to recover it. 00:26:22.755 [2024-11-26 21:07:13.574335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.755 [2024-11-26 21:07:13.574361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:22.755 qpair failed and we were unable to recover it. 00:26:22.755 [2024-11-26 21:07:13.574499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.755 [2024-11-26 21:07:13.574526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:22.755 qpair failed and we were unable to recover it. 00:26:22.755 [2024-11-26 21:07:13.574660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.755 [2024-11-26 21:07:13.574696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.755 qpair failed and we were unable to recover it. 
00:26:22.755 [2024-11-26 21:07:13.574837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.755 [2024-11-26 21:07:13.574863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.755 qpair failed and we were unable to recover it. 00:26:22.755 [2024-11-26 21:07:13.575004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.755 [2024-11-26 21:07:13.575030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.755 qpair failed and we were unable to recover it. 00:26:22.755 [2024-11-26 21:07:13.575191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.755 [2024-11-26 21:07:13.575217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.755 qpair failed and we were unable to recover it. 00:26:22.755 [2024-11-26 21:07:13.575342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.755 [2024-11-26 21:07:13.575372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.755 qpair failed and we were unable to recover it. 00:26:22.755 [2024-11-26 21:07:13.575546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.755 [2024-11-26 21:07:13.575572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.755 qpair failed and we were unable to recover it. 
00:26:22.755 [2024-11-26 21:07:13.575671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.755 [2024-11-26 21:07:13.575707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.755 qpair failed and we were unable to recover it. 00:26:22.755 [2024-11-26 21:07:13.575868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.755 [2024-11-26 21:07:13.575894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.755 qpair failed and we were unable to recover it. 00:26:22.755 [2024-11-26 21:07:13.576002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.755 [2024-11-26 21:07:13.576032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.755 qpair failed and we were unable to recover it. 00:26:22.755 [2024-11-26 21:07:13.576192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.755 [2024-11-26 21:07:13.576218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.755 qpair failed and we were unable to recover it. 00:26:22.755 [2024-11-26 21:07:13.576343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.755 [2024-11-26 21:07:13.576372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.755 qpair failed and we were unable to recover it. 
00:26:22.755 [2024-11-26 21:07:13.576523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.755 [2024-11-26 21:07:13.576550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.755 qpair failed and we were unable to recover it. 00:26:22.755 [2024-11-26 21:07:13.576652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.755 [2024-11-26 21:07:13.576680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.755 qpair failed and we were unable to recover it. 00:26:22.755 [2024-11-26 21:07:13.576814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.755 [2024-11-26 21:07:13.576857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.755 qpair failed and we were unable to recover it. 00:26:22.755 [2024-11-26 21:07:13.577036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.755 [2024-11-26 21:07:13.577079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.755 qpair failed and we were unable to recover it. 00:26:22.755 [2024-11-26 21:07:13.577274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.755 [2024-11-26 21:07:13.577300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.755 qpair failed and we were unable to recover it. 
00:26:22.755 [2024-11-26 21:07:13.577410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.755 [2024-11-26 21:07:13.577436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.755 qpair failed and we were unable to recover it. 00:26:22.755 [2024-11-26 21:07:13.577570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.755 [2024-11-26 21:07:13.577596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.755 qpair failed and we were unable to recover it. 00:26:22.755 [2024-11-26 21:07:13.577732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.755 [2024-11-26 21:07:13.577759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.755 qpair failed and we were unable to recover it. 00:26:22.755 [2024-11-26 21:07:13.577883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.755 [2024-11-26 21:07:13.577909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.755 qpair failed and we were unable to recover it. 00:26:22.755 [2024-11-26 21:07:13.578020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.755 [2024-11-26 21:07:13.578047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.755 qpair failed and we were unable to recover it. 
00:26:22.755 [2024-11-26 21:07:13.578185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.755 [2024-11-26 21:07:13.578211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.755 qpair failed and we were unable to recover it. 00:26:22.755 [2024-11-26 21:07:13.578355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.755 [2024-11-26 21:07:13.578381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.755 qpair failed and we were unable to recover it. 00:26:22.755 [2024-11-26 21:07:13.578508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.755 [2024-11-26 21:07:13.578534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.755 qpair failed and we were unable to recover it. 00:26:22.755 [2024-11-26 21:07:13.578673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.755 [2024-11-26 21:07:13.578704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.755 qpair failed and we were unable to recover it. 00:26:22.755 [2024-11-26 21:07:13.578862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.755 [2024-11-26 21:07:13.578888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.755 qpair failed and we were unable to recover it. 
00:26:22.755 [2024-11-26 21:07:13.579022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.755 [2024-11-26 21:07:13.579046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.755 qpair failed and we were unable to recover it. 00:26:22.755 [2024-11-26 21:07:13.579183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.755 [2024-11-26 21:07:13.579209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.755 qpair failed and we were unable to recover it. 00:26:22.755 [2024-11-26 21:07:13.579336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.755 [2024-11-26 21:07:13.579362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.755 qpair failed and we were unable to recover it. 00:26:22.755 [2024-11-26 21:07:13.579515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.755 [2024-11-26 21:07:13.579541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.755 qpair failed and we were unable to recover it. 00:26:22.755 [2024-11-26 21:07:13.579676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.756 [2024-11-26 21:07:13.579708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.756 qpair failed and we were unable to recover it. 
00:26:22.756 [2024-11-26 21:07:13.579811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.756 [2024-11-26 21:07:13.579836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.756 qpair failed and we were unable to recover it. 00:26:22.756 [2024-11-26 21:07:13.579953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.756 [2024-11-26 21:07:13.579978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.756 qpair failed and we were unable to recover it. 00:26:22.756 [2024-11-26 21:07:13.580115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.756 [2024-11-26 21:07:13.580141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.756 qpair failed and we were unable to recover it. 00:26:22.756 [2024-11-26 21:07:13.580247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.756 [2024-11-26 21:07:13.580271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.756 qpair failed and we were unable to recover it. 00:26:22.756 [2024-11-26 21:07:13.580420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.756 [2024-11-26 21:07:13.580459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.756 qpair failed and we were unable to recover it. 
00:26:22.756 [2024-11-26 21:07:13.580617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.756 [2024-11-26 21:07:13.580646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.756 qpair failed and we were unable to recover it. 00:26:22.756 [2024-11-26 21:07:13.580767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.756 [2024-11-26 21:07:13.580795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.756 qpair failed and we were unable to recover it. 00:26:22.756 [2024-11-26 21:07:13.580935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.756 [2024-11-26 21:07:13.580962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.756 qpair failed and we were unable to recover it. 00:26:22.756 [2024-11-26 21:07:13.581120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.756 [2024-11-26 21:07:13.581147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.756 qpair failed and we were unable to recover it. 00:26:22.756 [2024-11-26 21:07:13.581253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.756 [2024-11-26 21:07:13.581280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.756 qpair failed and we were unable to recover it. 
00:26:22.756 [2024-11-26 21:07:13.581416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.756 [2024-11-26 21:07:13.581442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.756 qpair failed and we were unable to recover it. 00:26:22.756 [2024-11-26 21:07:13.581616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.756 [2024-11-26 21:07:13.581659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:22.756 qpair failed and we were unable to recover it. 00:26:22.756 [2024-11-26 21:07:13.581813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.756 [2024-11-26 21:07:13.581840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:22.756 qpair failed and we were unable to recover it. 00:26:22.756 [2024-11-26 21:07:13.581977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.756 [2024-11-26 21:07:13.582007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:22.756 qpair failed and we were unable to recover it. 00:26:22.756 [2024-11-26 21:07:13.582151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.756 [2024-11-26 21:07:13.582180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:22.756 qpair failed and we were unable to recover it. 
00:26:22.756 [2024-11-26 21:07:13.582292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.756 [2024-11-26 21:07:13.582322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:22.756 qpair failed and we were unable to recover it. 00:26:22.756 [2024-11-26 21:07:13.582470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.756 [2024-11-26 21:07:13.582498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:22.756 qpair failed and we were unable to recover it. 00:26:22.756 [2024-11-26 21:07:13.582651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.756 [2024-11-26 21:07:13.582683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:22.756 qpair failed and we were unable to recover it. 00:26:22.756 [2024-11-26 21:07:13.582832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.756 [2024-11-26 21:07:13.582858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:22.756 qpair failed and we were unable to recover it. 00:26:22.756 [2024-11-26 21:07:13.582962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.756 [2024-11-26 21:07:13.582988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:22.756 qpair failed and we were unable to recover it. 
00:26:22.756 [2024-11-26 21:07:13.583099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.756 [2024-11-26 21:07:13.583125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:22.756 qpair failed and we were unable to recover it. 00:26:22.756 [2024-11-26 21:07:13.583338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.756 [2024-11-26 21:07:13.583367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:22.756 qpair failed and we were unable to recover it. 00:26:22.756 [2024-11-26 21:07:13.583615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.756 [2024-11-26 21:07:13.583644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:22.756 qpair failed and we were unable to recover it. 00:26:22.756 [2024-11-26 21:07:13.583779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.756 [2024-11-26 21:07:13.583806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:22.756 qpair failed and we were unable to recover it. 00:26:22.756 [2024-11-26 21:07:13.583969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.756 [2024-11-26 21:07:13.584012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:22.756 qpair failed and we were unable to recover it. 
00:26:22.756 [2024-11-26 21:07:13.584166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.756 [2024-11-26 21:07:13.584195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:22.756 qpair failed and we were unable to recover it. 00:26:22.756 [2024-11-26 21:07:13.584406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.756 [2024-11-26 21:07:13.584435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:22.756 qpair failed and we were unable to recover it. 00:26:22.756 [2024-11-26 21:07:13.584558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.756 [2024-11-26 21:07:13.584587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:22.756 qpair failed and we were unable to recover it. 00:26:22.756 [2024-11-26 21:07:13.584708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.756 [2024-11-26 21:07:13.584734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:22.756 qpair failed and we were unable to recover it. 00:26:22.756 [2024-11-26 21:07:13.584872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.756 [2024-11-26 21:07:13.584898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:22.756 qpair failed and we were unable to recover it. 
00:26:22.756 [2024-11-26 21:07:13.585055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.756 [2024-11-26 21:07:13.585083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:22.756 qpair failed and we were unable to recover it. 00:26:22.756 [2024-11-26 21:07:13.585277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.756 [2024-11-26 21:07:13.585303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:22.756 qpair failed and we were unable to recover it. 00:26:22.756 [2024-11-26 21:07:13.585437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.757 [2024-11-26 21:07:13.585462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:22.757 qpair failed and we were unable to recover it. 00:26:22.757 [2024-11-26 21:07:13.585577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.757 [2024-11-26 21:07:13.585603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:22.757 qpair failed and we were unable to recover it. 00:26:22.757 [2024-11-26 21:07:13.585713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.757 [2024-11-26 21:07:13.585740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:22.757 qpair failed and we were unable to recover it. 
00:26:22.757 [2024-11-26 21:07:13.585905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.757 [2024-11-26 21:07:13.585931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:22.757 qpair failed and we were unable to recover it. 
00:26:22.757 [... the same three-line pattern — posix.c:1054:posix_sock_create "connect() failed, errno = 111", nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=... with addr=10.0.0.2, port=4420", and "qpair failed and we were unable to recover it." — repeats continuously from 21:07:13.585905 through 21:07:13.605672, alternating between tqpair=0x7feef8000b90 and tqpair=0x7feef0000b90 ...]
00:26:22.760 [2024-11-26 21:07:13.605831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.760 [2024-11-26 21:07:13.605875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.760 qpair failed and we were unable to recover it. 00:26:22.760 [2024-11-26 21:07:13.606075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.760 [2024-11-26 21:07:13.606106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:22.760 qpair failed and we were unable to recover it. 00:26:22.760 [2024-11-26 21:07:13.606260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.760 [2024-11-26 21:07:13.606288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:22.760 qpair failed and we were unable to recover it. 00:26:22.760 [2024-11-26 21:07:13.606460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.760 [2024-11-26 21:07:13.606488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:22.760 qpair failed and we were unable to recover it. 00:26:22.760 [2024-11-26 21:07:13.606640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.760 [2024-11-26 21:07:13.606666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:22.760 qpair failed and we were unable to recover it. 
00:26:22.760 [2024-11-26 21:07:13.606843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.760 [2024-11-26 21:07:13.606888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:22.760 qpair failed and we were unable to recover it. 00:26:22.760 [2024-11-26 21:07:13.607020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.760 [2024-11-26 21:07:13.607045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:22.760 qpair failed and we were unable to recover it. 00:26:22.760 [2024-11-26 21:07:13.607206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.760 [2024-11-26 21:07:13.607232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:22.760 qpair failed and we were unable to recover it. 00:26:22.760 [2024-11-26 21:07:13.607361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.760 [2024-11-26 21:07:13.607389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:22.760 qpair failed and we were unable to recover it. 00:26:22.760 [2024-11-26 21:07:13.607546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.760 [2024-11-26 21:07:13.607575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:22.760 qpair failed and we were unable to recover it. 
00:26:22.760 [2024-11-26 21:07:13.607704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.760 [2024-11-26 21:07:13.607731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:22.760 qpair failed and we were unable to recover it. 00:26:22.760 [2024-11-26 21:07:13.607843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.760 [2024-11-26 21:07:13.607869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:22.760 qpair failed and we were unable to recover it. 00:26:22.760 [2024-11-26 21:07:13.607997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.760 [2024-11-26 21:07:13.608022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:22.760 qpair failed and we were unable to recover it. 00:26:22.760 [2024-11-26 21:07:13.608132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.760 [2024-11-26 21:07:13.608157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:22.760 qpair failed and we were unable to recover it. 00:26:22.760 [2024-11-26 21:07:13.608303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.760 [2024-11-26 21:07:13.608347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.760 qpair failed and we were unable to recover it. 
00:26:22.760 [2024-11-26 21:07:13.608533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.760 [2024-11-26 21:07:13.608564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.760 qpair failed and we were unable to recover it. 00:26:22.760 [2024-11-26 21:07:13.608753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.760 [2024-11-26 21:07:13.608781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.760 qpair failed and we were unable to recover it. 00:26:22.760 [2024-11-26 21:07:13.608920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.760 [2024-11-26 21:07:13.608947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.760 qpair failed and we were unable to recover it. 00:26:22.760 [2024-11-26 21:07:13.609109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.760 [2024-11-26 21:07:13.609135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.760 qpair failed and we were unable to recover it. 00:26:22.760 [2024-11-26 21:07:13.609367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.760 [2024-11-26 21:07:13.609396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.760 qpair failed and we were unable to recover it. 
00:26:22.760 [2024-11-26 21:07:13.609570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.760 [2024-11-26 21:07:13.609599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.760 qpair failed and we were unable to recover it. 00:26:22.760 [2024-11-26 21:07:13.609759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.760 [2024-11-26 21:07:13.609786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.760 qpair failed and we were unable to recover it. 00:26:22.760 [2024-11-26 21:07:13.609893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.760 [2024-11-26 21:07:13.609919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.760 qpair failed and we were unable to recover it. 00:26:22.760 [2024-11-26 21:07:13.610074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.760 [2024-11-26 21:07:13.610100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.760 qpair failed and we were unable to recover it. 00:26:22.760 [2024-11-26 21:07:13.610234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.760 [2024-11-26 21:07:13.610260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.760 qpair failed and we were unable to recover it. 
00:26:22.760 [2024-11-26 21:07:13.610393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.760 [2024-11-26 21:07:13.610422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.760 qpair failed and we were unable to recover it. 00:26:22.760 [2024-11-26 21:07:13.610568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.760 [2024-11-26 21:07:13.610597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.760 qpair failed and we were unable to recover it. 00:26:22.760 [2024-11-26 21:07:13.610778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.760 [2024-11-26 21:07:13.610805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.760 qpair failed and we were unable to recover it. 00:26:22.760 [2024-11-26 21:07:13.610913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.760 [2024-11-26 21:07:13.610940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.760 qpair failed and we were unable to recover it. 00:26:22.760 [2024-11-26 21:07:13.611078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.760 [2024-11-26 21:07:13.611104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.760 qpair failed and we were unable to recover it. 
00:26:22.760 [2024-11-26 21:07:13.611211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.760 [2024-11-26 21:07:13.611237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.760 qpair failed and we were unable to recover it. 00:26:22.760 [2024-11-26 21:07:13.611393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.760 [2024-11-26 21:07:13.611423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.760 qpair failed and we were unable to recover it. 00:26:22.761 [2024-11-26 21:07:13.611590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.761 [2024-11-26 21:07:13.611616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.761 qpair failed and we were unable to recover it. 00:26:22.761 [2024-11-26 21:07:13.611733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.761 [2024-11-26 21:07:13.611760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.761 qpair failed and we were unable to recover it. 00:26:22.761 [2024-11-26 21:07:13.611911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.761 [2024-11-26 21:07:13.611949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.761 qpair failed and we were unable to recover it. 
00:26:22.761 [2024-11-26 21:07:13.612093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.761 [2024-11-26 21:07:13.612122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.761 qpair failed and we were unable to recover it. 00:26:22.761 [2024-11-26 21:07:13.612230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.761 [2024-11-26 21:07:13.612256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.761 qpair failed and we were unable to recover it. 00:26:22.761 [2024-11-26 21:07:13.612422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.761 [2024-11-26 21:07:13.612449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.761 qpair failed and we were unable to recover it. 00:26:22.761 [2024-11-26 21:07:13.612586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.761 [2024-11-26 21:07:13.612613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.761 qpair failed and we were unable to recover it. 00:26:22.761 [2024-11-26 21:07:13.612785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.761 [2024-11-26 21:07:13.612811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.761 qpair failed and we were unable to recover it. 
00:26:22.761 [2024-11-26 21:07:13.612963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.761 [2024-11-26 21:07:13.612994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.761 qpair failed and we were unable to recover it. 00:26:22.761 [2024-11-26 21:07:13.613173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.761 [2024-11-26 21:07:13.613212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:22.761 qpair failed and we were unable to recover it. 00:26:22.761 [2024-11-26 21:07:13.613380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.761 [2024-11-26 21:07:13.613408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:22.761 qpair failed and we were unable to recover it. 00:26:22.761 [2024-11-26 21:07:13.613571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.761 [2024-11-26 21:07:13.613597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:22.761 qpair failed and we were unable to recover it. 00:26:22.761 [2024-11-26 21:07:13.613762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.761 [2024-11-26 21:07:13.613788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:22.761 qpair failed and we were unable to recover it. 
00:26:22.761 [2024-11-26 21:07:13.613915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.761 [2024-11-26 21:07:13.613941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:22.761 qpair failed and we were unable to recover it. 00:26:22.761 [2024-11-26 21:07:13.614077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.761 [2024-11-26 21:07:13.614102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:22.761 qpair failed and we were unable to recover it. 00:26:22.761 [2024-11-26 21:07:13.614253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.761 [2024-11-26 21:07:13.614311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:22.761 qpair failed and we were unable to recover it. 00:26:22.761 [2024-11-26 21:07:13.614456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.761 [2024-11-26 21:07:13.614484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:22.761 qpair failed and we were unable to recover it. 00:26:22.761 [2024-11-26 21:07:13.614643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.761 [2024-11-26 21:07:13.614669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:22.761 qpair failed and we were unable to recover it. 
00:26:22.761 [2024-11-26 21:07:13.614831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.761 [2024-11-26 21:07:13.614870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.761 qpair failed and we were unable to recover it. 00:26:22.761 [2024-11-26 21:07:13.615015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.761 [2024-11-26 21:07:13.615060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.761 qpair failed and we were unable to recover it. 00:26:22.761 [2024-11-26 21:07:13.615190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.761 [2024-11-26 21:07:13.615219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.761 qpair failed and we were unable to recover it. 00:26:22.761 [2024-11-26 21:07:13.615397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.761 [2024-11-26 21:07:13.615426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.761 qpair failed and we were unable to recover it. 00:26:22.761 [2024-11-26 21:07:13.615580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.761 [2024-11-26 21:07:13.615617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.761 qpair failed and we were unable to recover it. 
00:26:22.761 [2024-11-26 21:07:13.615786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.761 [2024-11-26 21:07:13.615814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.761 qpair failed and we were unable to recover it. 00:26:22.761 [2024-11-26 21:07:13.615951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.761 [2024-11-26 21:07:13.615977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.761 qpair failed and we were unable to recover it. 00:26:22.761 [2024-11-26 21:07:13.616114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.761 [2024-11-26 21:07:13.616141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.761 qpair failed and we were unable to recover it. 00:26:22.761 [2024-11-26 21:07:13.616253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.761 [2024-11-26 21:07:13.616279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.761 qpair failed and we were unable to recover it. 00:26:22.761 [2024-11-26 21:07:13.616383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.761 [2024-11-26 21:07:13.616409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.761 qpair failed and we were unable to recover it. 
00:26:22.761 [2024-11-26 21:07:13.616536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.761 [2024-11-26 21:07:13.616562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.761 qpair failed and we were unable to recover it. 00:26:22.761 [2024-11-26 21:07:13.616736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.761 [2024-11-26 21:07:13.616764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.761 qpair failed and we were unable to recover it. 00:26:22.761 [2024-11-26 21:07:13.616945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.761 [2024-11-26 21:07:13.616974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.761 qpair failed and we were unable to recover it. 00:26:22.761 [2024-11-26 21:07:13.617164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.761 [2024-11-26 21:07:13.617190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.761 qpair failed and we were unable to recover it. 00:26:22.761 [2024-11-26 21:07:13.617327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.761 [2024-11-26 21:07:13.617354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.761 qpair failed and we were unable to recover it. 
00:26:22.761 [2024-11-26 21:07:13.617511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.761 [2024-11-26 21:07:13.617540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.761 qpair failed and we were unable to recover it. 00:26:22.761 [2024-11-26 21:07:13.617692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.761 [2024-11-26 21:07:13.617740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.761 qpair failed and we were unable to recover it. 00:26:22.761 [2024-11-26 21:07:13.617882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.761 [2024-11-26 21:07:13.617908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.762 qpair failed and we were unable to recover it. 00:26:22.762 [2024-11-26 21:07:13.618055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.762 [2024-11-26 21:07:13.618082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.762 qpair failed and we were unable to recover it. 00:26:22.762 [2024-11-26 21:07:13.618222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.762 [2024-11-26 21:07:13.618248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.762 qpair failed and we were unable to recover it. 
00:26:22.762 [2024-11-26 21:07:13.618388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.762 [2024-11-26 21:07:13.618415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.762 qpair failed and we were unable to recover it. 00:26:22.762 [2024-11-26 21:07:13.618556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.762 [2024-11-26 21:07:13.618582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.762 qpair failed and we were unable to recover it. 00:26:22.762 [2024-11-26 21:07:13.618720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.762 [2024-11-26 21:07:13.618747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.762 qpair failed and we were unable to recover it. 00:26:22.762 [2024-11-26 21:07:13.618866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.762 [2024-11-26 21:07:13.618892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.762 qpair failed and we were unable to recover it. 00:26:22.762 [2024-11-26 21:07:13.619002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.762 [2024-11-26 21:07:13.619030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.762 qpair failed and we were unable to recover it. 
00:26:22.762 [2024-11-26 21:07:13.619141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.762 [2024-11-26 21:07:13.619169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.762 qpair failed and we were unable to recover it. 
[... identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." messages repeat from 21:07:13.619361 through 21:07:13.640005 for tqpair=0x7feeec000b90, tqpair=0x7feef0000b90, and tqpair=0x7feef8000b90, all with addr=10.0.0.2, port=4420 ...]
00:26:22.765 [2024-11-26 21:07:13.640221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.765 [2024-11-26 21:07:13.640247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.765 qpair failed and we were unable to recover it. 00:26:22.765 [2024-11-26 21:07:13.640383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.765 [2024-11-26 21:07:13.640410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.765 qpair failed and we were unable to recover it. 00:26:22.765 [2024-11-26 21:07:13.640553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.765 [2024-11-26 21:07:13.640579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.765 qpair failed and we were unable to recover it. 00:26:22.765 [2024-11-26 21:07:13.640730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.765 [2024-11-26 21:07:13.640760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.765 qpair failed and we were unable to recover it. 00:26:22.765 [2024-11-26 21:07:13.640912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.765 [2024-11-26 21:07:13.640956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.765 qpair failed and we were unable to recover it. 
00:26:22.765 [2024-11-26 21:07:13.641098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.765 [2024-11-26 21:07:13.641141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.765 qpair failed and we were unable to recover it. 00:26:22.765 [2024-11-26 21:07:13.641276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.765 [2024-11-26 21:07:13.641319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.765 qpair failed and we were unable to recover it. 00:26:22.765 [2024-11-26 21:07:13.641479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.765 [2024-11-26 21:07:13.641505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.765 qpair failed and we were unable to recover it. 00:26:22.765 [2024-11-26 21:07:13.641643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.765 [2024-11-26 21:07:13.641669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.765 qpair failed and we were unable to recover it. 00:26:22.765 [2024-11-26 21:07:13.641849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.765 [2024-11-26 21:07:13.641893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.765 qpair failed and we were unable to recover it. 
00:26:22.765 [2024-11-26 21:07:13.642051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.765 [2024-11-26 21:07:13.642108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.765 qpair failed and we were unable to recover it. 00:26:22.766 [2024-11-26 21:07:13.642230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.766 [2024-11-26 21:07:13.642260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.766 qpair failed and we were unable to recover it. 00:26:22.766 [2024-11-26 21:07:13.642412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.766 [2024-11-26 21:07:13.642443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.766 qpair failed and we were unable to recover it. 00:26:22.766 [2024-11-26 21:07:13.642589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.766 [2024-11-26 21:07:13.642619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.766 qpair failed and we were unable to recover it. 00:26:22.766 [2024-11-26 21:07:13.642802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.766 [2024-11-26 21:07:13.642829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.766 qpair failed and we were unable to recover it. 
00:26:22.766 [2024-11-26 21:07:13.642930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.766 [2024-11-26 21:07:13.642956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.766 qpair failed and we were unable to recover it. 00:26:22.766 [2024-11-26 21:07:13.643057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.766 [2024-11-26 21:07:13.643083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.766 qpair failed and we were unable to recover it. 00:26:22.766 [2024-11-26 21:07:13.643245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.766 [2024-11-26 21:07:13.643271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.766 qpair failed and we were unable to recover it. 00:26:22.766 [2024-11-26 21:07:13.643425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.766 [2024-11-26 21:07:13.643454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.766 qpair failed and we were unable to recover it. 00:26:22.766 [2024-11-26 21:07:13.643587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.766 [2024-11-26 21:07:13.643614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.766 qpair failed and we were unable to recover it. 
00:26:22.766 [2024-11-26 21:07:13.643755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.766 [2024-11-26 21:07:13.643782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.766 qpair failed and we were unable to recover it. 00:26:22.766 [2024-11-26 21:07:13.643890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.766 [2024-11-26 21:07:13.643917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.766 qpair failed and we were unable to recover it. 00:26:22.766 [2024-11-26 21:07:13.644094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.766 [2024-11-26 21:07:13.644121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.766 qpair failed and we were unable to recover it. 00:26:22.766 [2024-11-26 21:07:13.644231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.766 [2024-11-26 21:07:13.644259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.766 qpair failed and we were unable to recover it. 00:26:22.766 [2024-11-26 21:07:13.644398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.766 [2024-11-26 21:07:13.644430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.766 qpair failed and we were unable to recover it. 
00:26:22.766 [2024-11-26 21:07:13.644588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.766 [2024-11-26 21:07:13.644617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.766 qpair failed and we were unable to recover it. 00:26:22.766 [2024-11-26 21:07:13.644746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.766 [2024-11-26 21:07:13.644773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.766 qpair failed and we were unable to recover it. 00:26:22.766 [2024-11-26 21:07:13.644909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.766 [2024-11-26 21:07:13.644935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.766 qpair failed and we were unable to recover it. 00:26:22.766 [2024-11-26 21:07:13.645051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.766 [2024-11-26 21:07:13.645077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.766 qpair failed and we were unable to recover it. 00:26:22.766 [2024-11-26 21:07:13.645218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.766 [2024-11-26 21:07:13.645245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.766 qpair failed and we were unable to recover it. 
00:26:22.766 [2024-11-26 21:07:13.645400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.766 [2024-11-26 21:07:13.645445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.766 qpair failed and we were unable to recover it. 00:26:22.766 [2024-11-26 21:07:13.645582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.766 [2024-11-26 21:07:13.645608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.766 qpair failed and we were unable to recover it. 00:26:22.766 [2024-11-26 21:07:13.645741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.766 [2024-11-26 21:07:13.645768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.766 qpair failed and we were unable to recover it. 00:26:22.766 [2024-11-26 21:07:13.645929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.766 [2024-11-26 21:07:13.645973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.766 qpair failed and we were unable to recover it. 00:26:22.766 [2024-11-26 21:07:13.646099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.766 [2024-11-26 21:07:13.646142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.766 qpair failed and we were unable to recover it. 
00:26:22.766 [2024-11-26 21:07:13.646269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.766 [2024-11-26 21:07:13.646317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.766 qpair failed and we were unable to recover it. 00:26:22.766 [2024-11-26 21:07:13.646470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.766 [2024-11-26 21:07:13.646513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.766 qpair failed and we were unable to recover it. 00:26:22.766 [2024-11-26 21:07:13.646649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.766 [2024-11-26 21:07:13.646675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.766 qpair failed and we were unable to recover it. 00:26:22.766 [2024-11-26 21:07:13.646852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.766 [2024-11-26 21:07:13.646898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.766 qpair failed and we were unable to recover it. 00:26:22.766 [2024-11-26 21:07:13.647023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.766 [2024-11-26 21:07:13.647052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.766 qpair failed and we were unable to recover it. 
00:26:22.766 [2024-11-26 21:07:13.647170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.766 [2024-11-26 21:07:13.647196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.766 qpair failed and we were unable to recover it. 00:26:22.766 [2024-11-26 21:07:13.647355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.766 [2024-11-26 21:07:13.647382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.766 qpair failed and we were unable to recover it. 00:26:22.766 [2024-11-26 21:07:13.647490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.766 [2024-11-26 21:07:13.647515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.766 qpair failed and we were unable to recover it. 00:26:22.766 [2024-11-26 21:07:13.647650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.766 [2024-11-26 21:07:13.647677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.766 qpair failed and we were unable to recover it. 00:26:22.766 [2024-11-26 21:07:13.647826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.766 [2024-11-26 21:07:13.647853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.766 qpair failed and we were unable to recover it. 
00:26:22.766 [2024-11-26 21:07:13.647958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.767 [2024-11-26 21:07:13.647984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.767 qpair failed and we were unable to recover it. 00:26:22.767 [2024-11-26 21:07:13.648113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.767 [2024-11-26 21:07:13.648139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.767 qpair failed and we were unable to recover it. 00:26:22.767 [2024-11-26 21:07:13.648241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.767 [2024-11-26 21:07:13.648268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.767 qpair failed and we were unable to recover it. 00:26:22.767 [2024-11-26 21:07:13.648366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.767 [2024-11-26 21:07:13.648392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.767 qpair failed and we were unable to recover it. 00:26:22.767 [2024-11-26 21:07:13.648559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.767 [2024-11-26 21:07:13.648585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.767 qpair failed and we were unable to recover it. 
00:26:22.767 [2024-11-26 21:07:13.648725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.767 [2024-11-26 21:07:13.648753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.767 qpair failed and we were unable to recover it. 00:26:22.767 [2024-11-26 21:07:13.648921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.767 [2024-11-26 21:07:13.648947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.767 qpair failed and we were unable to recover it. 00:26:22.767 [2024-11-26 21:07:13.649049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.767 [2024-11-26 21:07:13.649075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.767 qpair failed and we were unable to recover it. 00:26:22.767 [2024-11-26 21:07:13.649206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.767 [2024-11-26 21:07:13.649232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.767 qpair failed and we were unable to recover it. 00:26:22.767 [2024-11-26 21:07:13.649403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.767 [2024-11-26 21:07:13.649429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.767 qpair failed and we were unable to recover it. 
00:26:22.767 [2024-11-26 21:07:13.649573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.767 [2024-11-26 21:07:13.649612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:22.767 qpair failed and we were unable to recover it. 00:26:22.767 [2024-11-26 21:07:13.649778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.767 [2024-11-26 21:07:13.649810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:22.767 qpair failed and we were unable to recover it. 00:26:22.767 [2024-11-26 21:07:13.649933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.767 [2024-11-26 21:07:13.649961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:22.767 qpair failed and we were unable to recover it. 00:26:22.767 [2024-11-26 21:07:13.650109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.767 [2024-11-26 21:07:13.650137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:22.767 qpair failed and we were unable to recover it. 00:26:22.767 [2024-11-26 21:07:13.650286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.767 [2024-11-26 21:07:13.650315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:22.767 qpair failed and we were unable to recover it. 
00:26:22.767 [2024-11-26 21:07:13.650467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.767 [2024-11-26 21:07:13.650499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.767 qpair failed and we were unable to recover it. 00:26:22.767 [2024-11-26 21:07:13.650683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.767 [2024-11-26 21:07:13.650716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.767 qpair failed and we were unable to recover it. 00:26:22.767 [2024-11-26 21:07:13.650852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.767 [2024-11-26 21:07:13.650878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.767 qpair failed and we were unable to recover it. 00:26:22.767 [2024-11-26 21:07:13.651011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.767 [2024-11-26 21:07:13.651038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.767 qpair failed and we were unable to recover it. 00:26:22.767 [2024-11-26 21:07:13.651150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.767 [2024-11-26 21:07:13.651182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.767 qpair failed and we were unable to recover it. 
00:26:22.767 [2024-11-26 21:07:13.651342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.767 [2024-11-26 21:07:13.651373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.767 qpair failed and we were unable to recover it. 00:26:22.767 [2024-11-26 21:07:13.651553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.767 [2024-11-26 21:07:13.651583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:22.767 qpair failed and we were unable to recover it. 00:26:22.767 [2024-11-26 21:07:13.651738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.767 [2024-11-26 21:07:13.651766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.767 qpair failed and we were unable to recover it. 00:26:22.767 [2024-11-26 21:07:13.651953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.767 [2024-11-26 21:07:13.651998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.767 qpair failed and we were unable to recover it. 00:26:22.767 [2024-11-26 21:07:13.652151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.767 [2024-11-26 21:07:13.652195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.767 qpair failed and we were unable to recover it. 
00:26:22.767 [2024-11-26 21:07:13.652333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.767 [2024-11-26 21:07:13.652386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.767 qpair failed and we were unable to recover it. 00:26:22.767 [2024-11-26 21:07:13.652550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.767 [2024-11-26 21:07:13.652576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.767 qpair failed and we were unable to recover it. 00:26:22.767 [2024-11-26 21:07:13.652726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.767 [2024-11-26 21:07:13.652771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.767 qpair failed and we were unable to recover it. 00:26:22.767 [2024-11-26 21:07:13.652931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.767 [2024-11-26 21:07:13.652961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:22.767 qpair failed and we were unable to recover it. 00:26:22.767 [2024-11-26 21:07:13.653152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.767 [2024-11-26 21:07:13.653178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:22.767 qpair failed and we were unable to recover it. 
00:26:22.767 [2024-11-26 21:07:13.653286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.767 [2024-11-26 21:07:13.653312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:22.767 qpair failed and we were unable to recover it. 00:26:22.767 [2024-11-26 21:07:13.653445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.767 [2024-11-26 21:07:13.653489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:22.767 qpair failed and we were unable to recover it. 00:26:22.767 [2024-11-26 21:07:13.653636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.767 [2024-11-26 21:07:13.653665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:22.767 qpair failed and we were unable to recover it. 00:26:22.767 [2024-11-26 21:07:13.653807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.767 [2024-11-26 21:07:13.653834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.767 qpair failed and we were unable to recover it. 00:26:22.767 [2024-11-26 21:07:13.653995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.767 [2024-11-26 21:07:13.654054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.767 qpair failed and we were unable to recover it. 
00:26:22.767 [2024-11-26 21:07:13.654224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.767 [2024-11-26 21:07:13.654250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.767 qpair failed and we were unable to recover it. 00:26:22.767 [2024-11-26 21:07:13.654362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.767 [2024-11-26 21:07:13.654388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.767 qpair failed and we were unable to recover it. 00:26:22.767 [2024-11-26 21:07:13.654486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.768 [2024-11-26 21:07:13.654513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.768 qpair failed and we were unable to recover it. 00:26:22.768 [2024-11-26 21:07:13.654653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.768 [2024-11-26 21:07:13.654679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.768 qpair failed and we were unable to recover it. 00:26:22.768 [2024-11-26 21:07:13.654845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.768 [2024-11-26 21:07:13.654871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.768 qpair failed and we were unable to recover it. 
00:26:22.768 [2024-11-26 21:07:13.655005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.768 [2024-11-26 21:07:13.655031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.768 qpair failed and we were unable to recover it. 00:26:22.768 [2024-11-26 21:07:13.655137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.768 [2024-11-26 21:07:13.655163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.768 qpair failed and we were unable to recover it. 00:26:22.768 [2024-11-26 21:07:13.655279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.768 [2024-11-26 21:07:13.655307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.768 qpair failed and we were unable to recover it. 00:26:22.768 [2024-11-26 21:07:13.655495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.768 [2024-11-26 21:07:13.655522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.768 qpair failed and we were unable to recover it. 00:26:22.768 [2024-11-26 21:07:13.655663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.768 [2024-11-26 21:07:13.655697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.768 qpair failed and we were unable to recover it. 
00:26:22.768 [2024-11-26 21:07:13.655836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.768 [2024-11-26 21:07:13.655862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.768 qpair failed and we were unable to recover it. 00:26:22.768 [2024-11-26 21:07:13.656074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.768 [2024-11-26 21:07:13.656136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.768 qpair failed and we were unable to recover it. 00:26:22.768 [2024-11-26 21:07:13.656255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.768 [2024-11-26 21:07:13.656284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.768 qpair failed and we were unable to recover it. 00:26:22.768 [2024-11-26 21:07:13.656439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.768 [2024-11-26 21:07:13.656465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.768 qpair failed and we were unable to recover it. 00:26:22.768 [2024-11-26 21:07:13.656624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.768 [2024-11-26 21:07:13.656651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.768 qpair failed and we were unable to recover it. 
00:26:22.768 [2024-11-26 21:07:13.656796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.768 [2024-11-26 21:07:13.656823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.768 qpair failed and we were unable to recover it. 00:26:22.768 [2024-11-26 21:07:13.656957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.768 [2024-11-26 21:07:13.656983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.768 qpair failed and we were unable to recover it. 00:26:22.768 [2024-11-26 21:07:13.657102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.768 [2024-11-26 21:07:13.657131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.768 qpair failed and we were unable to recover it. 00:26:22.768 [2024-11-26 21:07:13.657292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.768 [2024-11-26 21:07:13.657318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.768 qpair failed and we were unable to recover it. 00:26:22.768 [2024-11-26 21:07:13.657478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.768 [2024-11-26 21:07:13.657504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.768 qpair failed and we were unable to recover it. 
00:26:22.768 [2024-11-26 21:07:13.657638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.768 [2024-11-26 21:07:13.657665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.768 qpair failed and we were unable to recover it. 00:26:22.768 [2024-11-26 21:07:13.657811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.768 [2024-11-26 21:07:13.657837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.768 qpair failed and we were unable to recover it. 00:26:22.768 [2024-11-26 21:07:13.657940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.768 [2024-11-26 21:07:13.657966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.768 qpair failed and we were unable to recover it. 00:26:22.768 [2024-11-26 21:07:13.658117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.768 [2024-11-26 21:07:13.658146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.768 qpair failed and we were unable to recover it. 00:26:22.768 [2024-11-26 21:07:13.658306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.768 [2024-11-26 21:07:13.658337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.768 qpair failed and we were unable to recover it. 
00:26:22.768 [2024-11-26 21:07:13.658473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.768 [2024-11-26 21:07:13.658499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.768 qpair failed and we were unable to recover it. 00:26:22.768 [2024-11-26 21:07:13.658640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.768 [2024-11-26 21:07:13.658667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.768 qpair failed and we were unable to recover it. 00:26:22.768 [2024-11-26 21:07:13.658787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.768 [2024-11-26 21:07:13.658813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.768 qpair failed and we were unable to recover it. 00:26:22.768 [2024-11-26 21:07:13.658922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.768 [2024-11-26 21:07:13.658949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.768 qpair failed and we were unable to recover it. 00:26:22.768 [2024-11-26 21:07:13.659062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.768 [2024-11-26 21:07:13.659091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.768 qpair failed and we were unable to recover it. 
00:26:22.768 [2024-11-26 21:07:13.659198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.768 [2024-11-26 21:07:13.659225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.768 qpair failed and we were unable to recover it. 00:26:22.768 [2024-11-26 21:07:13.659402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.768 [2024-11-26 21:07:13.659447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.768 qpair failed and we were unable to recover it. 00:26:22.768 [2024-11-26 21:07:13.659549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.768 [2024-11-26 21:07:13.659575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.768 qpair failed and we were unable to recover it. 00:26:22.768 [2024-11-26 21:07:13.659716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.768 [2024-11-26 21:07:13.659744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.768 qpair failed and we were unable to recover it. 00:26:22.768 [2024-11-26 21:07:13.659905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.768 [2024-11-26 21:07:13.659932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.768 qpair failed and we were unable to recover it. 
00:26:22.768 [2024-11-26 21:07:13.660066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.768 [2024-11-26 21:07:13.660093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.768 qpair failed and we were unable to recover it. 00:26:22.768 [2024-11-26 21:07:13.660213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.768 [2024-11-26 21:07:13.660239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.769 qpair failed and we were unable to recover it. 00:26:22.769 [2024-11-26 21:07:13.660382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.769 [2024-11-26 21:07:13.660408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.769 qpair failed and we were unable to recover it. 00:26:22.769 [2024-11-26 21:07:13.660524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.769 [2024-11-26 21:07:13.660551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.769 qpair failed and we were unable to recover it. 00:26:22.769 [2024-11-26 21:07:13.660690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.769 [2024-11-26 21:07:13.660716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.769 qpair failed and we were unable to recover it. 
00:26:22.769 [2024-11-26 21:07:13.660854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.769 [2024-11-26 21:07:13.660880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.769 qpair failed and we were unable to recover it. 00:26:22.769 [2024-11-26 21:07:13.661012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.769 [2024-11-26 21:07:13.661043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.769 qpair failed and we were unable to recover it. 00:26:22.769 [2024-11-26 21:07:13.661192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.769 [2024-11-26 21:07:13.661220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.769 qpair failed and we were unable to recover it. 00:26:22.769 [2024-11-26 21:07:13.661360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.769 [2024-11-26 21:07:13.661386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.769 qpair failed and we were unable to recover it. 00:26:22.769 [2024-11-26 21:07:13.661553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.769 [2024-11-26 21:07:13.661579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.769 qpair failed and we were unable to recover it. 
00:26:22.769 [2024-11-26 21:07:13.661715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.769 [2024-11-26 21:07:13.661759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.769 qpair failed and we were unable to recover it. 00:26:22.769 [2024-11-26 21:07:13.661903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.769 [2024-11-26 21:07:13.661933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.769 qpair failed and we were unable to recover it. 00:26:22.769 [2024-11-26 21:07:13.662118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.769 [2024-11-26 21:07:13.662144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.769 qpair failed and we were unable to recover it. 00:26:22.769 [2024-11-26 21:07:13.662271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.769 [2024-11-26 21:07:13.662297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.769 qpair failed and we were unable to recover it. 00:26:22.769 [2024-11-26 21:07:13.662454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.769 [2024-11-26 21:07:13.662483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.769 qpair failed and we were unable to recover it. 
00:26:22.769 [2024-11-26 21:07:13.662616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.769 [2024-11-26 21:07:13.662644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.769 qpair failed and we were unable to recover it. 00:26:22.769 [2024-11-26 21:07:13.662770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.769 [2024-11-26 21:07:13.662797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.769 qpair failed and we were unable to recover it. 00:26:22.769 [2024-11-26 21:07:13.662947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.769 [2024-11-26 21:07:13.662976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.769 qpair failed and we were unable to recover it. 00:26:22.769 [2024-11-26 21:07:13.663173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.769 [2024-11-26 21:07:13.663216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.769 qpair failed and we were unable to recover it. 00:26:22.769 [2024-11-26 21:07:13.663314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.769 [2024-11-26 21:07:13.663340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.769 qpair failed and we were unable to recover it. 
00:26:22.769 [2024-11-26 21:07:13.663489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.769 [2024-11-26 21:07:13.663528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:22.769 qpair failed and we were unable to recover it. 00:26:22.769 [2024-11-26 21:07:13.663670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.769 [2024-11-26 21:07:13.663710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.769 qpair failed and we were unable to recover it. 00:26:22.769 [2024-11-26 21:07:13.663866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.769 [2024-11-26 21:07:13.663909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:22.769 qpair failed and we were unable to recover it. 00:26:22.769 [2024-11-26 21:07:13.664047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.769 [2024-11-26 21:07:13.664074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:22.769 qpair failed and we were unable to recover it. 00:26:22.769 [2024-11-26 21:07:13.664269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.769 [2024-11-26 21:07:13.664320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:22.769 qpair failed and we were unable to recover it. 
00:26:22.769 [2024-11-26 21:07:13.664515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.769 [2024-11-26 21:07:13.664573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:22.769 qpair failed and we were unable to recover it. 00:26:22.769 [2024-11-26 21:07:13.664701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.769 [2024-11-26 21:07:13.664727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:22.769 qpair failed and we were unable to recover it. 00:26:22.769 [2024-11-26 21:07:13.664832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.769 [2024-11-26 21:07:13.664857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:22.769 qpair failed and we were unable to recover it. 00:26:22.769 [2024-11-26 21:07:13.664986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.769 [2024-11-26 21:07:13.665027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:22.769 qpair failed and we were unable to recover it. 00:26:22.769 [2024-11-26 21:07:13.665241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.769 [2024-11-26 21:07:13.665295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:22.769 qpair failed and we were unable to recover it. 
00:26:22.769 [2024-11-26 21:07:13.665498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.769 [2024-11-26 21:07:13.665523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:22.769 qpair failed and we were unable to recover it. 00:26:22.769 [2024-11-26 21:07:13.665697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.769 [2024-11-26 21:07:13.665726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.769 qpair failed and we were unable to recover it. 00:26:22.769 [2024-11-26 21:07:13.665840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.769 [2024-11-26 21:07:13.665868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.769 qpair failed and we were unable to recover it. 00:26:22.769 [2024-11-26 21:07:13.666032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.769 [2024-11-26 21:07:13.666058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.769 qpair failed and we were unable to recover it. 00:26:22.769 [2024-11-26 21:07:13.666194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.769 [2024-11-26 21:07:13.666220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.769 qpair failed and we were unable to recover it. 
00:26:22.769 [2024-11-26 21:07:13.666438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.769 [2024-11-26 21:07:13.666491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.769 qpair failed and we were unable to recover it. 00:26:22.769 [2024-11-26 21:07:13.666638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.769 [2024-11-26 21:07:13.666667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:22.769 qpair failed and we were unable to recover it. 00:26:22.769 [2024-11-26 21:07:13.666843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.769 [2024-11-26 21:07:13.666883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:22.769 qpair failed and we were unable to recover it. 00:26:22.769 [2024-11-26 21:07:13.667084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.769 [2024-11-26 21:07:13.667142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.769 qpair failed and we were unable to recover it. 00:26:22.769 [2024-11-26 21:07:13.667313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.769 [2024-11-26 21:07:13.667340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.770 qpair failed and we were unable to recover it. 
00:26:22.770 [2024-11-26 21:07:13.667528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.770 [2024-11-26 21:07:13.667577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.770 qpair failed and we were unable to recover it. 00:26:22.770 [2024-11-26 21:07:13.667677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.770 [2024-11-26 21:07:13.667710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.770 qpair failed and we were unable to recover it. 00:26:22.770 [2024-11-26 21:07:13.667845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.770 [2024-11-26 21:07:13.667871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.770 qpair failed and we were unable to recover it. 00:26:22.770 [2024-11-26 21:07:13.668024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.770 [2024-11-26 21:07:13.668068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.770 qpair failed and we were unable to recover it. 00:26:22.770 [2024-11-26 21:07:13.668200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.770 [2024-11-26 21:07:13.668244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.770 qpair failed and we were unable to recover it. 
00:26:22.770 [2024-11-26 21:07:13.668453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.770 [2024-11-26 21:07:13.668480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.770 qpair failed and we were unable to recover it. 00:26:22.770 [2024-11-26 21:07:13.668585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.770 [2024-11-26 21:07:13.668612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:22.770 qpair failed and we were unable to recover it. 00:26:22.770 [2024-11-26 21:07:13.668769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.770 [2024-11-26 21:07:13.668808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:22.770 qpair failed and we were unable to recover it. 00:26:22.770 [2024-11-26 21:07:13.668923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.770 [2024-11-26 21:07:13.668950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:22.770 qpair failed and we were unable to recover it. 00:26:22.770 [2024-11-26 21:07:13.669067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.770 [2024-11-26 21:07:13.669093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:22.770 qpair failed and we were unable to recover it. 
00:26:22.770 [2024-11-26 21:07:13.669228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.770 [2024-11-26 21:07:13.669254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:22.770 qpair failed and we were unable to recover it. 00:26:22.770 [2024-11-26 21:07:13.669357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.770 [2024-11-26 21:07:13.669382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:22.770 qpair failed and we were unable to recover it. 00:26:22.770 [2024-11-26 21:07:13.669490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.770 [2024-11-26 21:07:13.669516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:22.770 qpair failed and we were unable to recover it. 00:26:22.770 [2024-11-26 21:07:13.669649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.770 [2024-11-26 21:07:13.669674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:22.770 qpair failed and we were unable to recover it. 00:26:22.770 [2024-11-26 21:07:13.669827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.770 [2024-11-26 21:07:13.669853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:22.770 qpair failed and we were unable to recover it. 
00:26:22.770 [2024-11-26 21:07:13.669988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.770 [2024-11-26 21:07:13.670016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:22.770 qpair failed and we were unable to recover it.
00:26:22.770 [2024-11-26 21:07:13.670191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.770 [2024-11-26 21:07:13.670226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:22.770 qpair failed and we were unable to recover it.
00:26:22.770 [2024-11-26 21:07:13.670349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.770 [2024-11-26 21:07:13.670377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:22.770 qpair failed and we were unable to recover it.
00:26:22.770 [2024-11-26 21:07:13.670499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.770 [2024-11-26 21:07:13.670529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:22.770 qpair failed and we were unable to recover it.
00:26:22.770 [2024-11-26 21:07:13.670693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.770 [2024-11-26 21:07:13.670721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:22.770 qpair failed and we were unable to recover it.
00:26:22.770 [2024-11-26 21:07:13.670857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.770 [2024-11-26 21:07:13.670882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:22.770 qpair failed and we were unable to recover it.
00:26:22.770 [2024-11-26 21:07:13.671080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.770 [2024-11-26 21:07:13.671106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:22.770 qpair failed and we were unable to recover it.
00:26:22.770 [2024-11-26 21:07:13.671221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.770 [2024-11-26 21:07:13.671247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:22.770 qpair failed and we were unable to recover it.
00:26:22.770 [2024-11-26 21:07:13.671405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.770 [2024-11-26 21:07:13.671449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:22.770 qpair failed and we were unable to recover it.
00:26:22.770 [2024-11-26 21:07:13.671561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.770 [2024-11-26 21:07:13.671587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:22.770 qpair failed and we were unable to recover it.
00:26:22.770 [2024-11-26 21:07:13.671716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.770 [2024-11-26 21:07:13.671741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:22.770 qpair failed and we were unable to recover it.
00:26:22.770 [2024-11-26 21:07:13.671846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.770 [2024-11-26 21:07:13.671874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:22.770 qpair failed and we were unable to recover it.
00:26:22.770 [2024-11-26 21:07:13.671985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.770 [2024-11-26 21:07:13.672011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:22.771 qpair failed and we were unable to recover it.
00:26:22.771 [2024-11-26 21:07:13.672146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.771 [2024-11-26 21:07:13.672172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:22.771 qpair failed and we were unable to recover it.
00:26:22.771 [2024-11-26 21:07:13.672301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.771 [2024-11-26 21:07:13.672326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:22.771 qpair failed and we were unable to recover it.
00:26:22.771 [2024-11-26 21:07:13.672469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.771 [2024-11-26 21:07:13.672496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:22.771 qpair failed and we were unable to recover it.
00:26:22.771 [2024-11-26 21:07:13.672629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.771 [2024-11-26 21:07:13.672655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:22.771 qpair failed and we were unable to recover it.
00:26:22.771 [2024-11-26 21:07:13.672819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.771 [2024-11-26 21:07:13.672864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:22.771 qpair failed and we were unable to recover it.
00:26:22.771 [2024-11-26 21:07:13.672991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.771 [2024-11-26 21:07:13.673020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:22.771 qpair failed and we were unable to recover it.
00:26:22.771 [2024-11-26 21:07:13.673151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.771 [2024-11-26 21:07:13.673177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:22.771 qpair failed and we were unable to recover it.
00:26:22.771 [2024-11-26 21:07:13.673282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.771 [2024-11-26 21:07:13.673308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:22.771 qpair failed and we were unable to recover it.
00:26:23.042 [2024-11-26 21:07:13.673445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.042 [2024-11-26 21:07:13.673471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.042 qpair failed and we were unable to recover it.
00:26:23.042 [2024-11-26 21:07:13.673575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.042 [2024-11-26 21:07:13.673600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.042 qpair failed and we were unable to recover it.
00:26:23.042 [2024-11-26 21:07:13.673733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.042 [2024-11-26 21:07:13.673760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.042 qpair failed and we were unable to recover it.
00:26:23.042 [2024-11-26 21:07:13.673901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.042 [2024-11-26 21:07:13.673927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.042 qpair failed and we were unable to recover it.
00:26:23.042 [2024-11-26 21:07:13.674038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.042 [2024-11-26 21:07:13.674064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.042 qpair failed and we were unable to recover it.
00:26:23.042 [2024-11-26 21:07:13.674204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.042 [2024-11-26 21:07:13.674230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.042 qpair failed and we were unable to recover it.
00:26:23.042 [2024-11-26 21:07:13.674363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.042 [2024-11-26 21:07:13.674389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.042 qpair failed and we were unable to recover it.
00:26:23.042 [2024-11-26 21:07:13.674496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.042 [2024-11-26 21:07:13.674526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.042 qpair failed and we were unable to recover it.
00:26:23.042 [2024-11-26 21:07:13.674658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.042 [2024-11-26 21:07:13.674690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.042 qpair failed and we were unable to recover it.
00:26:23.042 [2024-11-26 21:07:13.674824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.042 [2024-11-26 21:07:13.674867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.042 qpair failed and we were unable to recover it.
00:26:23.042 [2024-11-26 21:07:13.674989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.042 [2024-11-26 21:07:13.675027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.042 qpair failed and we were unable to recover it.
00:26:23.042 [2024-11-26 21:07:13.675146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.042 [2024-11-26 21:07:13.675174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.042 qpair failed and we were unable to recover it.
00:26:23.042 [2024-11-26 21:07:13.675314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.042 [2024-11-26 21:07:13.675340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.042 qpair failed and we were unable to recover it.
00:26:23.042 [2024-11-26 21:07:13.675478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.042 [2024-11-26 21:07:13.675503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.042 qpair failed and we were unable to recover it.
00:26:23.042 [2024-11-26 21:07:13.675635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.042 [2024-11-26 21:07:13.675661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.042 qpair failed and we were unable to recover it.
00:26:23.042 [2024-11-26 21:07:13.675771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.042 [2024-11-26 21:07:13.675798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.042 qpair failed and we were unable to recover it.
00:26:23.042 [2024-11-26 21:07:13.675921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.042 [2024-11-26 21:07:13.675950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.042 qpair failed and we were unable to recover it.
00:26:23.042 [2024-11-26 21:07:13.676094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.042 [2024-11-26 21:07:13.676122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.042 qpair failed and we were unable to recover it.
00:26:23.042 [2024-11-26 21:07:13.676316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.042 [2024-11-26 21:07:13.676345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.042 qpair failed and we were unable to recover it.
00:26:23.042 [2024-11-26 21:07:13.676467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.042 [2024-11-26 21:07:13.676495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.042 qpair failed and we were unable to recover it.
00:26:23.043 [2024-11-26 21:07:13.676642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.043 [2024-11-26 21:07:13.676671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.043 qpair failed and we were unable to recover it.
00:26:23.043 [2024-11-26 21:07:13.676842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.043 [2024-11-26 21:07:13.676868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.043 qpair failed and we were unable to recover it.
00:26:23.043 [2024-11-26 21:07:13.677080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.043 [2024-11-26 21:07:13.677106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.043 qpair failed and we were unable to recover it.
00:26:23.043 [2024-11-26 21:07:13.677291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.043 [2024-11-26 21:07:13.677343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.043 qpair failed and we were unable to recover it.
00:26:23.043 [2024-11-26 21:07:13.677526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.043 [2024-11-26 21:07:13.677552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.043 qpair failed and we were unable to recover it.
00:26:23.043 [2024-11-26 21:07:13.677653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.043 [2024-11-26 21:07:13.677679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.043 qpair failed and we were unable to recover it.
00:26:23.043 [2024-11-26 21:07:13.677798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.043 [2024-11-26 21:07:13.677824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.043 qpair failed and we were unable to recover it.
00:26:23.043 [2024-11-26 21:07:13.677987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.043 [2024-11-26 21:07:13.678012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.043 qpair failed and we were unable to recover it.
00:26:23.043 [2024-11-26 21:07:13.678197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.043 [2024-11-26 21:07:13.678258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.043 qpair failed and we were unable to recover it.
00:26:23.043 [2024-11-26 21:07:13.678478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.043 [2024-11-26 21:07:13.678507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.043 qpair failed and we were unable to recover it.
00:26:23.043 [2024-11-26 21:07:13.678650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.043 [2024-11-26 21:07:13.678678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.043 qpair failed and we were unable to recover it.
00:26:23.043 [2024-11-26 21:07:13.678810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.043 [2024-11-26 21:07:13.678836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.043 qpair failed and we were unable to recover it.
00:26:23.043 [2024-11-26 21:07:13.678950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.043 [2024-11-26 21:07:13.678994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.043 qpair failed and we were unable to recover it.
00:26:23.043 [2024-11-26 21:07:13.679180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.043 [2024-11-26 21:07:13.679239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.043 qpair failed and we were unable to recover it.
00:26:23.043 [2024-11-26 21:07:13.679391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.043 [2024-11-26 21:07:13.679424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.043 qpair failed and we were unable to recover it.
00:26:23.043 [2024-11-26 21:07:13.679573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.043 [2024-11-26 21:07:13.679601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.043 qpair failed and we were unable to recover it.
00:26:23.043 [2024-11-26 21:07:13.679752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.043 [2024-11-26 21:07:13.679778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.043 qpair failed and we were unable to recover it.
00:26:23.043 [2024-11-26 21:07:13.679916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.043 [2024-11-26 21:07:13.679943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.043 qpair failed and we were unable to recover it.
00:26:23.043 [2024-11-26 21:07:13.680082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.043 [2024-11-26 21:07:13.680107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.043 qpair failed and we were unable to recover it.
00:26:23.043 [2024-11-26 21:07:13.680234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.043 [2024-11-26 21:07:13.680259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.043 qpair failed and we were unable to recover it.
00:26:23.043 [2024-11-26 21:07:13.680391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.043 [2024-11-26 21:07:13.680416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.043 qpair failed and we were unable to recover it.
00:26:23.043 [2024-11-26 21:07:13.680573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.043 [2024-11-26 21:07:13.680600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.043 qpair failed and we were unable to recover it.
00:26:23.043 [2024-11-26 21:07:13.680763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.043 [2024-11-26 21:07:13.680789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.043 qpair failed and we were unable to recover it.
00:26:23.043 [2024-11-26 21:07:13.680946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.043 [2024-11-26 21:07:13.680972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.043 qpair failed and we were unable to recover it.
00:26:23.043 [2024-11-26 21:07:13.681122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.043 [2024-11-26 21:07:13.681150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.043 qpair failed and we were unable to recover it.
00:26:23.043 [2024-11-26 21:07:13.681319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.043 [2024-11-26 21:07:13.681348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.043 qpair failed and we were unable to recover it.
00:26:23.043 [2024-11-26 21:07:13.681497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.043 [2024-11-26 21:07:13.681528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.043 qpair failed and we were unable to recover it.
00:26:23.043 [2024-11-26 21:07:13.681703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.043 [2024-11-26 21:07:13.681747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.043 qpair failed and we were unable to recover it.
00:26:23.043 [2024-11-26 21:07:13.681896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.043 [2024-11-26 21:07:13.681922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.043 qpair failed and we were unable to recover it.
00:26:23.043 [2024-11-26 21:07:13.682063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.043 [2024-11-26 21:07:13.682088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.043 qpair failed and we were unable to recover it.
00:26:23.043 [2024-11-26 21:07:13.682187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.043 [2024-11-26 21:07:13.682213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.043 qpair failed and we were unable to recover it.
00:26:23.043 [2024-11-26 21:07:13.682373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.043 [2024-11-26 21:07:13.682401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.043 qpair failed and we were unable to recover it.
00:26:23.043 [2024-11-26 21:07:13.682540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.043 [2024-11-26 21:07:13.682569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.043 qpair failed and we were unable to recover it.
00:26:23.043 [2024-11-26 21:07:13.682751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.043 [2024-11-26 21:07:13.682777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.043 qpair failed and we were unable to recover it.
00:26:23.043 [2024-11-26 21:07:13.682908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.043 [2024-11-26 21:07:13.682935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.043 qpair failed and we were unable to recover it.
00:26:23.043 [2024-11-26 21:07:13.683088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.043 [2024-11-26 21:07:13.683117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.043 qpair failed and we were unable to recover it.
00:26:23.043 [2024-11-26 21:07:13.683260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.043 [2024-11-26 21:07:13.683288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.043 qpair failed and we were unable to recover it.
00:26:23.043 [2024-11-26 21:07:13.683456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.043 [2024-11-26 21:07:13.683484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.043 qpair failed and we were unable to recover it.
00:26:23.043 [2024-11-26 21:07:13.683594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.044 [2024-11-26 21:07:13.683623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.044 qpair failed and we were unable to recover it.
00:26:23.044 [2024-11-26 21:07:13.683784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.044 [2024-11-26 21:07:13.683810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.044 qpair failed and we were unable to recover it.
00:26:23.044 [2024-11-26 21:07:13.683920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.044 [2024-11-26 21:07:13.683946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.044 qpair failed and we were unable to recover it.
00:26:23.044 [2024-11-26 21:07:13.684125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.044 [2024-11-26 21:07:13.684151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.044 qpair failed and we were unable to recover it.
00:26:23.044 [2024-11-26 21:07:13.684261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.044 [2024-11-26 21:07:13.684287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.044 qpair failed and we were unable to recover it.
00:26:23.044 [2024-11-26 21:07:13.684447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.044 [2024-11-26 21:07:13.684472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.044 qpair failed and we were unable to recover it.
00:26:23.044 [2024-11-26 21:07:13.684644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.044 [2024-11-26 21:07:13.684669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.044 qpair failed and we were unable to recover it.
00:26:23.044 [2024-11-26 21:07:13.684813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.044 [2024-11-26 21:07:13.684839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.044 qpair failed and we were unable to recover it.
00:26:23.044 [2024-11-26 21:07:13.684971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.044 [2024-11-26 21:07:13.684996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.044 qpair failed and we were unable to recover it.
00:26:23.044 [2024-11-26 21:07:13.685159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.044 [2024-11-26 21:07:13.685202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.044 qpair failed and we were unable to recover it.
00:26:23.044 [2024-11-26 21:07:13.685326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.044 [2024-11-26 21:07:13.685355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.044 qpair failed and we were unable to recover it.
00:26:23.044 [2024-11-26 21:07:13.685576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.044 [2024-11-26 21:07:13.685602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.044 qpair failed and we were unable to recover it.
00:26:23.044 [2024-11-26 21:07:13.685738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.044 [2024-11-26 21:07:13.685764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.044 qpair failed and we were unable to recover it.
00:26:23.044 [2024-11-26 21:07:13.685897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.044 [2024-11-26 21:07:13.685924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.044 qpair failed and we were unable to recover it.
00:26:23.044 [2024-11-26 21:07:13.686051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.044 [2024-11-26 21:07:13.686079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.044 qpair failed and we were unable to recover it.
00:26:23.044 [2024-11-26 21:07:13.686248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.044 [2024-11-26 21:07:13.686277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.044 qpair failed and we were unable to recover it.
00:26:23.044 [2024-11-26 21:07:13.686417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.044 [2024-11-26 21:07:13.686445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.044 qpair failed and we were unable to recover it.
00:26:23.044 [2024-11-26 21:07:13.686573] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x645f30 is same with the state(6) to be set
00:26:23.044 [2024-11-26 21:07:13.686792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.044 [2024-11-26 21:07:13.686832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420
00:26:23.044 qpair failed and we were unable to recover it.
00:26:23.044 [2024-11-26 21:07:13.687009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.044 [2024-11-26 21:07:13.687053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420
00:26:23.044 qpair failed and we were unable to recover it.
00:26:23.044 [2024-11-26 21:07:13.687210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.044 [2024-11-26 21:07:13.687236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420
00:26:23.044 qpair failed and we were unable to recover it.
00:26:23.044 [2024-11-26 21:07:13.687415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.044 [2024-11-26 21:07:13.687474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420
00:26:23.044 qpair failed and we were unable to recover it.
00:26:23.044 [2024-11-26 21:07:13.687626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.044 [2024-11-26 21:07:13.687652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420
00:26:23.044 qpair failed and we were unable to recover it.
00:26:23.044 [2024-11-26 21:07:13.687789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.044 [2024-11-26 21:07:13.687816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420
00:26:23.044 qpair failed and we were unable to recover it.
00:26:23.044 [2024-11-26 21:07:13.687928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.044 [2024-11-26 21:07:13.687956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420
00:26:23.044 qpair failed and we were unable to recover it.
00:26:23.044 [2024-11-26 21:07:13.688143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.044 [2024-11-26 21:07:13.688204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420
00:26:23.044 qpair failed and we were unable to recover it.
00:26:23.044 [2024-11-26 21:07:13.688328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.044 [2024-11-26 21:07:13.688356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420
00:26:23.044 qpair failed and we were unable to recover it.
00:26:23.044 [2024-11-26 21:07:13.688509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.044 [2024-11-26 21:07:13.688538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420
00:26:23.044 qpair failed and we were unable to recover it.
00:26:23.044 [2024-11-26 21:07:13.688710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.044 [2024-11-26 21:07:13.688748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.044 qpair failed and we were unable to recover it.
00:26:23.044 [2024-11-26 21:07:13.688889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.044 [2024-11-26 21:07:13.688917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.044 qpair failed and we were unable to recover it.
00:26:23.044 [2024-11-26 21:07:13.689071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.044 [2024-11-26 21:07:13.689115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.044 qpair failed and we were unable to recover it.
00:26:23.044 [2024-11-26 21:07:13.689253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.044 [2024-11-26 21:07:13.689295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.044 qpair failed and we were unable to recover it.
00:26:23.044 [2024-11-26 21:07:13.689435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.044 [2024-11-26 21:07:13.689461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.044 qpair failed and we were unable to recover it.
00:26:23.044 [2024-11-26 21:07:13.689598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.044 [2024-11-26 21:07:13.689624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.044 qpair failed and we were unable to recover it.
00:26:23.044 [2024-11-26 21:07:13.689764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.044 [2024-11-26 21:07:13.689791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.044 qpair failed and we were unable to recover it.
00:26:23.044 [2024-11-26 21:07:13.689899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.044 [2024-11-26 21:07:13.689927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.044 qpair failed and we were unable to recover it.
00:26:23.044 [2024-11-26 21:07:13.690101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.044 [2024-11-26 21:07:13.690145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.044 qpair failed and we were unable to recover it.
00:26:23.044 [2024-11-26 21:07:13.690294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.044 [2024-11-26 21:07:13.690338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.044 qpair failed and we were unable to recover it. 00:26:23.044 [2024-11-26 21:07:13.690482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.044 [2024-11-26 21:07:13.690508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.044 qpair failed and we were unable to recover it. 00:26:23.044 [2024-11-26 21:07:13.690641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.045 [2024-11-26 21:07:13.690668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.045 qpair failed and we were unable to recover it. 00:26:23.045 [2024-11-26 21:07:13.690837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.045 [2024-11-26 21:07:13.690880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.045 qpair failed and we were unable to recover it. 00:26:23.045 [2024-11-26 21:07:13.691054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.045 [2024-11-26 21:07:13.691081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.045 qpair failed and we were unable to recover it. 
00:26:23.045 [2024-11-26 21:07:13.691193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.045 [2024-11-26 21:07:13.691219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.045 qpair failed and we were unable to recover it. 00:26:23.045 [2024-11-26 21:07:13.691330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.045 [2024-11-26 21:07:13.691356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.045 qpair failed and we were unable to recover it. 00:26:23.045 [2024-11-26 21:07:13.691489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.045 [2024-11-26 21:07:13.691521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.045 qpair failed and we were unable to recover it. 00:26:23.045 [2024-11-26 21:07:13.691627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.045 [2024-11-26 21:07:13.691654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.045 qpair failed and we were unable to recover it. 00:26:23.045 [2024-11-26 21:07:13.691786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.045 [2024-11-26 21:07:13.691817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.045 qpair failed and we were unable to recover it. 
00:26:23.045 [2024-11-26 21:07:13.691993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.045 [2024-11-26 21:07:13.692036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.045 qpair failed and we were unable to recover it. 00:26:23.045 [2024-11-26 21:07:13.692206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.045 [2024-11-26 21:07:13.692248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.045 qpair failed and we were unable to recover it. 00:26:23.045 [2024-11-26 21:07:13.692418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.045 [2024-11-26 21:07:13.692447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.045 qpair failed and we were unable to recover it. 00:26:23.045 [2024-11-26 21:07:13.692606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.045 [2024-11-26 21:07:13.692632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.045 qpair failed and we were unable to recover it. 00:26:23.045 [2024-11-26 21:07:13.692764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.045 [2024-11-26 21:07:13.692794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.045 qpair failed and we were unable to recover it. 
00:26:23.045 [2024-11-26 21:07:13.692971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.045 [2024-11-26 21:07:13.692999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.045 qpair failed and we were unable to recover it. 00:26:23.045 [2024-11-26 21:07:13.693109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.045 [2024-11-26 21:07:13.693138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.045 qpair failed and we were unable to recover it. 00:26:23.045 [2024-11-26 21:07:13.693292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.045 [2024-11-26 21:07:13.693340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.045 qpair failed and we were unable to recover it. 00:26:23.045 [2024-11-26 21:07:13.693479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.045 [2024-11-26 21:07:13.693508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.045 qpair failed and we were unable to recover it. 00:26:23.045 [2024-11-26 21:07:13.693693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.045 [2024-11-26 21:07:13.693719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.045 qpair failed and we were unable to recover it. 
00:26:23.045 [2024-11-26 21:07:13.693846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.045 [2024-11-26 21:07:13.693872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.045 qpair failed and we were unable to recover it. 00:26:23.045 [2024-11-26 21:07:13.693994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.045 [2024-11-26 21:07:13.694022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.045 qpair failed and we were unable to recover it. 00:26:23.045 [2024-11-26 21:07:13.694198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.045 [2024-11-26 21:07:13.694226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.045 qpair failed and we were unable to recover it. 00:26:23.045 [2024-11-26 21:07:13.694389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.045 [2024-11-26 21:07:13.694417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.045 qpair failed and we were unable to recover it. 00:26:23.045 [2024-11-26 21:07:13.694559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.045 [2024-11-26 21:07:13.694587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.045 qpair failed and we were unable to recover it. 
00:26:23.045 [2024-11-26 21:07:13.694765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.045 [2024-11-26 21:07:13.694792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.045 qpair failed and we were unable to recover it. 00:26:23.045 [2024-11-26 21:07:13.694900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.045 [2024-11-26 21:07:13.694925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.045 qpair failed and we were unable to recover it. 00:26:23.045 [2024-11-26 21:07:13.695162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.045 [2024-11-26 21:07:13.695191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.045 qpair failed and we were unable to recover it. 00:26:23.045 [2024-11-26 21:07:13.695413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.045 [2024-11-26 21:07:13.695463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.045 qpair failed and we were unable to recover it. 00:26:23.045 [2024-11-26 21:07:13.695580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.045 [2024-11-26 21:07:13.695609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.045 qpair failed and we were unable to recover it. 
00:26:23.045 [2024-11-26 21:07:13.695798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.045 [2024-11-26 21:07:13.695824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.045 qpair failed and we were unable to recover it. 00:26:23.045 [2024-11-26 21:07:13.695959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.045 [2024-11-26 21:07:13.695985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.045 qpair failed and we were unable to recover it. 00:26:23.045 [2024-11-26 21:07:13.696142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.045 [2024-11-26 21:07:13.696170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.045 qpair failed and we were unable to recover it. 00:26:23.045 [2024-11-26 21:07:13.696335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.045 [2024-11-26 21:07:13.696393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.045 qpair failed and we were unable to recover it. 00:26:23.045 [2024-11-26 21:07:13.696548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.045 [2024-11-26 21:07:13.696584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.045 qpair failed and we were unable to recover it. 
00:26:23.045 [2024-11-26 21:07:13.696746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.045 [2024-11-26 21:07:13.696773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.045 qpair failed and we were unable to recover it. 00:26:23.045 [2024-11-26 21:07:13.697009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.045 [2024-11-26 21:07:13.697038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.045 qpair failed and we were unable to recover it. 00:26:23.045 [2024-11-26 21:07:13.697231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.045 [2024-11-26 21:07:13.697281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.045 qpair failed and we were unable to recover it. 00:26:23.045 [2024-11-26 21:07:13.697437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.045 [2024-11-26 21:07:13.697462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.045 qpair failed and we were unable to recover it. 00:26:23.045 [2024-11-26 21:07:13.697573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.045 [2024-11-26 21:07:13.697599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.045 qpair failed and we were unable to recover it. 
00:26:23.045 [2024-11-26 21:07:13.697737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.046 [2024-11-26 21:07:13.697763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.046 qpair failed and we were unable to recover it. 00:26:23.046 [2024-11-26 21:07:13.697901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.046 [2024-11-26 21:07:13.697926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.046 qpair failed and we were unable to recover it. 00:26:23.046 [2024-11-26 21:07:13.698065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.046 [2024-11-26 21:07:13.698093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.046 qpair failed and we were unable to recover it. 00:26:23.046 [2024-11-26 21:07:13.698290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.046 [2024-11-26 21:07:13.698345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.046 qpair failed and we were unable to recover it. 00:26:23.046 [2024-11-26 21:07:13.698472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.046 [2024-11-26 21:07:13.698502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.046 qpair failed and we were unable to recover it. 
00:26:23.046 [2024-11-26 21:07:13.698667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.046 [2024-11-26 21:07:13.698719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:23.046 qpair failed and we were unable to recover it. 00:26:23.046 [2024-11-26 21:07:13.698906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.046 [2024-11-26 21:07:13.698936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:23.046 qpair failed and we were unable to recover it. 00:26:23.046 [2024-11-26 21:07:13.699089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.046 [2024-11-26 21:07:13.699120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:23.046 qpair failed and we were unable to recover it. 00:26:23.046 [2024-11-26 21:07:13.699377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.046 [2024-11-26 21:07:13.699429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.046 qpair failed and we were unable to recover it. 00:26:23.046 [2024-11-26 21:07:13.699626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.046 [2024-11-26 21:07:13.699665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.046 qpair failed and we were unable to recover it. 
00:26:23.046 [2024-11-26 21:07:13.699810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.046 [2024-11-26 21:07:13.699837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.046 qpair failed and we were unable to recover it. 00:26:23.046 [2024-11-26 21:07:13.699966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.046 [2024-11-26 21:07:13.699996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.046 qpair failed and we were unable to recover it. 00:26:23.046 [2024-11-26 21:07:13.700186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.046 [2024-11-26 21:07:13.700213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.046 qpair failed and we were unable to recover it. 00:26:23.046 [2024-11-26 21:07:13.700317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.046 [2024-11-26 21:07:13.700343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.046 qpair failed and we were unable to recover it. 00:26:23.046 [2024-11-26 21:07:13.700452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.046 [2024-11-26 21:07:13.700478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.046 qpair failed and we were unable to recover it. 
00:26:23.046 [2024-11-26 21:07:13.700655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.046 [2024-11-26 21:07:13.700706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.046 qpair failed and we were unable to recover it. 00:26:23.046 [2024-11-26 21:07:13.700837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.046 [2024-11-26 21:07:13.700876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.046 qpair failed and we were unable to recover it. 00:26:23.046 [2024-11-26 21:07:13.701039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.046 [2024-11-26 21:07:13.701071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:23.046 qpair failed and we were unable to recover it. 00:26:23.046 [2024-11-26 21:07:13.701304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.046 [2024-11-26 21:07:13.701356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:23.046 qpair failed and we were unable to recover it. 00:26:23.046 [2024-11-26 21:07:13.701548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.046 [2024-11-26 21:07:13.701604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:23.046 qpair failed and we were unable to recover it. 
00:26:23.046 [2024-11-26 21:07:13.701767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.046 [2024-11-26 21:07:13.701795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:23.046 qpair failed and we were unable to recover it. 00:26:23.046 [2024-11-26 21:07:13.701957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.046 [2024-11-26 21:07:13.701989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:23.046 qpair failed and we were unable to recover it. 00:26:23.046 [2024-11-26 21:07:13.702152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.046 [2024-11-26 21:07:13.702177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:23.046 qpair failed and we were unable to recover it. 00:26:23.046 [2024-11-26 21:07:13.702317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.046 [2024-11-26 21:07:13.702345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:23.046 qpair failed and we were unable to recover it. 00:26:23.046 [2024-11-26 21:07:13.702495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.046 [2024-11-26 21:07:13.702524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:23.046 qpair failed and we were unable to recover it. 
00:26:23.046 [2024-11-26 21:07:13.702663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.046 [2024-11-26 21:07:13.702698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:23.046 qpair failed and we were unable to recover it. 00:26:23.046 [2024-11-26 21:07:13.702854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.046 [2024-11-26 21:07:13.702880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:23.046 qpair failed and we were unable to recover it. 00:26:23.046 [2024-11-26 21:07:13.702988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.046 [2024-11-26 21:07:13.703014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:23.046 qpair failed and we were unable to recover it. 00:26:23.046 [2024-11-26 21:07:13.703160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.046 [2024-11-26 21:07:13.703186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:23.046 qpair failed and we were unable to recover it. 00:26:23.046 [2024-11-26 21:07:13.703373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.046 [2024-11-26 21:07:13.703436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:23.046 qpair failed and we were unable to recover it. 
00:26:23.046 [2024-11-26 21:07:13.703628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.046 [2024-11-26 21:07:13.703658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:23.046 qpair failed and we were unable to recover it. 00:26:23.046 [2024-11-26 21:07:13.703803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.046 [2024-11-26 21:07:13.703829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:23.046 qpair failed and we were unable to recover it. 00:26:23.046 [2024-11-26 21:07:13.703961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.046 [2024-11-26 21:07:13.703986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:23.046 qpair failed and we were unable to recover it. 00:26:23.046 [2024-11-26 21:07:13.704101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.046 [2024-11-26 21:07:13.704126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:23.046 qpair failed and we were unable to recover it. 00:26:23.046 [2024-11-26 21:07:13.704267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.046 [2024-11-26 21:07:13.704293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:23.046 qpair failed and we were unable to recover it. 
00:26:23.046 [2024-11-26 21:07:13.704457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.046 [2024-11-26 21:07:13.704486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:23.046 qpair failed and we were unable to recover it. 00:26:23.046 [2024-11-26 21:07:13.704644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.047 [2024-11-26 21:07:13.704670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:23.047 qpair failed and we were unable to recover it. 00:26:23.047 [2024-11-26 21:07:13.704809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.047 [2024-11-26 21:07:13.704835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:23.047 qpair failed and we were unable to recover it. 00:26:23.047 [2024-11-26 21:07:13.704966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.047 [2024-11-26 21:07:13.704991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:23.047 qpair failed and we were unable to recover it. 00:26:23.047 [2024-11-26 21:07:13.705142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.047 [2024-11-26 21:07:13.705171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:23.047 qpair failed and we were unable to recover it. 
[The same three-line error sequence (posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error; "qpair failed and we were unable to recover it.") repeats without variation from 21:07:13.705343 through 21:07:13.724245, cycling over tqpair values 0x7feeec000b90, 0x7feef0000b90, and 0x7feef8000b90, all with addr=10.0.0.2, port=4420.]
00:26:23.050 [2024-11-26 21:07:13.724389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.050 [2024-11-26 21:07:13.724418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.050 qpair failed and we were unable to recover it. 00:26:23.050 [2024-11-26 21:07:13.724538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.050 [2024-11-26 21:07:13.724566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.050 qpair failed and we were unable to recover it. 00:26:23.050 [2024-11-26 21:07:13.724713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.050 [2024-11-26 21:07:13.724751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.050 qpair failed and we were unable to recover it. 00:26:23.050 [2024-11-26 21:07:13.724885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.050 [2024-11-26 21:07:13.724913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.050 qpair failed and we were unable to recover it. 00:26:23.050 [2024-11-26 21:07:13.725067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.050 [2024-11-26 21:07:13.725111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.050 qpair failed and we were unable to recover it. 
00:26:23.050 [2024-11-26 21:07:13.725290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.050 [2024-11-26 21:07:13.725347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.050 qpair failed and we were unable to recover it. 00:26:23.050 [2024-11-26 21:07:13.725519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.050 [2024-11-26 21:07:13.725569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.050 qpair failed and we were unable to recover it. 00:26:23.050 [2024-11-26 21:07:13.725673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.050 [2024-11-26 21:07:13.725713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.050 qpair failed and we were unable to recover it. 00:26:23.050 [2024-11-26 21:07:13.725847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.050 [2024-11-26 21:07:13.725892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.050 qpair failed and we were unable to recover it. 00:26:23.050 [2024-11-26 21:07:13.726078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.050 [2024-11-26 21:07:13.726126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.050 qpair failed and we were unable to recover it. 
00:26:23.050 [2024-11-26 21:07:13.726311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.050 [2024-11-26 21:07:13.726354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.050 qpair failed and we were unable to recover it. 00:26:23.050 [2024-11-26 21:07:13.726489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.050 [2024-11-26 21:07:13.726516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.050 qpair failed and we were unable to recover it. 00:26:23.050 [2024-11-26 21:07:13.726635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.050 [2024-11-26 21:07:13.726674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:23.050 qpair failed and we were unable to recover it. 00:26:23.050 [2024-11-26 21:07:13.726847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.050 [2024-11-26 21:07:13.726878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:23.050 qpair failed and we were unable to recover it. 00:26:23.050 [2024-11-26 21:07:13.727040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.050 [2024-11-26 21:07:13.727067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:23.050 qpair failed and we were unable to recover it. 
00:26:23.050 [2024-11-26 21:07:13.727228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.050 [2024-11-26 21:07:13.727255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:23.050 qpair failed and we were unable to recover it. 00:26:23.050 [2024-11-26 21:07:13.727440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.050 [2024-11-26 21:07:13.727469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:23.050 qpair failed and we were unable to recover it. 00:26:23.050 [2024-11-26 21:07:13.727648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.050 [2024-11-26 21:07:13.727681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.050 qpair failed and we were unable to recover it. 00:26:23.050 [2024-11-26 21:07:13.727863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.050 [2024-11-26 21:07:13.727910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.050 qpair failed and we were unable to recover it. 00:26:23.050 [2024-11-26 21:07:13.728097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.050 [2024-11-26 21:07:13.728141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.050 qpair failed and we were unable to recover it. 
00:26:23.050 [2024-11-26 21:07:13.728300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.050 [2024-11-26 21:07:13.728344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.050 qpair failed and we were unable to recover it. 00:26:23.050 [2024-11-26 21:07:13.728483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.050 [2024-11-26 21:07:13.728510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.050 qpair failed and we were unable to recover it. 00:26:23.050 [2024-11-26 21:07:13.728648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.050 [2024-11-26 21:07:13.728674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.050 qpair failed and we were unable to recover it. 00:26:23.050 [2024-11-26 21:07:13.728803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.050 [2024-11-26 21:07:13.728843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:23.050 qpair failed and we were unable to recover it. 00:26:23.050 [2024-11-26 21:07:13.728961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.050 [2024-11-26 21:07:13.728989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.050 qpair failed and we were unable to recover it. 
00:26:23.050 [2024-11-26 21:07:13.729102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.050 [2024-11-26 21:07:13.729129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.050 qpair failed and we were unable to recover it. 00:26:23.050 [2024-11-26 21:07:13.729312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.050 [2024-11-26 21:07:13.729341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.050 qpair failed and we were unable to recover it. 00:26:23.050 [2024-11-26 21:07:13.729501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.050 [2024-11-26 21:07:13.729545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.050 qpair failed and we were unable to recover it. 00:26:23.050 [2024-11-26 21:07:13.729663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.050 [2024-11-26 21:07:13.729699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.050 qpair failed and we were unable to recover it. 00:26:23.050 [2024-11-26 21:07:13.729882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.050 [2024-11-26 21:07:13.729907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.050 qpair failed and we were unable to recover it. 
00:26:23.050 [2024-11-26 21:07:13.730034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.050 [2024-11-26 21:07:13.730063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.050 qpair failed and we were unable to recover it. 00:26:23.050 [2024-11-26 21:07:13.730178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.050 [2024-11-26 21:07:13.730206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.050 qpair failed and we were unable to recover it. 00:26:23.050 [2024-11-26 21:07:13.730377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.050 [2024-11-26 21:07:13.730407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.050 qpair failed and we were unable to recover it. 00:26:23.050 [2024-11-26 21:07:13.730529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.050 [2024-11-26 21:07:13.730559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.050 qpair failed and we were unable to recover it. 00:26:23.050 [2024-11-26 21:07:13.730704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.050 [2024-11-26 21:07:13.730748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.050 qpair failed and we were unable to recover it. 
00:26:23.050 [2024-11-26 21:07:13.730853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.050 [2024-11-26 21:07:13.730879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.051 qpair failed and we were unable to recover it. 00:26:23.051 [2024-11-26 21:07:13.730978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.051 [2024-11-26 21:07:13.731004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.051 qpair failed and we were unable to recover it. 00:26:23.051 [2024-11-26 21:07:13.731159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.051 [2024-11-26 21:07:13.731188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.051 qpair failed and we were unable to recover it. 00:26:23.051 [2024-11-26 21:07:13.731361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.051 [2024-11-26 21:07:13.731390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.051 qpair failed and we were unable to recover it. 00:26:23.051 [2024-11-26 21:07:13.731529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.051 [2024-11-26 21:07:13.731557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.051 qpair failed and we were unable to recover it. 
00:26:23.051 [2024-11-26 21:07:13.731710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.051 [2024-11-26 21:07:13.731753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.051 qpair failed and we were unable to recover it. 00:26:23.051 [2024-11-26 21:07:13.731916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.051 [2024-11-26 21:07:13.731942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.051 qpair failed and we were unable to recover it. 00:26:23.051 [2024-11-26 21:07:13.732116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.051 [2024-11-26 21:07:13.732145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.051 qpair failed and we were unable to recover it. 00:26:23.051 [2024-11-26 21:07:13.732290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.051 [2024-11-26 21:07:13.732319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.051 qpair failed and we were unable to recover it. 00:26:23.051 [2024-11-26 21:07:13.732454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.051 [2024-11-26 21:07:13.732498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.051 qpair failed and we were unable to recover it. 
00:26:23.051 [2024-11-26 21:07:13.732693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.051 [2024-11-26 21:07:13.732738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.051 qpair failed and we were unable to recover it. 00:26:23.051 [2024-11-26 21:07:13.732845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.051 [2024-11-26 21:07:13.732872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.051 qpair failed and we were unable to recover it. 00:26:23.051 [2024-11-26 21:07:13.733004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.051 [2024-11-26 21:07:13.733034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.051 qpair failed and we were unable to recover it. 00:26:23.051 [2024-11-26 21:07:13.733206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.051 [2024-11-26 21:07:13.733235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.051 qpair failed and we were unable to recover it. 00:26:23.051 [2024-11-26 21:07:13.733348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.051 [2024-11-26 21:07:13.733377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.051 qpair failed and we were unable to recover it. 
00:26:23.051 [2024-11-26 21:07:13.733522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.051 [2024-11-26 21:07:13.733550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.051 qpair failed and we were unable to recover it. 00:26:23.051 [2024-11-26 21:07:13.733701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.051 [2024-11-26 21:07:13.733744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.051 qpair failed and we were unable to recover it. 00:26:23.051 [2024-11-26 21:07:13.733879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.051 [2024-11-26 21:07:13.733905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.051 qpair failed and we were unable to recover it. 00:26:23.051 [2024-11-26 21:07:13.734083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.051 [2024-11-26 21:07:13.734111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.051 qpair failed and we were unable to recover it. 00:26:23.051 [2024-11-26 21:07:13.734240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.051 [2024-11-26 21:07:13.734269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.051 qpair failed and we were unable to recover it. 
00:26:23.051 [2024-11-26 21:07:13.734415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.051 [2024-11-26 21:07:13.734444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.051 qpair failed and we were unable to recover it. 00:26:23.051 [2024-11-26 21:07:13.734609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.051 [2024-11-26 21:07:13.734648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.051 qpair failed and we were unable to recover it. 00:26:23.051 [2024-11-26 21:07:13.734799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.051 [2024-11-26 21:07:13.734828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.051 qpair failed and we were unable to recover it. 00:26:23.051 [2024-11-26 21:07:13.734992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.051 [2024-11-26 21:07:13.735024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.051 qpair failed and we were unable to recover it. 00:26:23.051 [2024-11-26 21:07:13.735180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.051 [2024-11-26 21:07:13.735223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.051 qpair failed and we were unable to recover it. 
00:26:23.051 [2024-11-26 21:07:13.735408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.051 [2024-11-26 21:07:13.735451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.051 qpair failed and we were unable to recover it. 00:26:23.051 [2024-11-26 21:07:13.735556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.051 [2024-11-26 21:07:13.735582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.051 qpair failed and we were unable to recover it. 00:26:23.051 [2024-11-26 21:07:13.735714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.051 [2024-11-26 21:07:13.735740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.051 qpair failed and we were unable to recover it. 00:26:23.051 [2024-11-26 21:07:13.735852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.051 [2024-11-26 21:07:13.735878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.051 qpair failed and we were unable to recover it. 00:26:23.051 [2024-11-26 21:07:13.736012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.051 [2024-11-26 21:07:13.736037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.051 qpair failed and we were unable to recover it. 
00:26:23.051 [2024-11-26 21:07:13.736166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.051 [2024-11-26 21:07:13.736192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.051 qpair failed and we were unable to recover it. 00:26:23.051 [2024-11-26 21:07:13.736325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.051 [2024-11-26 21:07:13.736351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.051 qpair failed and we were unable to recover it. 00:26:23.051 [2024-11-26 21:07:13.736480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.051 [2024-11-26 21:07:13.736505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.051 qpair failed and we were unable to recover it. 00:26:23.051 [2024-11-26 21:07:13.736620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.051 [2024-11-26 21:07:13.736648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.051 qpair failed and we were unable to recover it. 00:26:23.051 [2024-11-26 21:07:13.736793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.052 [2024-11-26 21:07:13.736820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.052 qpair failed and we were unable to recover it. 
00:26:23.052 [2024-11-26 21:07:13.736998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.052 [2024-11-26 21:07:13.737042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.052 qpair failed and we were unable to recover it. 00:26:23.052 [2024-11-26 21:07:13.737205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.052 [2024-11-26 21:07:13.737233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.052 qpair failed and we were unable to recover it. 00:26:23.052 [2024-11-26 21:07:13.737375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.052 [2024-11-26 21:07:13.737401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.052 qpair failed and we were unable to recover it. 00:26:23.052 [2024-11-26 21:07:13.737550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.052 [2024-11-26 21:07:13.737576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.052 qpair failed and we were unable to recover it. 00:26:23.052 [2024-11-26 21:07:13.737737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.052 [2024-11-26 21:07:13.737763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.052 qpair failed and we were unable to recover it. 
00:26:23.052 [2024-11-26 21:07:13.737902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.052 [2024-11-26 21:07:13.737928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.052 qpair failed and we were unable to recover it. 00:26:23.052 [2024-11-26 21:07:13.738026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.052 [2024-11-26 21:07:13.738052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.052 qpair failed and we were unable to recover it. 00:26:23.052 [2024-11-26 21:07:13.738187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.052 [2024-11-26 21:07:13.738213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.052 qpair failed and we were unable to recover it. 00:26:23.052 [2024-11-26 21:07:13.738326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.052 [2024-11-26 21:07:13.738352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.052 qpair failed and we were unable to recover it. 00:26:23.052 [2024-11-26 21:07:13.738486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.052 [2024-11-26 21:07:13.738513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.052 qpair failed and we were unable to recover it. 
00:26:23.052 [2024-11-26 21:07:13.738618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.052 [2024-11-26 21:07:13.738646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.052 qpair failed and we were unable to recover it. 00:26:23.052 [2024-11-26 21:07:13.738756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.052 [2024-11-26 21:07:13.738783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.052 qpair failed and we were unable to recover it. 00:26:23.052 [2024-11-26 21:07:13.738946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.052 [2024-11-26 21:07:13.738990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.052 qpair failed and we were unable to recover it. 00:26:23.052 [2024-11-26 21:07:13.739144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.052 [2024-11-26 21:07:13.739189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.052 qpair failed and we were unable to recover it. 00:26:23.052 [2024-11-26 21:07:13.739376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.052 [2024-11-26 21:07:13.739420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.052 qpair failed and we were unable to recover it. 
00:26:23.052 [2024-11-26 21:07:13.739588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.052 [2024-11-26 21:07:13.739614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.052 qpair failed and we were unable to recover it. 00:26:23.052 [2024-11-26 21:07:13.739771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.052 [2024-11-26 21:07:13.739802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.052 qpair failed and we were unable to recover it. 00:26:23.052 [2024-11-26 21:07:13.739927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.052 [2024-11-26 21:07:13.739954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.052 qpair failed and we were unable to recover it. 00:26:23.052 [2024-11-26 21:07:13.740117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.052 [2024-11-26 21:07:13.740146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.052 qpair failed and we were unable to recover it. 00:26:23.052 [2024-11-26 21:07:13.740324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.052 [2024-11-26 21:07:13.740354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.052 qpair failed and we were unable to recover it. 
00:26:23.052 [2024-11-26 21:07:13.740526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.052 [2024-11-26 21:07:13.740555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.052 qpair failed and we were unable to recover it. 00:26:23.052 [2024-11-26 21:07:13.740729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.052 [2024-11-26 21:07:13.740756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.052 qpair failed and we were unable to recover it. 00:26:23.052 [2024-11-26 21:07:13.740887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.052 [2024-11-26 21:07:13.740916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.052 qpair failed and we were unable to recover it. 00:26:23.052 [2024-11-26 21:07:13.741061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.052 [2024-11-26 21:07:13.741089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.052 qpair failed and we were unable to recover it. 00:26:23.052 [2024-11-26 21:07:13.741236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.052 [2024-11-26 21:07:13.741266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.052 qpair failed and we were unable to recover it. 
00:26:23.052 [2024-11-26 21:07:13.741414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.052 [2024-11-26 21:07:13.741443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.052 qpair failed and we were unable to recover it. 00:26:23.052 [2024-11-26 21:07:13.741615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.052 [2024-11-26 21:07:13.741644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.052 qpair failed and we were unable to recover it. 00:26:23.052 [2024-11-26 21:07:13.741800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.052 [2024-11-26 21:07:13.741826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.052 qpair failed and we were unable to recover it. 00:26:23.052 [2024-11-26 21:07:13.741978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.052 [2024-11-26 21:07:13.742028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.052 qpair failed and we were unable to recover it. 00:26:23.052 [2024-11-26 21:07:13.742182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.052 [2024-11-26 21:07:13.742227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.052 qpair failed and we were unable to recover it. 
00:26:23.052 [2024-11-26 21:07:13.742377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.052 [2024-11-26 21:07:13.742421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.052 qpair failed and we were unable to recover it. 00:26:23.052 [2024-11-26 21:07:13.742596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.052 [2024-11-26 21:07:13.742622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.052 qpair failed and we were unable to recover it. 00:26:23.052 [2024-11-26 21:07:13.742761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.052 [2024-11-26 21:07:13.742805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:23.052 qpair failed and we were unable to recover it. 00:26:23.052 [2024-11-26 21:07:13.742974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.052 [2024-11-26 21:07:13.743003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:23.052 qpair failed and we were unable to recover it. 00:26:23.052 [2024-11-26 21:07:13.743192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.052 [2024-11-26 21:07:13.743249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:23.052 qpair failed and we were unable to recover it. 
00:26:23.052 [2024-11-26 21:07:13.743416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.052 [2024-11-26 21:07:13.743464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:23.052 qpair failed and we were unable to recover it. 00:26:23.052 [2024-11-26 21:07:13.743638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.052 [2024-11-26 21:07:13.743667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:23.052 qpair failed and we were unable to recover it. 00:26:23.052 [2024-11-26 21:07:13.743843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.052 [2024-11-26 21:07:13.743882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.052 qpair failed and we were unable to recover it. 00:26:23.053 [2024-11-26 21:07:13.744034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.053 [2024-11-26 21:07:13.744093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.053 qpair failed and we were unable to recover it. 00:26:23.053 [2024-11-26 21:07:13.744272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.053 [2024-11-26 21:07:13.744301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.053 qpair failed and we were unable to recover it. 
00:26:23.053 [2024-11-26 21:07:13.744472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.053 [2024-11-26 21:07:13.744533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.053 qpair failed and we were unable to recover it. 00:26:23.053 [2024-11-26 21:07:13.744702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.053 [2024-11-26 21:07:13.744750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.053 qpair failed and we were unable to recover it. 00:26:23.053 [2024-11-26 21:07:13.744916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.053 [2024-11-26 21:07:13.744964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.053 qpair failed and we were unable to recover it. 00:26:23.053 [2024-11-26 21:07:13.745146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.053 [2024-11-26 21:07:13.745190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.053 qpair failed and we were unable to recover it. 00:26:23.053 [2024-11-26 21:07:13.745385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.053 [2024-11-26 21:07:13.745435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.053 qpair failed and we were unable to recover it. 
00:26:23.053 [2024-11-26 21:07:13.745568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.053 [2024-11-26 21:07:13.745594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.053 qpair failed and we were unable to recover it. 00:26:23.053 [2024-11-26 21:07:13.745758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.053 [2024-11-26 21:07:13.745785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.053 qpair failed and we were unable to recover it. 00:26:23.053 [2024-11-26 21:07:13.745888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.053 [2024-11-26 21:07:13.745914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.053 qpair failed and we were unable to recover it. 00:26:23.053 [2024-11-26 21:07:13.746126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.053 [2024-11-26 21:07:13.746152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.053 qpair failed and we were unable to recover it. 00:26:23.053 [2024-11-26 21:07:13.746255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.053 [2024-11-26 21:07:13.746281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.053 qpair failed and we were unable to recover it. 
00:26:23.053 [2024-11-26 21:07:13.746411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.053 [2024-11-26 21:07:13.746437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.053 qpair failed and we were unable to recover it. 00:26:23.053 [2024-11-26 21:07:13.746596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.053 [2024-11-26 21:07:13.746622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.053 qpair failed and we were unable to recover it. 00:26:23.053 [2024-11-26 21:07:13.746726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.053 [2024-11-26 21:07:13.746753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.053 qpair failed and we were unable to recover it. 00:26:23.053 [2024-11-26 21:07:13.746898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.053 [2024-11-26 21:07:13.746928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.053 qpair failed and we were unable to recover it. 00:26:23.053 [2024-11-26 21:07:13.747088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.053 [2024-11-26 21:07:13.747115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.053 qpair failed and we were unable to recover it. 
00:26:23.053 [2024-11-26 21:07:13.747255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.053 [2024-11-26 21:07:13.747282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.053 qpair failed and we were unable to recover it. 00:26:23.053 [2024-11-26 21:07:13.747446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.053 [2024-11-26 21:07:13.747475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.053 qpair failed and we were unable to recover it. 00:26:23.053 [2024-11-26 21:07:13.747625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.053 [2024-11-26 21:07:13.747654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.053 qpair failed and we were unable to recover it. 00:26:23.053 [2024-11-26 21:07:13.747848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.053 [2024-11-26 21:07:13.747874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.053 qpair failed and we were unable to recover it. 00:26:23.053 [2024-11-26 21:07:13.747996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.053 [2024-11-26 21:07:13.748025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.053 qpair failed and we were unable to recover it. 
00:26:23.053 [2024-11-26 21:07:13.748158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.053 [2024-11-26 21:07:13.748184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.053 qpair failed and we were unable to recover it. 00:26:23.053 [2024-11-26 21:07:13.748372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.053 [2024-11-26 21:07:13.748401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.053 qpair failed and we were unable to recover it. 00:26:23.053 [2024-11-26 21:07:13.748533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.053 [2024-11-26 21:07:13.748560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.053 qpair failed and we were unable to recover it. 00:26:23.053 [2024-11-26 21:07:13.748664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.053 [2024-11-26 21:07:13.748700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.053 qpair failed and we were unable to recover it. 00:26:23.053 [2024-11-26 21:07:13.748832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.053 [2024-11-26 21:07:13.748858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.053 qpair failed and we were unable to recover it. 
00:26:23.053 [2024-11-26 21:07:13.748986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.053 [2024-11-26 21:07:13.749016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.053 qpair failed and we were unable to recover it. 00:26:23.053 [2024-11-26 21:07:13.749164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.053 [2024-11-26 21:07:13.749192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.053 qpair failed and we were unable to recover it. 00:26:23.053 [2024-11-26 21:07:13.749343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.053 [2024-11-26 21:07:13.749372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.053 qpair failed and we were unable to recover it. 00:26:23.053 [2024-11-26 21:07:13.749543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.053 [2024-11-26 21:07:13.749593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.053 qpair failed and we were unable to recover it. 00:26:23.053 [2024-11-26 21:07:13.749762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.053 [2024-11-26 21:07:13.749789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.053 qpair failed and we were unable to recover it. 
00:26:23.053 [2024-11-26 21:07:13.749915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.053 [2024-11-26 21:07:13.749958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.053 qpair failed and we were unable to recover it. 00:26:23.053 [2024-11-26 21:07:13.750122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.053 [2024-11-26 21:07:13.750148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.053 qpair failed and we were unable to recover it. 00:26:23.053 [2024-11-26 21:07:13.750293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.053 [2024-11-26 21:07:13.750322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.053 qpair failed and we were unable to recover it. 00:26:23.053 [2024-11-26 21:07:13.750469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.053 [2024-11-26 21:07:13.750496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.053 qpair failed and we were unable to recover it. 00:26:23.053 [2024-11-26 21:07:13.750657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.053 [2024-11-26 21:07:13.750684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.053 qpair failed and we were unable to recover it. 
00:26:23.053 [2024-11-26 21:07:13.750863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.053 [2024-11-26 21:07:13.750890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.053 qpair failed and we were unable to recover it. 00:26:23.054 [2024-11-26 21:07:13.751075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.054 [2024-11-26 21:07:13.751104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.054 qpair failed and we were unable to recover it. 00:26:23.054 [2024-11-26 21:07:13.751340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.054 [2024-11-26 21:07:13.751388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.054 qpair failed and we were unable to recover it. 00:26:23.054 [2024-11-26 21:07:13.751522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.054 [2024-11-26 21:07:13.751548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.054 qpair failed and we were unable to recover it. 00:26:23.054 [2024-11-26 21:07:13.751675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.054 [2024-11-26 21:07:13.751710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.054 qpair failed and we were unable to recover it. 
00:26:23.054 [2024-11-26 21:07:13.751838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.054 [2024-11-26 21:07:13.751882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.054 qpair failed and we were unable to recover it. 00:26:23.054 [2024-11-26 21:07:13.752014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.054 [2024-11-26 21:07:13.752040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.054 qpair failed and we were unable to recover it. 00:26:23.054 [2024-11-26 21:07:13.752180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.054 [2024-11-26 21:07:13.752207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.054 qpair failed and we were unable to recover it. 00:26:23.054 [2024-11-26 21:07:13.752348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.054 [2024-11-26 21:07:13.752376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.054 qpair failed and we were unable to recover it. 00:26:23.054 [2024-11-26 21:07:13.752488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.054 [2024-11-26 21:07:13.752514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.054 qpair failed and we were unable to recover it. 
00:26:23.054 [2024-11-26 21:07:13.752632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.054 [2024-11-26 21:07:13.752658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.054 qpair failed and we were unable to recover it. 00:26:23.054 [2024-11-26 21:07:13.752823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.054 [2024-11-26 21:07:13.752852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.054 qpair failed and we were unable to recover it. 00:26:23.054 [2024-11-26 21:07:13.752999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.054 [2024-11-26 21:07:13.753028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.054 qpair failed and we were unable to recover it. 00:26:23.054 [2024-11-26 21:07:13.753200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.054 [2024-11-26 21:07:13.753229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.054 qpair failed and we were unable to recover it. 00:26:23.054 [2024-11-26 21:07:13.753454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.054 [2024-11-26 21:07:13.753482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.054 qpair failed and we were unable to recover it. 
00:26:23.054 [2024-11-26 21:07:13.753628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.054 [2024-11-26 21:07:13.753657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.054 qpair failed and we were unable to recover it. 00:26:23.054 [2024-11-26 21:07:13.753818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.054 [2024-11-26 21:07:13.753845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.054 qpair failed and we were unable to recover it. 00:26:23.054 [2024-11-26 21:07:13.753962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.054 [2024-11-26 21:07:13.753991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.054 qpair failed and we were unable to recover it. 00:26:23.054 [2024-11-26 21:07:13.754106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.054 [2024-11-26 21:07:13.754135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.054 qpair failed and we were unable to recover it. 00:26:23.054 [2024-11-26 21:07:13.754282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.054 [2024-11-26 21:07:13.754311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.054 qpair failed and we were unable to recover it. 
00:26:23.054 [2024-11-26 21:07:13.754434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.054 [2024-11-26 21:07:13.754463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.054 qpair failed and we were unable to recover it. 00:26:23.054 [2024-11-26 21:07:13.754613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.054 [2024-11-26 21:07:13.754642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.054 qpair failed and we were unable to recover it. 00:26:23.054 [2024-11-26 21:07:13.754798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.054 [2024-11-26 21:07:13.754824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.054 qpair failed and we were unable to recover it. 00:26:23.054 [2024-11-26 21:07:13.754985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.054 [2024-11-26 21:07:13.755011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.054 qpair failed and we were unable to recover it. 00:26:23.054 [2024-11-26 21:07:13.755181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.054 [2024-11-26 21:07:13.755209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.054 qpair failed and we were unable to recover it. 
00:26:23.054 [2024-11-26 21:07:13.755359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.054 [2024-11-26 21:07:13.755387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.054 qpair failed and we were unable to recover it. 00:26:23.054 [2024-11-26 21:07:13.755539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.054 [2024-11-26 21:07:13.755568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.054 qpair failed and we were unable to recover it. 00:26:23.054 [2024-11-26 21:07:13.755736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.054 [2024-11-26 21:07:13.755762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.054 qpair failed and we were unable to recover it. 00:26:23.054 [2024-11-26 21:07:13.755864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.054 [2024-11-26 21:07:13.755890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.054 qpair failed and we were unable to recover it. 00:26:23.054 [2024-11-26 21:07:13.756041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.054 [2024-11-26 21:07:13.756089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.054 qpair failed and we were unable to recover it. 
00:26:23.054 [2024-11-26 21:07:13.756243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.054 [2024-11-26 21:07:13.756287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.054 qpair failed and we were unable to recover it. 00:26:23.054 [2024-11-26 21:07:13.756458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.054 [2024-11-26 21:07:13.756501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.054 qpair failed and we were unable to recover it. 00:26:23.054 [2024-11-26 21:07:13.756632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.054 [2024-11-26 21:07:13.756658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.054 qpair failed and we were unable to recover it. 00:26:23.054 [2024-11-26 21:07:13.756780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.054 [2024-11-26 21:07:13.756812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.054 qpair failed and we were unable to recover it. 00:26:23.054 [2024-11-26 21:07:13.756967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.054 [2024-11-26 21:07:13.757011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.054 qpair failed and we were unable to recover it. 
00:26:23.054 [2024-11-26 21:07:13.757188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.054 [2024-11-26 21:07:13.757236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.054 qpair failed and we were unable to recover it.
00:26:23.054 [2024-11-26 21:07:13.757393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.054 [2024-11-26 21:07:13.757437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.054 qpair failed and we were unable to recover it.
00:26:23.054 [2024-11-26 21:07:13.757550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.054 [2024-11-26 21:07:13.757577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.054 qpair failed and we were unable to recover it.
00:26:23.054 [2024-11-26 21:07:13.757734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.055 [2024-11-26 21:07:13.757765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.055 qpair failed and we were unable to recover it.
00:26:23.055 [2024-11-26 21:07:13.757940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.055 [2024-11-26 21:07:13.757968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.055 qpair failed and we were unable to recover it.
00:26:23.055 [2024-11-26 21:07:13.758083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.055 [2024-11-26 21:07:13.758111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.055 qpair failed and we were unable to recover it.
00:26:23.055 [2024-11-26 21:07:13.758229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.055 [2024-11-26 21:07:13.758259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.055 qpair failed and we were unable to recover it.
00:26:23.055 [2024-11-26 21:07:13.758408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.055 [2024-11-26 21:07:13.758438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.055 qpair failed and we were unable to recover it.
00:26:23.055 [2024-11-26 21:07:13.758580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.055 [2024-11-26 21:07:13.758609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.055 qpair failed and we were unable to recover it.
00:26:23.055 [2024-11-26 21:07:13.758767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.055 [2024-11-26 21:07:13.758795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.055 qpair failed and we were unable to recover it.
00:26:23.055 [2024-11-26 21:07:13.758952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.055 [2024-11-26 21:07:13.758996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.055 qpair failed and we were unable to recover it.
00:26:23.055 [2024-11-26 21:07:13.759154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.055 [2024-11-26 21:07:13.759197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.055 qpair failed and we were unable to recover it.
00:26:23.055 [2024-11-26 21:07:13.759386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.055 [2024-11-26 21:07:13.759429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.055 qpair failed and we were unable to recover it.
00:26:23.055 [2024-11-26 21:07:13.759589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.055 [2024-11-26 21:07:13.759614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.055 qpair failed and we were unable to recover it.
00:26:23.055 [2024-11-26 21:07:13.759773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.055 [2024-11-26 21:07:13.759827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.055 qpair failed and we were unable to recover it.
00:26:23.055 [2024-11-26 21:07:13.759960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.055 [2024-11-26 21:07:13.760004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.055 qpair failed and we were unable to recover it.
00:26:23.055 [2024-11-26 21:07:13.760160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.055 [2024-11-26 21:07:13.760203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.055 qpair failed and we were unable to recover it.
00:26:23.055 [2024-11-26 21:07:13.760329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.055 [2024-11-26 21:07:13.760371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.055 qpair failed and we were unable to recover it.
00:26:23.055 [2024-11-26 21:07:13.760537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.055 [2024-11-26 21:07:13.760563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.055 qpair failed and we were unable to recover it.
00:26:23.055 [2024-11-26 21:07:13.760664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.055 [2024-11-26 21:07:13.760701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.055 qpair failed and we were unable to recover it.
00:26:23.055 [2024-11-26 21:07:13.760864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.055 [2024-11-26 21:07:13.760908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.055 qpair failed and we were unable to recover it.
00:26:23.055 [2024-11-26 21:07:13.761039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.055 [2024-11-26 21:07:13.761083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.055 qpair failed and we were unable to recover it.
00:26:23.055 [2024-11-26 21:07:13.761206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.055 [2024-11-26 21:07:13.761249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.055 qpair failed and we were unable to recover it.
00:26:23.055 [2024-11-26 21:07:13.761362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.055 [2024-11-26 21:07:13.761387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.055 qpair failed and we were unable to recover it.
00:26:23.055 [2024-11-26 21:07:13.761525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.055 [2024-11-26 21:07:13.761552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.055 qpair failed and we were unable to recover it.
00:26:23.055 [2024-11-26 21:07:13.761697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.055 [2024-11-26 21:07:13.761724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.055 qpair failed and we were unable to recover it.
00:26:23.055 [2024-11-26 21:07:13.761885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.055 [2024-11-26 21:07:13.761910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.055 qpair failed and we were unable to recover it.
00:26:23.055 [2024-11-26 21:07:13.762039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.055 [2024-11-26 21:07:13.762066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.055 qpair failed and we were unable to recover it.
00:26:23.055 [2024-11-26 21:07:13.762205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.055 [2024-11-26 21:07:13.762230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.055 qpair failed and we were unable to recover it.
00:26:23.055 [2024-11-26 21:07:13.762336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.055 [2024-11-26 21:07:13.762362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.055 qpair failed and we were unable to recover it.
00:26:23.055 [2024-11-26 21:07:13.762502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.055 [2024-11-26 21:07:13.762527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.055 qpair failed and we were unable to recover it.
00:26:23.055 [2024-11-26 21:07:13.762661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.055 [2024-11-26 21:07:13.762694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.055 qpair failed and we were unable to recover it.
00:26:23.055 [2024-11-26 21:07:13.762807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.055 [2024-11-26 21:07:13.762834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.055 qpair failed and we were unable to recover it.
00:26:23.055 [2024-11-26 21:07:13.762983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.055 [2024-11-26 21:07:13.763026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.055 qpair failed and we were unable to recover it.
00:26:23.055 [2024-11-26 21:07:13.763175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.055 [2024-11-26 21:07:13.763218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.055 qpair failed and we were unable to recover it.
00:26:23.055 [2024-11-26 21:07:13.763350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.055 [2024-11-26 21:07:13.763376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.055 qpair failed and we were unable to recover it.
00:26:23.055 [2024-11-26 21:07:13.763490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.055 [2024-11-26 21:07:13.763515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.055 qpair failed and we were unable to recover it.
00:26:23.055 [2024-11-26 21:07:13.763649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.055 [2024-11-26 21:07:13.763676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.055 qpair failed and we were unable to recover it.
00:26:23.055 [2024-11-26 21:07:13.763847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.055 [2024-11-26 21:07:13.763896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.055 qpair failed and we were unable to recover it.
00:26:23.055 [2024-11-26 21:07:13.764026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.055 [2024-11-26 21:07:13.764068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.055 qpair failed and we were unable to recover it.
00:26:23.055 [2024-11-26 21:07:13.764231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.055 [2024-11-26 21:07:13.764256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.055 qpair failed and we were unable to recover it.
00:26:23.055 [2024-11-26 21:07:13.764391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.056 [2024-11-26 21:07:13.764416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.056 qpair failed and we were unable to recover it.
00:26:23.056 [2024-11-26 21:07:13.764519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.056 [2024-11-26 21:07:13.764545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.056 qpair failed and we were unable to recover it.
00:26:23.056 [2024-11-26 21:07:13.764709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.056 [2024-11-26 21:07:13.764736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.056 qpair failed and we were unable to recover it.
00:26:23.056 [2024-11-26 21:07:13.764893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.056 [2024-11-26 21:07:13.764936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.056 qpair failed and we were unable to recover it.
00:26:23.056 [2024-11-26 21:07:13.765119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.056 [2024-11-26 21:07:13.765162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.056 qpair failed and we were unable to recover it.
00:26:23.056 [2024-11-26 21:07:13.765297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.056 [2024-11-26 21:07:13.765322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.056 qpair failed and we were unable to recover it.
00:26:23.056 [2024-11-26 21:07:13.765436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.056 [2024-11-26 21:07:13.765462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.056 qpair failed and we were unable to recover it.
00:26:23.056 [2024-11-26 21:07:13.765620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.056 [2024-11-26 21:07:13.765646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.056 qpair failed and we were unable to recover it.
00:26:23.056 [2024-11-26 21:07:13.765800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.056 [2024-11-26 21:07:13.765844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.056 qpair failed and we were unable to recover it.
00:26:23.056 [2024-11-26 21:07:13.765999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.056 [2024-11-26 21:07:13.766042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.056 qpair failed and we were unable to recover it.
00:26:23.056 [2024-11-26 21:07:13.766197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.056 [2024-11-26 21:07:13.766240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.056 qpair failed and we were unable to recover it.
00:26:23.056 [2024-11-26 21:07:13.766374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.056 [2024-11-26 21:07:13.766399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.056 qpair failed and we were unable to recover it.
00:26:23.056 [2024-11-26 21:07:13.766533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.056 [2024-11-26 21:07:13.766560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.056 qpair failed and we were unable to recover it.
00:26:23.056 [2024-11-26 21:07:13.766699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.056 [2024-11-26 21:07:13.766726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.056 qpair failed and we were unable to recover it.
00:26:23.056 [2024-11-26 21:07:13.766911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.056 [2024-11-26 21:07:13.766958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.056 qpair failed and we were unable to recover it.
00:26:23.056 [2024-11-26 21:07:13.767117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.056 [2024-11-26 21:07:13.767159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.056 qpair failed and we were unable to recover it.
00:26:23.056 [2024-11-26 21:07:13.767289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.056 [2024-11-26 21:07:13.767314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.056 qpair failed and we were unable to recover it.
00:26:23.056 [2024-11-26 21:07:13.767422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.056 [2024-11-26 21:07:13.767449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.056 qpair failed and we were unable to recover it.
00:26:23.056 [2024-11-26 21:07:13.767590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.056 [2024-11-26 21:07:13.767615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.056 qpair failed and we were unable to recover it.
00:26:23.056 [2024-11-26 21:07:13.767767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.056 [2024-11-26 21:07:13.767813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.056 qpair failed and we were unable to recover it.
00:26:23.056 [2024-11-26 21:07:13.767966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.056 [2024-11-26 21:07:13.768010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.056 qpair failed and we were unable to recover it.
00:26:23.056 [2024-11-26 21:07:13.768118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.056 [2024-11-26 21:07:13.768144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.056 qpair failed and we were unable to recover it.
00:26:23.056 [2024-11-26 21:07:13.768261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.056 [2024-11-26 21:07:13.768287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.056 qpair failed and we were unable to recover it.
00:26:23.056 [2024-11-26 21:07:13.768447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.056 [2024-11-26 21:07:13.768472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.056 qpair failed and we were unable to recover it.
00:26:23.056 [2024-11-26 21:07:13.768587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.056 [2024-11-26 21:07:13.768613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.056 qpair failed and we were unable to recover it.
00:26:23.056 [2024-11-26 21:07:13.768749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.056 [2024-11-26 21:07:13.768778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.056 qpair failed and we were unable to recover it.
00:26:23.056 [2024-11-26 21:07:13.768933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.056 [2024-11-26 21:07:13.768958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.056 qpair failed and we were unable to recover it.
00:26:23.056 [2024-11-26 21:07:13.769100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.056 [2024-11-26 21:07:13.769126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.056 qpair failed and we were unable to recover it.
00:26:23.056 [2024-11-26 21:07:13.769253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.056 [2024-11-26 21:07:13.769279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.056 qpair failed and we were unable to recover it.
00:26:23.056 [2024-11-26 21:07:13.769417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.056 [2024-11-26 21:07:13.769442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.056 qpair failed and we were unable to recover it.
00:26:23.056 [2024-11-26 21:07:13.769572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.056 [2024-11-26 21:07:13.769598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.056 qpair failed and we were unable to recover it.
00:26:23.056 [2024-11-26 21:07:13.769751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.056 [2024-11-26 21:07:13.769781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.056 qpair failed and we were unable to recover it.
00:26:23.056 [2024-11-26 21:07:13.769976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.056 [2024-11-26 21:07:13.770018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.056 qpair failed and we were unable to recover it.
00:26:23.056 [2024-11-26 21:07:13.770183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.056 [2024-11-26 21:07:13.770226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.056 qpair failed and we were unable to recover it.
00:26:23.056 [2024-11-26 21:07:13.770363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.056 [2024-11-26 21:07:13.770389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.057 qpair failed and we were unable to recover it.
00:26:23.057 [2024-11-26 21:07:13.770526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.057 [2024-11-26 21:07:13.770552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.057 qpair failed and we were unable to recover it.
00:26:23.057 [2024-11-26 21:07:13.770676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.057 [2024-11-26 21:07:13.770708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.057 qpair failed and we were unable to recover it.
00:26:23.057 [2024-11-26 21:07:13.770836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.057 [2024-11-26 21:07:13.770886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.057 qpair failed and we were unable to recover it.
00:26:23.057 [2024-11-26 21:07:13.771042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.057 [2024-11-26 21:07:13.771087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.057 qpair failed and we were unable to recover it.
00:26:23.057 [2024-11-26 21:07:13.771193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.057 [2024-11-26 21:07:13.771218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.057 qpair failed and we were unable to recover it.
00:26:23.057 [2024-11-26 21:07:13.771323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.057 [2024-11-26 21:07:13.771350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.057 qpair failed and we were unable to recover it.
00:26:23.057 [2024-11-26 21:07:13.771509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.057 [2024-11-26 21:07:13.771534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.057 qpair failed and we were unable to recover it.
00:26:23.057 [2024-11-26 21:07:13.771661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.057 [2024-11-26 21:07:13.771692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.057 qpair failed and we were unable to recover it.
00:26:23.057 [2024-11-26 21:07:13.771852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.057 [2024-11-26 21:07:13.771878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.057 qpair failed and we were unable to recover it.
00:26:23.057 [2024-11-26 21:07:13.771987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.057 [2024-11-26 21:07:13.772012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.057 qpair failed and we were unable to recover it.
00:26:23.057 [2024-11-26 21:07:13.772119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.057 [2024-11-26 21:07:13.772145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.057 qpair failed and we were unable to recover it.
00:26:23.057 [2024-11-26 21:07:13.772253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.057 [2024-11-26 21:07:13.772279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.057 qpair failed and we were unable to recover it.
00:26:23.057 [2024-11-26 21:07:13.772429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.057 [2024-11-26 21:07:13.772455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.057 qpair failed and we were unable to recover it.
00:26:23.057 [2024-11-26 21:07:13.772614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.057 [2024-11-26 21:07:13.772640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.057 qpair failed and we were unable to recover it.
00:26:23.057 [2024-11-26 21:07:13.772804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.057 [2024-11-26 21:07:13.772848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.057 qpair failed and we were unable to recover it.
00:26:23.057 [2024-11-26 21:07:13.773036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.057 [2024-11-26 21:07:13.773064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.057 qpair failed and we were unable to recover it.
00:26:23.057 [2024-11-26 21:07:13.773226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.057 [2024-11-26 21:07:13.773269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.057 qpair failed and we were unable to recover it.
00:26:23.057 [2024-11-26 21:07:13.773384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.057 [2024-11-26 21:07:13.773412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.057 qpair failed and we were unable to recover it.
00:26:23.057 [2024-11-26 21:07:13.773569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.057 [2024-11-26 21:07:13.773594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.057 qpair failed and we were unable to recover it.
00:26:23.057 [2024-11-26 21:07:13.773730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.057 [2024-11-26 21:07:13.773757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.057 qpair failed and we were unable to recover it. 00:26:23.057 [2024-11-26 21:07:13.773940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.057 [2024-11-26 21:07:13.773982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.057 qpair failed and we were unable to recover it. 00:26:23.057 [2024-11-26 21:07:13.774141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.057 [2024-11-26 21:07:13.774183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.057 qpair failed and we were unable to recover it. 00:26:23.057 [2024-11-26 21:07:13.774320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.057 [2024-11-26 21:07:13.774346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.057 qpair failed and we were unable to recover it. 00:26:23.057 [2024-11-26 21:07:13.774479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.057 [2024-11-26 21:07:13.774505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.057 qpair failed and we were unable to recover it. 
00:26:23.057 [2024-11-26 21:07:13.774609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.057 [2024-11-26 21:07:13.774635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.057 qpair failed and we were unable to recover it. 00:26:23.057 [2024-11-26 21:07:13.774786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.057 [2024-11-26 21:07:13.774831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.057 qpair failed and we were unable to recover it. 00:26:23.057 [2024-11-26 21:07:13.774955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.057 [2024-11-26 21:07:13.774984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.057 qpair failed and we were unable to recover it. 00:26:23.057 [2024-11-26 21:07:13.775150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.057 [2024-11-26 21:07:13.775192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.057 qpair failed and we were unable to recover it. 00:26:23.057 [2024-11-26 21:07:13.775325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.057 [2024-11-26 21:07:13.775350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.057 qpair failed and we were unable to recover it. 
00:26:23.057 [2024-11-26 21:07:13.775468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.057 [2024-11-26 21:07:13.775495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.057 qpair failed and we were unable to recover it. 00:26:23.057 [2024-11-26 21:07:13.775658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.057 [2024-11-26 21:07:13.775684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.057 qpair failed and we were unable to recover it. 00:26:23.057 [2024-11-26 21:07:13.775845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.057 [2024-11-26 21:07:13.775888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.057 qpair failed and we were unable to recover it. 00:26:23.057 [2024-11-26 21:07:13.776071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.057 [2024-11-26 21:07:13.776115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.057 qpair failed and we were unable to recover it. 00:26:23.057 [2024-11-26 21:07:13.776280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.057 [2024-11-26 21:07:13.776306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.057 qpair failed and we were unable to recover it. 
00:26:23.057 [2024-11-26 21:07:13.776469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.057 [2024-11-26 21:07:13.776495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.057 qpair failed and we were unable to recover it. 00:26:23.057 [2024-11-26 21:07:13.776626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.057 [2024-11-26 21:07:13.776652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.057 qpair failed and we were unable to recover it. 00:26:23.057 [2024-11-26 21:07:13.776792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.057 [2024-11-26 21:07:13.776822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.057 qpair failed and we were unable to recover it. 00:26:23.057 [2024-11-26 21:07:13.776997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.057 [2024-11-26 21:07:13.777042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.058 qpair failed and we were unable to recover it. 00:26:23.058 [2024-11-26 21:07:13.777238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.058 [2024-11-26 21:07:13.777282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.058 qpair failed and we were unable to recover it. 
00:26:23.058 [2024-11-26 21:07:13.777390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.058 [2024-11-26 21:07:13.777415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.058 qpair failed and we were unable to recover it. 00:26:23.058 [2024-11-26 21:07:13.777530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.058 [2024-11-26 21:07:13.777556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.058 qpair failed and we were unable to recover it. 00:26:23.058 [2024-11-26 21:07:13.777699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.058 [2024-11-26 21:07:13.777724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.058 qpair failed and we were unable to recover it. 00:26:23.058 [2024-11-26 21:07:13.777852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.058 [2024-11-26 21:07:13.777899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.058 qpair failed and we were unable to recover it. 00:26:23.058 [2024-11-26 21:07:13.778036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.058 [2024-11-26 21:07:13.778080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.058 qpair failed and we were unable to recover it. 
00:26:23.058 [2024-11-26 21:07:13.778214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.058 [2024-11-26 21:07:13.778240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.058 qpair failed and we were unable to recover it. 00:26:23.058 [2024-11-26 21:07:13.778375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.058 [2024-11-26 21:07:13.778400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.058 qpair failed and we were unable to recover it. 00:26:23.058 [2024-11-26 21:07:13.778545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.058 [2024-11-26 21:07:13.778571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.058 qpair failed and we were unable to recover it. 00:26:23.058 [2024-11-26 21:07:13.778730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.058 [2024-11-26 21:07:13.778756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.058 qpair failed and we were unable to recover it. 00:26:23.058 [2024-11-26 21:07:13.778877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.058 [2024-11-26 21:07:13.778906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.058 qpair failed and we were unable to recover it. 
00:26:23.058 [2024-11-26 21:07:13.779091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.058 [2024-11-26 21:07:13.779135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.058 qpair failed and we were unable to recover it. 00:26:23.058 [2024-11-26 21:07:13.779267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.058 [2024-11-26 21:07:13.779292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.058 qpair failed and we were unable to recover it. 00:26:23.058 [2024-11-26 21:07:13.779425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.058 [2024-11-26 21:07:13.779451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.058 qpair failed and we were unable to recover it. 00:26:23.058 [2024-11-26 21:07:13.779599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.058 [2024-11-26 21:07:13.779626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.058 qpair failed and we were unable to recover it. 00:26:23.058 [2024-11-26 21:07:13.779752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.058 [2024-11-26 21:07:13.779780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.058 qpair failed and we were unable to recover it. 
00:26:23.058 [2024-11-26 21:07:13.779978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.058 [2024-11-26 21:07:13.780018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.058 qpair failed and we were unable to recover it. 00:26:23.058 [2024-11-26 21:07:13.780184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.058 [2024-11-26 21:07:13.780227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.058 qpair failed and we were unable to recover it. 00:26:23.058 [2024-11-26 21:07:13.780393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.058 [2024-11-26 21:07:13.780420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.058 qpair failed and we were unable to recover it. 00:26:23.058 [2024-11-26 21:07:13.780556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.058 [2024-11-26 21:07:13.780581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.058 qpair failed and we were unable to recover it. 00:26:23.058 [2024-11-26 21:07:13.780760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.058 [2024-11-26 21:07:13.780803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.058 qpair failed and we were unable to recover it. 
00:26:23.058 [2024-11-26 21:07:13.780970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.058 [2024-11-26 21:07:13.780997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.058 qpair failed and we were unable to recover it. 00:26:23.058 [2024-11-26 21:07:13.781159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.058 [2024-11-26 21:07:13.781203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.058 qpair failed and we were unable to recover it. 00:26:23.058 [2024-11-26 21:07:13.781339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.058 [2024-11-26 21:07:13.781365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.058 qpair failed and we were unable to recover it. 00:26:23.058 [2024-11-26 21:07:13.781475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.058 [2024-11-26 21:07:13.781502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.058 qpair failed and we were unable to recover it. 00:26:23.058 [2024-11-26 21:07:13.781638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.058 [2024-11-26 21:07:13.781666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.058 qpair failed and we were unable to recover it. 
00:26:23.058 [2024-11-26 21:07:13.781791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.058 [2024-11-26 21:07:13.781816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.058 qpair failed and we were unable to recover it. 00:26:23.058 [2024-11-26 21:07:13.781977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.058 [2024-11-26 21:07:13.782004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.058 qpair failed and we were unable to recover it. 00:26:23.058 [2024-11-26 21:07:13.782128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.058 [2024-11-26 21:07:13.782154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.058 qpair failed and we were unable to recover it. 00:26:23.058 [2024-11-26 21:07:13.782294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.058 [2024-11-26 21:07:13.782320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.058 qpair failed and we were unable to recover it. 00:26:23.058 [2024-11-26 21:07:13.782452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.058 [2024-11-26 21:07:13.782479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.058 qpair failed and we were unable to recover it. 
00:26:23.058 [2024-11-26 21:07:13.782643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.058 [2024-11-26 21:07:13.782669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.058 qpair failed and we were unable to recover it. 00:26:23.058 [2024-11-26 21:07:13.782801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.058 [2024-11-26 21:07:13.782845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.058 qpair failed and we were unable to recover it. 00:26:23.058 [2024-11-26 21:07:13.783028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.058 [2024-11-26 21:07:13.783072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.058 qpair failed and we were unable to recover it. 00:26:23.058 [2024-11-26 21:07:13.783226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.058 [2024-11-26 21:07:13.783270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.058 qpair failed and we were unable to recover it. 00:26:23.058 [2024-11-26 21:07:13.783429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.058 [2024-11-26 21:07:13.783455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.058 qpair failed and we were unable to recover it. 
00:26:23.058 [2024-11-26 21:07:13.783596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.058 [2024-11-26 21:07:13.783621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.058 qpair failed and we were unable to recover it. 00:26:23.058 [2024-11-26 21:07:13.783801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.058 [2024-11-26 21:07:13.783846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.058 qpair failed and we were unable to recover it. 00:26:23.059 [2024-11-26 21:07:13.783968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.059 [2024-11-26 21:07:13.784011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.059 qpair failed and we were unable to recover it. 00:26:23.059 [2024-11-26 21:07:13.784160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.059 [2024-11-26 21:07:13.784203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.059 qpair failed and we were unable to recover it. 00:26:23.059 [2024-11-26 21:07:13.784361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.059 [2024-11-26 21:07:13.784387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.059 qpair failed and we were unable to recover it. 
00:26:23.059 [2024-11-26 21:07:13.784521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.059 [2024-11-26 21:07:13.784547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.059 qpair failed and we were unable to recover it. 00:26:23.059 [2024-11-26 21:07:13.784708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.059 [2024-11-26 21:07:13.784735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.059 qpair failed and we were unable to recover it. 00:26:23.059 [2024-11-26 21:07:13.784865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.059 [2024-11-26 21:07:13.784908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.059 qpair failed and we were unable to recover it. 00:26:23.059 [2024-11-26 21:07:13.785064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.059 [2024-11-26 21:07:13.785115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.059 qpair failed and we were unable to recover it. 00:26:23.059 [2024-11-26 21:07:13.785229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.059 [2024-11-26 21:07:13.785256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.059 qpair failed and we were unable to recover it. 
00:26:23.059 [2024-11-26 21:07:13.785395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.059 [2024-11-26 21:07:13.785420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.059 qpair failed and we were unable to recover it. 00:26:23.059 [2024-11-26 21:07:13.785521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.059 [2024-11-26 21:07:13.785548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.059 qpair failed and we were unable to recover it. 00:26:23.059 [2024-11-26 21:07:13.785649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.059 [2024-11-26 21:07:13.785675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.059 qpair failed and we were unable to recover it. 00:26:23.059 [2024-11-26 21:07:13.785841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.059 [2024-11-26 21:07:13.785870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.059 qpair failed and we were unable to recover it. 00:26:23.059 [2024-11-26 21:07:13.786015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.059 [2024-11-26 21:07:13.786058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.059 qpair failed and we were unable to recover it. 
00:26:23.059 [2024-11-26 21:07:13.786167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.059 [2024-11-26 21:07:13.786192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.059 qpair failed and we were unable to recover it. 00:26:23.059 [2024-11-26 21:07:13.786302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.059 [2024-11-26 21:07:13.786328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.059 qpair failed and we were unable to recover it. 00:26:23.059 [2024-11-26 21:07:13.786486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.059 [2024-11-26 21:07:13.786511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.059 qpair failed and we were unable to recover it. 00:26:23.059 [2024-11-26 21:07:13.786651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.059 [2024-11-26 21:07:13.786677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.059 qpair failed and we were unable to recover it. 00:26:23.059 [2024-11-26 21:07:13.786793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.059 [2024-11-26 21:07:13.786819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.059 qpair failed and we were unable to recover it. 
00:26:23.059 [2024-11-26 21:07:13.786947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.059 [2024-11-26 21:07:13.786972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.059 qpair failed and we were unable to recover it. 00:26:23.059 [2024-11-26 21:07:13.787109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.059 [2024-11-26 21:07:13.787136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.059 qpair failed and we were unable to recover it. 00:26:23.059 [2024-11-26 21:07:13.787249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.059 [2024-11-26 21:07:13.787274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.059 qpair failed and we were unable to recover it. 00:26:23.059 [2024-11-26 21:07:13.787384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.059 [2024-11-26 21:07:13.787411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.059 qpair failed and we were unable to recover it. 00:26:23.059 [2024-11-26 21:07:13.787512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.059 [2024-11-26 21:07:13.787537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.059 qpair failed and we were unable to recover it. 
00:26:23.059 [2024-11-26 21:07:13.787664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.059 [2024-11-26 21:07:13.787697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.059 qpair failed and we were unable to recover it. 00:26:23.059 [2024-11-26 21:07:13.787834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.059 [2024-11-26 21:07:13.787860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.059 qpair failed and we were unable to recover it. 00:26:23.059 [2024-11-26 21:07:13.787994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.059 [2024-11-26 21:07:13.788037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.059 qpair failed and we were unable to recover it. 00:26:23.059 [2024-11-26 21:07:13.788143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.059 [2024-11-26 21:07:13.788169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.059 qpair failed and we were unable to recover it. 00:26:23.059 [2024-11-26 21:07:13.788305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.059 [2024-11-26 21:07:13.788331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.059 qpair failed and we were unable to recover it. 
00:26:23.059 [2024-11-26 21:07:13.788444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.059 [2024-11-26 21:07:13.788469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.059 qpair failed and we were unable to recover it.
[identical posix_sock_create (connect() failed, errno = 111) / nvme_tcp_qpair_connect_sock error pairs for tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420, each followed by "qpair failed and we were unable to recover it.", repeat continuously from 21:07:13.788 through 21:07:13.806]
00:26:23.063 [2024-11-26 21:07:13.807056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.063 [2024-11-26 21:07:13.807082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.063 qpair failed and we were unable to recover it. 00:26:23.063 [2024-11-26 21:07:13.807193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.063 [2024-11-26 21:07:13.807219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.063 qpair failed and we were unable to recover it. 00:26:23.063 [2024-11-26 21:07:13.807351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.063 [2024-11-26 21:07:13.807377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.063 qpair failed and we were unable to recover it. 00:26:23.063 [2024-11-26 21:07:13.807490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.063 [2024-11-26 21:07:13.807517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.063 qpair failed and we were unable to recover it. 00:26:23.063 [2024-11-26 21:07:13.807649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.063 [2024-11-26 21:07:13.807675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.063 qpair failed and we were unable to recover it. 
00:26:23.063 [2024-11-26 21:07:13.807793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.063 [2024-11-26 21:07:13.807819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.063 qpair failed and we were unable to recover it. 00:26:23.063 [2024-11-26 21:07:13.807931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.063 [2024-11-26 21:07:13.807957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.063 qpair failed and we were unable to recover it. 00:26:23.063 [2024-11-26 21:07:13.808095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.063 [2024-11-26 21:07:13.808122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.063 qpair failed and we were unable to recover it. 00:26:23.063 [2024-11-26 21:07:13.808260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.063 [2024-11-26 21:07:13.808286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.063 qpair failed and we were unable to recover it. 00:26:23.063 [2024-11-26 21:07:13.808396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.063 [2024-11-26 21:07:13.808423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.063 qpair failed and we were unable to recover it. 
00:26:23.063 [2024-11-26 21:07:13.808573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.063 [2024-11-26 21:07:13.808612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:23.063 qpair failed and we were unable to recover it. 00:26:23.063 [2024-11-26 21:07:13.808745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.063 [2024-11-26 21:07:13.808783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.063 qpair failed and we were unable to recover it. 00:26:23.063 [2024-11-26 21:07:13.808920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.063 [2024-11-26 21:07:13.808948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.063 qpair failed and we were unable to recover it. 00:26:23.063 [2024-11-26 21:07:13.809100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.063 [2024-11-26 21:07:13.809130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.063 qpair failed and we were unable to recover it. 00:26:23.063 [2024-11-26 21:07:13.809252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.063 [2024-11-26 21:07:13.809280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.063 qpair failed and we were unable to recover it. 
00:26:23.063 [2024-11-26 21:07:13.809424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.063 [2024-11-26 21:07:13.809453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.063 qpair failed and we were unable to recover it. 00:26:23.063 [2024-11-26 21:07:13.809592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.063 [2024-11-26 21:07:13.809620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.063 qpair failed and we were unable to recover it. 00:26:23.063 [2024-11-26 21:07:13.809753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.063 [2024-11-26 21:07:13.809781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.063 qpair failed and we were unable to recover it. 00:26:23.063 [2024-11-26 21:07:13.809930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.063 [2024-11-26 21:07:13.809973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.063 qpair failed and we were unable to recover it. 00:26:23.063 [2024-11-26 21:07:13.810099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.063 [2024-11-26 21:07:13.810143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.063 qpair failed and we were unable to recover it. 
00:26:23.063 [2024-11-26 21:07:13.810291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.063 [2024-11-26 21:07:13.810334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.063 qpair failed and we were unable to recover it. 00:26:23.063 [2024-11-26 21:07:13.810486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.063 [2024-11-26 21:07:13.810513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.063 qpair failed and we were unable to recover it. 00:26:23.063 [2024-11-26 21:07:13.810622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.063 [2024-11-26 21:07:13.810649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.063 qpair failed and we were unable to recover it. 00:26:23.063 [2024-11-26 21:07:13.810806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.063 [2024-11-26 21:07:13.810842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.063 qpair failed and we were unable to recover it. 00:26:23.063 [2024-11-26 21:07:13.810964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.063 [2024-11-26 21:07:13.810992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.063 qpair failed and we were unable to recover it. 
00:26:23.063 [2024-11-26 21:07:13.811115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.063 [2024-11-26 21:07:13.811143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.063 qpair failed and we were unable to recover it. 00:26:23.063 [2024-11-26 21:07:13.811294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.063 [2024-11-26 21:07:13.811323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.063 qpair failed and we were unable to recover it. 00:26:23.063 [2024-11-26 21:07:13.811469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.063 [2024-11-26 21:07:13.811497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.063 qpair failed and we were unable to recover it. 00:26:23.063 [2024-11-26 21:07:13.811627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.063 [2024-11-26 21:07:13.811654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.063 qpair failed and we were unable to recover it. 00:26:23.063 [2024-11-26 21:07:13.811795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.063 [2024-11-26 21:07:13.811835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:23.063 qpair failed and we were unable to recover it. 
00:26:23.063 [2024-11-26 21:07:13.812005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.063 [2024-11-26 21:07:13.812032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:23.063 qpair failed and we were unable to recover it. 00:26:23.063 [2024-11-26 21:07:13.812172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.063 [2024-11-26 21:07:13.812199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:23.063 qpair failed and we were unable to recover it. 00:26:23.063 [2024-11-26 21:07:13.812339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.063 [2024-11-26 21:07:13.812367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:23.063 qpair failed and we were unable to recover it. 00:26:23.063 [2024-11-26 21:07:13.812511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.063 [2024-11-26 21:07:13.812540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:23.063 qpair failed and we were unable to recover it. 00:26:23.063 [2024-11-26 21:07:13.812665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.063 [2024-11-26 21:07:13.812705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.063 qpair failed and we were unable to recover it. 
00:26:23.063 [2024-11-26 21:07:13.812865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.063 [2024-11-26 21:07:13.812891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.063 qpair failed and we were unable to recover it. 00:26:23.063 [2024-11-26 21:07:13.813070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.064 [2024-11-26 21:07:13.813099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.064 qpair failed and we were unable to recover it. 00:26:23.064 [2024-11-26 21:07:13.813226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.064 [2024-11-26 21:07:13.813255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.064 qpair failed and we were unable to recover it. 00:26:23.064 [2024-11-26 21:07:13.813464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.064 [2024-11-26 21:07:13.813493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.064 qpair failed and we were unable to recover it. 00:26:23.064 [2024-11-26 21:07:13.813631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.064 [2024-11-26 21:07:13.813674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.064 qpair failed and we were unable to recover it. 
00:26:23.064 [2024-11-26 21:07:13.813852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.064 [2024-11-26 21:07:13.813880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.064 qpair failed and we were unable to recover it. 00:26:23.064 [2024-11-26 21:07:13.814012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.064 [2024-11-26 21:07:13.814039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.064 qpair failed and we were unable to recover it. 00:26:23.064 [2024-11-26 21:07:13.814172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.064 [2024-11-26 21:07:13.814215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.064 qpair failed and we were unable to recover it. 00:26:23.064 [2024-11-26 21:07:13.814411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.064 [2024-11-26 21:07:13.814461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.064 qpair failed and we were unable to recover it. 00:26:23.064 [2024-11-26 21:07:13.814590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.064 [2024-11-26 21:07:13.814615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.064 qpair failed and we were unable to recover it. 
00:26:23.064 [2024-11-26 21:07:13.814777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.064 [2024-11-26 21:07:13.814803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.064 qpair failed and we were unable to recover it. 00:26:23.064 [2024-11-26 21:07:13.814909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.064 [2024-11-26 21:07:13.814935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.064 qpair failed and we were unable to recover it. 00:26:23.064 [2024-11-26 21:07:13.815062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.064 [2024-11-26 21:07:13.815090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.064 qpair failed and we were unable to recover it. 00:26:23.064 [2024-11-26 21:07:13.815202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.064 [2024-11-26 21:07:13.815231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.064 qpair failed and we were unable to recover it. 00:26:23.064 [2024-11-26 21:07:13.815377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.064 [2024-11-26 21:07:13.815405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.064 qpair failed and we were unable to recover it. 
00:26:23.064 [2024-11-26 21:07:13.815541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.064 [2024-11-26 21:07:13.815576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.064 qpair failed and we were unable to recover it. 00:26:23.064 [2024-11-26 21:07:13.815733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.064 [2024-11-26 21:07:13.815759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.064 qpair failed and we were unable to recover it. 00:26:23.064 [2024-11-26 21:07:13.815871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.064 [2024-11-26 21:07:13.815896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.064 qpair failed and we were unable to recover it. 00:26:23.064 [2024-11-26 21:07:13.816018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.064 [2024-11-26 21:07:13.816058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.064 qpair failed and we were unable to recover it. 00:26:23.064 [2024-11-26 21:07:13.816219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.064 [2024-11-26 21:07:13.816265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.064 qpair failed and we were unable to recover it. 
00:26:23.064 [2024-11-26 21:07:13.816421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.064 [2024-11-26 21:07:13.816464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.064 qpair failed and we were unable to recover it. 00:26:23.064 [2024-11-26 21:07:13.816600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.064 [2024-11-26 21:07:13.816626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.064 qpair failed and we were unable to recover it. 00:26:23.064 [2024-11-26 21:07:13.816744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.064 [2024-11-26 21:07:13.816771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.064 qpair failed and we were unable to recover it. 00:26:23.064 [2024-11-26 21:07:13.816925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.064 [2024-11-26 21:07:13.816969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.064 qpair failed and we were unable to recover it. 00:26:23.064 [2024-11-26 21:07:13.817134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.064 [2024-11-26 21:07:13.817178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.064 qpair failed and we were unable to recover it. 
00:26:23.064 [2024-11-26 21:07:13.817295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.064 [2024-11-26 21:07:13.817339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.064 qpair failed and we were unable to recover it. 00:26:23.064 [2024-11-26 21:07:13.817477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.064 [2024-11-26 21:07:13.817504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.064 qpair failed and we were unable to recover it. 00:26:23.064 [2024-11-26 21:07:13.817609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.064 [2024-11-26 21:07:13.817635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.064 qpair failed and we were unable to recover it. 00:26:23.064 [2024-11-26 21:07:13.817793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.064 [2024-11-26 21:07:13.817838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.064 qpair failed and we were unable to recover it. 00:26:23.064 [2024-11-26 21:07:13.817971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.064 [2024-11-26 21:07:13.818015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.064 qpair failed and we were unable to recover it. 
00:26:23.064 [2024-11-26 21:07:13.818143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.064 [2024-11-26 21:07:13.818189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.064 qpair failed and we were unable to recover it. 00:26:23.064 [2024-11-26 21:07:13.818292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.064 [2024-11-26 21:07:13.818318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.064 qpair failed and we were unable to recover it. 00:26:23.064 [2024-11-26 21:07:13.818479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.064 [2024-11-26 21:07:13.818505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.064 qpair failed and we were unable to recover it. 00:26:23.064 [2024-11-26 21:07:13.818607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.064 [2024-11-26 21:07:13.818635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.064 qpair failed and we were unable to recover it. 00:26:23.064 [2024-11-26 21:07:13.818749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.064 [2024-11-26 21:07:13.818776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.064 qpair failed and we were unable to recover it. 
00:26:23.064 [2024-11-26 21:07:13.818877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.064 [2024-11-26 21:07:13.818902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.064 qpair failed and we were unable to recover it. 00:26:23.065 [2024-11-26 21:07:13.819125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.065 [2024-11-26 21:07:13.819176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.065 qpair failed and we were unable to recover it. 00:26:23.065 [2024-11-26 21:07:13.819307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.065 [2024-11-26 21:07:13.819333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.065 qpair failed and we were unable to recover it. 00:26:23.065 [2024-11-26 21:07:13.819460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.065 [2024-11-26 21:07:13.819489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.065 qpair failed and we were unable to recover it. 00:26:23.065 [2024-11-26 21:07:13.819643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.065 [2024-11-26 21:07:13.819669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.065 qpair failed and we were unable to recover it. 
00:26:23.065 [2024-11-26 21:07:13.819787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.065 [2024-11-26 21:07:13.819813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.065 qpair failed and we were unable to recover it.
00:26:23.065 [2024-11-26 21:07:13.819920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.065 [2024-11-26 21:07:13.819946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.065 qpair failed and we were unable to recover it.
00:26:23.065 [2024-11-26 21:07:13.820171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.065 [2024-11-26 21:07:13.820205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.065 qpair failed and we were unable to recover it.
00:26:23.065 [2024-11-26 21:07:13.820352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.065 [2024-11-26 21:07:13.820382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.065 qpair failed and we were unable to recover it.
00:26:23.065 [2024-11-26 21:07:13.820509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.065 [2024-11-26 21:07:13.820537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.065 qpair failed and we were unable to recover it.
00:26:23.065 [2024-11-26 21:07:13.820662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.065 [2024-11-26 21:07:13.820694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.065 qpair failed and we were unable to recover it.
00:26:23.065 [2024-11-26 21:07:13.820827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.065 [2024-11-26 21:07:13.820853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.065 qpair failed and we were unable to recover it.
00:26:23.065 [2024-11-26 21:07:13.821004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.065 [2024-11-26 21:07:13.821032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.065 qpair failed and we were unable to recover it.
00:26:23.065 [2024-11-26 21:07:13.821158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.065 [2024-11-26 21:07:13.821201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.065 qpair failed and we were unable to recover it.
00:26:23.065 [2024-11-26 21:07:13.821384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.065 [2024-11-26 21:07:13.821413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.065 qpair failed and we were unable to recover it.
00:26:23.065 [2024-11-26 21:07:13.821639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.065 [2024-11-26 21:07:13.821668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.065 qpair failed and we were unable to recover it.
00:26:23.065 [2024-11-26 21:07:13.821848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.065 [2024-11-26 21:07:13.821887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.065 qpair failed and we were unable to recover it.
00:26:23.065 [2024-11-26 21:07:13.822051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.065 [2024-11-26 21:07:13.822096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.065 qpair failed and we were unable to recover it.
00:26:23.065 [2024-11-26 21:07:13.822214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.065 [2024-11-26 21:07:13.822258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.065 qpair failed and we were unable to recover it.
00:26:23.065 [2024-11-26 21:07:13.822417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.065 [2024-11-26 21:07:13.822461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.065 qpair failed and we were unable to recover it.
00:26:23.065 [2024-11-26 21:07:13.822574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.065 [2024-11-26 21:07:13.822601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.065 qpair failed and we were unable to recover it.
00:26:23.065 [2024-11-26 21:07:13.822749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.065 [2024-11-26 21:07:13.822776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.065 qpair failed and we were unable to recover it.
00:26:23.065 [2024-11-26 21:07:13.822920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.065 [2024-11-26 21:07:13.822946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.065 qpair failed and we were unable to recover it.
00:26:23.065 [2024-11-26 21:07:13.823093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.065 [2024-11-26 21:07:13.823122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.065 qpair failed and we were unable to recover it.
00:26:23.065 [2024-11-26 21:07:13.823295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.065 [2024-11-26 21:07:13.823324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.065 qpair failed and we were unable to recover it.
00:26:23.065 [2024-11-26 21:07:13.823455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.065 [2024-11-26 21:07:13.823482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.065 qpair failed and we were unable to recover it.
00:26:23.065 [2024-11-26 21:07:13.823654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.065 [2024-11-26 21:07:13.823680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.065 qpair failed and we were unable to recover it.
00:26:23.065 [2024-11-26 21:07:13.823797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.065 [2024-11-26 21:07:13.823823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.065 qpair failed and we were unable to recover it.
00:26:23.065 [2024-11-26 21:07:13.823968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.065 [2024-11-26 21:07:13.823993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.065 qpair failed and we were unable to recover it.
00:26:23.065 [2024-11-26 21:07:13.824124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.065 [2024-11-26 21:07:13.824166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.065 qpair failed and we were unable to recover it.
00:26:23.065 [2024-11-26 21:07:13.824274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.065 [2024-11-26 21:07:13.824300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.065 qpair failed and we were unable to recover it.
00:26:23.065 [2024-11-26 21:07:13.824454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.065 [2024-11-26 21:07:13.824482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.065 qpair failed and we were unable to recover it.
00:26:23.065 [2024-11-26 21:07:13.824624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.065 [2024-11-26 21:07:13.824653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.065 qpair failed and we were unable to recover it.
00:26:23.065 [2024-11-26 21:07:13.824821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.066 [2024-11-26 21:07:13.824847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.066 qpair failed and we were unable to recover it.
00:26:23.066 [2024-11-26 21:07:13.824978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.066 [2024-11-26 21:07:13.825012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.066 qpair failed and we were unable to recover it.
00:26:23.066 [2024-11-26 21:07:13.825142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.066 [2024-11-26 21:07:13.825185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.066 qpair failed and we were unable to recover it.
00:26:23.066 [2024-11-26 21:07:13.825362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.066 [2024-11-26 21:07:13.825391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.066 qpair failed and we were unable to recover it.
00:26:23.066 [2024-11-26 21:07:13.825545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.066 [2024-11-26 21:07:13.825571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.066 qpair failed and we were unable to recover it.
00:26:23.066 [2024-11-26 21:07:13.825700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.066 [2024-11-26 21:07:13.825727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.066 qpair failed and we were unable to recover it.
00:26:23.066 [2024-11-26 21:07:13.825889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.066 [2024-11-26 21:07:13.825914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.066 qpair failed and we were unable to recover it.
00:26:23.066 [2024-11-26 21:07:13.826060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.066 [2024-11-26 21:07:13.826117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.066 qpair failed and we were unable to recover it.
00:26:23.066 [2024-11-26 21:07:13.826264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.066 [2024-11-26 21:07:13.826293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.066 qpair failed and we were unable to recover it.
00:26:23.066 [2024-11-26 21:07:13.826442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.066 [2024-11-26 21:07:13.826471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.066 qpair failed and we were unable to recover it.
00:26:23.066 [2024-11-26 21:07:13.826603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.066 [2024-11-26 21:07:13.826628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.066 qpair failed and we were unable to recover it.
00:26:23.066 [2024-11-26 21:07:13.826740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.066 [2024-11-26 21:07:13.826767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.066 qpair failed and we were unable to recover it.
00:26:23.066 [2024-11-26 21:07:13.827000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.066 [2024-11-26 21:07:13.827029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.066 qpair failed and we were unable to recover it.
00:26:23.066 [2024-11-26 21:07:13.827218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.066 [2024-11-26 21:07:13.827264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.066 qpair failed and we were unable to recover it.
00:26:23.066 [2024-11-26 21:07:13.827383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.066 [2024-11-26 21:07:13.827411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.066 qpair failed and we were unable to recover it.
00:26:23.066 [2024-11-26 21:07:13.827561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.066 [2024-11-26 21:07:13.827590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.066 qpair failed and we were unable to recover it.
00:26:23.066 [2024-11-26 21:07:13.827721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.066 [2024-11-26 21:07:13.827747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.066 qpair failed and we were unable to recover it.
00:26:23.066 [2024-11-26 21:07:13.827881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.066 [2024-11-26 21:07:13.827906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.066 qpair failed and we were unable to recover it.
00:26:23.066 [2024-11-26 21:07:13.828044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.066 [2024-11-26 21:07:13.828069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.066 qpair failed and we were unable to recover it.
00:26:23.066 [2024-11-26 21:07:13.828197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.066 [2024-11-26 21:07:13.828226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.066 qpair failed and we were unable to recover it.
00:26:23.066 [2024-11-26 21:07:13.828378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.066 [2024-11-26 21:07:13.828406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.066 qpair failed and we were unable to recover it.
00:26:23.066 [2024-11-26 21:07:13.828546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.066 [2024-11-26 21:07:13.828575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.066 qpair failed and we were unable to recover it.
00:26:23.066 [2024-11-26 21:07:13.828721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.066 [2024-11-26 21:07:13.828760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.066 qpair failed and we were unable to recover it.
00:26:23.066 [2024-11-26 21:07:13.828905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.066 [2024-11-26 21:07:13.828933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.066 qpair failed and we were unable to recover it.
00:26:23.066 [2024-11-26 21:07:13.829058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.066 [2024-11-26 21:07:13.829103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.066 qpair failed and we were unable to recover it.
00:26:23.066 [2024-11-26 21:07:13.829237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.066 [2024-11-26 21:07:13.829281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.066 qpair failed and we were unable to recover it.
00:26:23.066 [2024-11-26 21:07:13.829411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.066 [2024-11-26 21:07:13.829459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.066 qpair failed and we were unable to recover it.
00:26:23.066 [2024-11-26 21:07:13.829596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.066 [2024-11-26 21:07:13.829621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.066 qpair failed and we were unable to recover it.
00:26:23.066 [2024-11-26 21:07:13.829752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.066 [2024-11-26 21:07:13.829786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.066 qpair failed and we were unable to recover it.
00:26:23.066 [2024-11-26 21:07:13.829925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.066 [2024-11-26 21:07:13.829951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.066 qpair failed and we were unable to recover it.
00:26:23.066 [2024-11-26 21:07:13.830088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.066 [2024-11-26 21:07:13.830113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.066 qpair failed and we were unable to recover it.
00:26:23.066 [2024-11-26 21:07:13.830223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.066 [2024-11-26 21:07:13.830248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.066 qpair failed and we were unable to recover it.
00:26:23.066 [2024-11-26 21:07:13.830384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.066 [2024-11-26 21:07:13.830409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.066 qpair failed and we were unable to recover it.
00:26:23.066 [2024-11-26 21:07:13.830551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.066 [2024-11-26 21:07:13.830576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.066 qpair failed and we were unable to recover it.
00:26:23.066 [2024-11-26 21:07:13.830715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.066 [2024-11-26 21:07:13.830741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.066 qpair failed and we were unable to recover it.
00:26:23.066 [2024-11-26 21:07:13.830878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.066 [2024-11-26 21:07:13.830904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.066 qpair failed and we were unable to recover it.
00:26:23.066 [2024-11-26 21:07:13.831054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.066 [2024-11-26 21:07:13.831082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.066 qpair failed and we were unable to recover it.
00:26:23.066 [2024-11-26 21:07:13.831307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.066 [2024-11-26 21:07:13.831335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.066 qpair failed and we were unable to recover it.
00:26:23.066 [2024-11-26 21:07:13.831482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.066 [2024-11-26 21:07:13.831510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.067 qpair failed and we were unable to recover it.
00:26:23.067 [2024-11-26 21:07:13.831643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.067 [2024-11-26 21:07:13.831668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.067 qpair failed and we were unable to recover it.
00:26:23.067 [2024-11-26 21:07:13.831778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.067 [2024-11-26 21:07:13.831821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.067 qpair failed and we were unable to recover it.
00:26:23.067 [2024-11-26 21:07:13.831943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.067 [2024-11-26 21:07:13.831973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.067 qpair failed and we were unable to recover it.
00:26:23.067 [2024-11-26 21:07:13.832129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.067 [2024-11-26 21:07:13.832157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.067 qpair failed and we were unable to recover it.
00:26:23.067 [2024-11-26 21:07:13.832272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.067 [2024-11-26 21:07:13.832300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.067 qpair failed and we were unable to recover it.
00:26:23.067 [2024-11-26 21:07:13.832443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.067 [2024-11-26 21:07:13.832472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.067 qpair failed and we were unable to recover it.
00:26:23.067 [2024-11-26 21:07:13.832592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.067 [2024-11-26 21:07:13.832621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.067 qpair failed and we were unable to recover it.
00:26:23.067 [2024-11-26 21:07:13.832737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.067 [2024-11-26 21:07:13.832763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.067 qpair failed and we were unable to recover it.
00:26:23.067 [2024-11-26 21:07:13.832907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.067 [2024-11-26 21:07:13.832933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.067 qpair failed and we were unable to recover it.
00:26:23.067 [2024-11-26 21:07:13.833064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.067 [2024-11-26 21:07:13.833092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.067 qpair failed and we were unable to recover it.
00:26:23.067 [2024-11-26 21:07:13.833236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.067 [2024-11-26 21:07:13.833264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.067 qpair failed and we were unable to recover it.
00:26:23.067 [2024-11-26 21:07:13.833437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.067 [2024-11-26 21:07:13.833466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.067 qpair failed and we were unable to recover it.
00:26:23.067 [2024-11-26 21:07:13.833581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.067 [2024-11-26 21:07:13.833609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.067 qpair failed and we were unable to recover it.
00:26:23.067 [2024-11-26 21:07:13.833739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.067 [2024-11-26 21:07:13.833765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.067 qpair failed and we were unable to recover it.
00:26:23.067 [2024-11-26 21:07:13.833899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.067 [2024-11-26 21:07:13.833925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.067 qpair failed and we were unable to recover it.
00:26:23.067 [2024-11-26 21:07:13.834155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.067 [2024-11-26 21:07:13.834184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.067 qpair failed and we were unable to recover it.
00:26:23.067 [2024-11-26 21:07:13.834307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.067 [2024-11-26 21:07:13.834350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.067 qpair failed and we were unable to recover it.
00:26:23.067 [2024-11-26 21:07:13.834491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.067 [2024-11-26 21:07:13.834534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.067 qpair failed and we were unable to recover it.
00:26:23.067 [2024-11-26 21:07:13.834676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.067 [2024-11-26 21:07:13.834709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.067 qpair failed and we were unable to recover it.
00:26:23.067 [2024-11-26 21:07:13.834843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.067 [2024-11-26 21:07:13.834868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.067 qpair failed and we were unable to recover it.
00:26:23.067 [2024-11-26 21:07:13.835019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.067 [2024-11-26 21:07:13.835048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.067 qpair failed and we were unable to recover it.
00:26:23.067 [2024-11-26 21:07:13.835169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.067 [2024-11-26 21:07:13.835198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.067 qpair failed and we were unable to recover it.
00:26:23.067 [2024-11-26 21:07:13.835408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.067 [2024-11-26 21:07:13.835437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.067 qpair failed and we were unable to recover it.
00:26:23.067 [2024-11-26 21:07:13.835590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.067 [2024-11-26 21:07:13.835615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.067 qpair failed and we were unable to recover it.
00:26:23.067 [2024-11-26 21:07:13.835732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.067 [2024-11-26 21:07:13.835759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.067 qpair failed and we were unable to recover it.
00:26:23.067 [2024-11-26 21:07:13.835866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.067 [2024-11-26 21:07:13.835892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.067 qpair failed and we were unable to recover it.
00:26:23.067 [2024-11-26 21:07:13.836004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.067 [2024-11-26 21:07:13.836029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.067 qpair failed and we were unable to recover it.
00:26:23.067 [2024-11-26 21:07:13.836183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.067 [2024-11-26 21:07:13.836211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.067 qpair failed and we were unable to recover it.
00:26:23.067 [2024-11-26 21:07:13.836430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.067 [2024-11-26 21:07:13.836458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.067 qpair failed and we were unable to recover it.
00:26:23.067 [2024-11-26 21:07:13.836639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.067 [2024-11-26 21:07:13.836668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.067 qpair failed and we were unable to recover it.
00:26:23.067 [2024-11-26 21:07:13.836905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.067 [2024-11-26 21:07:13.836935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.067 qpair failed and we were unable to recover it.
00:26:23.067 [2024-11-26 21:07:13.837072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.067 [2024-11-26 21:07:13.837098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.067 qpair failed and we were unable to recover it.
00:26:23.067 [2024-11-26 21:07:13.837232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.067 [2024-11-26 21:07:13.837257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.067 qpair failed and we were unable to recover it.
00:26:23.067 [2024-11-26 21:07:13.837393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.068 [2024-11-26 21:07:13.837421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.068 qpair failed and we were unable to recover it.
00:26:23.068 [2024-11-26 21:07:13.837605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.068 [2024-11-26 21:07:13.837648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.068 qpair failed and we were unable to recover it.
00:26:23.068 [2024-11-26 21:07:13.837825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.068 [2024-11-26 21:07:13.837851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.068 qpair failed and we were unable to recover it.
00:26:23.068 [2024-11-26 21:07:13.838028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.068 [2024-11-26 21:07:13.838057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.068 qpair failed and we were unable to recover it.
00:26:23.068 [2024-11-26 21:07:13.838200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.068 [2024-11-26 21:07:13.838228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.068 qpair failed and we were unable to recover it.
00:26:23.068 [2024-11-26 21:07:13.838369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.068 [2024-11-26 21:07:13.838398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.068 qpair failed and we were unable to recover it.
00:26:23.068 [2024-11-26 21:07:13.838541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.068 [2024-11-26 21:07:13.838569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.068 qpair failed and we were unable to recover it.
00:26:23.068 [2024-11-26 21:07:13.838702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.068 [2024-11-26 21:07:13.838728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.068 qpair failed and we were unable to recover it.
00:26:23.068 [2024-11-26 21:07:13.838837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.068 [2024-11-26 21:07:13.838863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.068 qpair failed and we were unable to recover it.
00:26:23.068 [2024-11-26 21:07:13.839017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.068 [2024-11-26 21:07:13.839045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.068 qpair failed and we were unable to recover it.
00:26:23.068 [2024-11-26 21:07:13.839216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.068 [2024-11-26 21:07:13.839245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.068 qpair failed and we were unable to recover it.
00:26:23.068 [2024-11-26 21:07:13.839378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.068 [2024-11-26 21:07:13.839407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.068 qpair failed and we were unable to recover it.
00:26:23.068 [2024-11-26 21:07:13.839518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.068 [2024-11-26 21:07:13.839546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.068 qpair failed and we were unable to recover it.
00:26:23.068 [2024-11-26 21:07:13.839674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.068 [2024-11-26 21:07:13.839707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.068 qpair failed and we were unable to recover it.
00:26:23.068 [2024-11-26 21:07:13.839817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.068 [2024-11-26 21:07:13.839843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.068 qpair failed and we were unable to recover it.
00:26:23.068 [2024-11-26 21:07:13.839954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.068 [2024-11-26 21:07:13.839997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.068 qpair failed and we were unable to recover it. 00:26:23.068 [2024-11-26 21:07:13.840155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.068 [2024-11-26 21:07:13.840181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.068 qpair failed and we were unable to recover it. 00:26:23.068 [2024-11-26 21:07:13.840298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.068 [2024-11-26 21:07:13.840339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.068 qpair failed and we were unable to recover it. 00:26:23.068 [2024-11-26 21:07:13.840453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.068 [2024-11-26 21:07:13.840482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.068 qpair failed and we were unable to recover it. 00:26:23.068 [2024-11-26 21:07:13.840614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.068 [2024-11-26 21:07:13.840640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.068 qpair failed and we were unable to recover it. 
00:26:23.068 [2024-11-26 21:07:13.840758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.068 [2024-11-26 21:07:13.840784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.068 qpair failed and we were unable to recover it. 00:26:23.068 [2024-11-26 21:07:13.840919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.068 [2024-11-26 21:07:13.840944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.068 qpair failed and we were unable to recover it. 00:26:23.068 [2024-11-26 21:07:13.841042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.068 [2024-11-26 21:07:13.841067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.068 qpair failed and we were unable to recover it. 00:26:23.068 [2024-11-26 21:07:13.841171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.068 [2024-11-26 21:07:13.841197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.068 qpair failed and we were unable to recover it. 00:26:23.068 [2024-11-26 21:07:13.841331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.068 [2024-11-26 21:07:13.841364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.068 qpair failed and we were unable to recover it. 
00:26:23.068 [2024-11-26 21:07:13.841491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.068 [2024-11-26 21:07:13.841534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.068 qpair failed and we were unable to recover it. 00:26:23.068 [2024-11-26 21:07:13.841650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.068 [2024-11-26 21:07:13.841679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.068 qpair failed and we were unable to recover it. 00:26:23.068 [2024-11-26 21:07:13.841833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.068 [2024-11-26 21:07:13.841859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.068 qpair failed and we were unable to recover it. 00:26:23.068 [2024-11-26 21:07:13.841992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.068 [2024-11-26 21:07:13.842017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.068 qpair failed and we were unable to recover it. 00:26:23.068 [2024-11-26 21:07:13.842145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.068 [2024-11-26 21:07:13.842189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.068 qpair failed and we were unable to recover it. 
00:26:23.068 [2024-11-26 21:07:13.842303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.068 [2024-11-26 21:07:13.842331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.068 qpair failed and we were unable to recover it. 00:26:23.068 [2024-11-26 21:07:13.842455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.068 [2024-11-26 21:07:13.842498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.068 qpair failed and we were unable to recover it. 00:26:23.068 [2024-11-26 21:07:13.842610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.068 [2024-11-26 21:07:13.842640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.068 qpair failed and we were unable to recover it. 00:26:23.068 [2024-11-26 21:07:13.842798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.068 [2024-11-26 21:07:13.842824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.068 qpair failed and we were unable to recover it. 00:26:23.068 [2024-11-26 21:07:13.842980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.068 [2024-11-26 21:07:13.843005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.068 qpair failed and we were unable to recover it. 
00:26:23.068 [2024-11-26 21:07:13.843159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.068 [2024-11-26 21:07:13.843188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.068 qpair failed and we were unable to recover it. 00:26:23.068 [2024-11-26 21:07:13.843359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.068 [2024-11-26 21:07:13.843388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.068 qpair failed and we were unable to recover it. 00:26:23.068 [2024-11-26 21:07:13.843509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.068 [2024-11-26 21:07:13.843537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.068 qpair failed and we were unable to recover it. 00:26:23.068 [2024-11-26 21:07:13.843663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.068 [2024-11-26 21:07:13.843700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.069 qpair failed and we were unable to recover it. 00:26:23.069 [2024-11-26 21:07:13.843829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.069 [2024-11-26 21:07:13.843855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.069 qpair failed and we were unable to recover it. 
00:26:23.069 [2024-11-26 21:07:13.843979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.069 [2024-11-26 21:07:13.844008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.069 qpair failed and we were unable to recover it. 00:26:23.069 [2024-11-26 21:07:13.844155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.069 [2024-11-26 21:07:13.844183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.069 qpair failed and we were unable to recover it. 00:26:23.069 [2024-11-26 21:07:13.844293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.069 [2024-11-26 21:07:13.844321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.069 qpair failed and we were unable to recover it. 00:26:23.069 [2024-11-26 21:07:13.844449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.069 [2024-11-26 21:07:13.844475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.069 qpair failed and we were unable to recover it. 00:26:23.069 [2024-11-26 21:07:13.844661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.069 [2024-11-26 21:07:13.844710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.069 qpair failed and we were unable to recover it. 
00:26:23.069 [2024-11-26 21:07:13.844831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.069 [2024-11-26 21:07:13.844857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.069 qpair failed and we were unable to recover it. 00:26:23.069 [2024-11-26 21:07:13.845067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.069 [2024-11-26 21:07:13.845092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.069 qpair failed and we were unable to recover it. 00:26:23.069 [2024-11-26 21:07:13.845269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.069 [2024-11-26 21:07:13.845298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.069 qpair failed and we were unable to recover it. 00:26:23.069 [2024-11-26 21:07:13.845452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.069 [2024-11-26 21:07:13.845478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.069 qpair failed and we were unable to recover it. 00:26:23.069 [2024-11-26 21:07:13.845622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.069 [2024-11-26 21:07:13.845648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.069 qpair failed and we were unable to recover it. 
00:26:23.069 [2024-11-26 21:07:13.845769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.069 [2024-11-26 21:07:13.845796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.069 qpair failed and we were unable to recover it. 00:26:23.069 [2024-11-26 21:07:13.845907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.069 [2024-11-26 21:07:13.845933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.069 qpair failed and we were unable to recover it. 00:26:23.069 [2024-11-26 21:07:13.846092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.069 [2024-11-26 21:07:13.846117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.069 qpair failed and we were unable to recover it. 00:26:23.069 [2024-11-26 21:07:13.846247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.069 [2024-11-26 21:07:13.846289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.069 qpair failed and we were unable to recover it. 00:26:23.069 [2024-11-26 21:07:13.846469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.069 [2024-11-26 21:07:13.846494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.069 qpair failed and we were unable to recover it. 
00:26:23.069 [2024-11-26 21:07:13.846627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.069 [2024-11-26 21:07:13.846652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.069 qpair failed and we were unable to recover it. 00:26:23.069 [2024-11-26 21:07:13.846770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.069 [2024-11-26 21:07:13.846796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.069 qpair failed and we were unable to recover it. 00:26:23.069 [2024-11-26 21:07:13.846909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.069 [2024-11-26 21:07:13.846935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.069 qpair failed and we were unable to recover it. 00:26:23.069 [2024-11-26 21:07:13.847063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.069 [2024-11-26 21:07:13.847089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.069 qpair failed and we were unable to recover it. 00:26:23.069 [2024-11-26 21:07:13.847221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.069 [2024-11-26 21:07:13.847265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.069 qpair failed and we were unable to recover it. 
00:26:23.069 [2024-11-26 21:07:13.847415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.069 [2024-11-26 21:07:13.847444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.069 qpair failed and we were unable to recover it. 00:26:23.069 [2024-11-26 21:07:13.847594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.069 [2024-11-26 21:07:13.847619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.069 qpair failed and we were unable to recover it. 00:26:23.069 [2024-11-26 21:07:13.847730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.069 [2024-11-26 21:07:13.847756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.069 qpair failed and we were unable to recover it. 00:26:23.069 [2024-11-26 21:07:13.847956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.069 [2024-11-26 21:07:13.847981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.069 qpair failed and we were unable to recover it. 00:26:23.069 [2024-11-26 21:07:13.848114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.069 [2024-11-26 21:07:13.848139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.069 qpair failed and we were unable to recover it. 
00:26:23.069 [2024-11-26 21:07:13.848272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.069 [2024-11-26 21:07:13.848301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.069 qpair failed and we were unable to recover it. 00:26:23.069 [2024-11-26 21:07:13.848440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.069 [2024-11-26 21:07:13.848468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.069 qpair failed and we were unable to recover it. 00:26:23.069 [2024-11-26 21:07:13.848648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.069 [2024-11-26 21:07:13.848673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.069 qpair failed and we were unable to recover it. 00:26:23.069 [2024-11-26 21:07:13.848873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.069 [2024-11-26 21:07:13.848902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.069 qpair failed and we were unable to recover it. 00:26:23.069 [2024-11-26 21:07:13.849010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.069 [2024-11-26 21:07:13.849039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.069 qpair failed and we were unable to recover it. 
00:26:23.069 [2024-11-26 21:07:13.849182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.069 [2024-11-26 21:07:13.849208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.069 qpair failed and we were unable to recover it. 00:26:23.069 [2024-11-26 21:07:13.849354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.069 [2024-11-26 21:07:13.849379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.069 qpair failed and we were unable to recover it. 00:26:23.069 [2024-11-26 21:07:13.849508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.069 [2024-11-26 21:07:13.849533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.069 qpair failed and we were unable to recover it. 00:26:23.069 [2024-11-26 21:07:13.849697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.069 [2024-11-26 21:07:13.849723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.069 qpair failed and we were unable to recover it. 00:26:23.069 [2024-11-26 21:07:13.849836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.069 [2024-11-26 21:07:13.849861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.069 qpair failed and we were unable to recover it. 
00:26:23.069 [2024-11-26 21:07:13.850022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.069 [2024-11-26 21:07:13.850052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.069 qpair failed and we were unable to recover it. 00:26:23.069 [2024-11-26 21:07:13.850179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.069 [2024-11-26 21:07:13.850204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.069 qpair failed and we were unable to recover it. 00:26:23.069 [2024-11-26 21:07:13.850311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.070 [2024-11-26 21:07:13.850336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.070 qpair failed and we were unable to recover it. 00:26:23.070 [2024-11-26 21:07:13.850469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.070 [2024-11-26 21:07:13.850498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.070 qpair failed and we were unable to recover it. 00:26:23.070 [2024-11-26 21:07:13.850650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.070 [2024-11-26 21:07:13.850675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.070 qpair failed and we were unable to recover it. 
00:26:23.070 [2024-11-26 21:07:13.850861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.070 [2024-11-26 21:07:13.850890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.070 qpair failed and we were unable to recover it. 00:26:23.070 [2024-11-26 21:07:13.851069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.070 [2024-11-26 21:07:13.851097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.070 qpair failed and we were unable to recover it. 00:26:23.070 [2024-11-26 21:07:13.851240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.070 [2024-11-26 21:07:13.851266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.070 qpair failed and we were unable to recover it. 00:26:23.070 [2024-11-26 21:07:13.851384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.070 [2024-11-26 21:07:13.851410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.070 qpair failed and we were unable to recover it. 00:26:23.070 [2024-11-26 21:07:13.851558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.070 [2024-11-26 21:07:13.851586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.070 qpair failed and we were unable to recover it. 
00:26:23.070 [2024-11-26 21:07:13.851718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.070 [2024-11-26 21:07:13.851745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.070 qpair failed and we were unable to recover it. 00:26:23.070 [2024-11-26 21:07:13.851878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.070 [2024-11-26 21:07:13.851903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.070 qpair failed and we were unable to recover it. 00:26:23.070 [2024-11-26 21:07:13.852020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.070 [2024-11-26 21:07:13.852048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.070 qpair failed and we were unable to recover it. 00:26:23.070 [2024-11-26 21:07:13.852206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.070 [2024-11-26 21:07:13.852232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.070 qpair failed and we were unable to recover it. 00:26:23.070 [2024-11-26 21:07:13.852366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.070 [2024-11-26 21:07:13.852409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.070 qpair failed and we were unable to recover it. 
00:26:23.070 [2024-11-26 21:07:13.852549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.070 [2024-11-26 21:07:13.852576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.070 qpair failed and we were unable to recover it. 00:26:23.070 [2024-11-26 21:07:13.852715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.070 [2024-11-26 21:07:13.852741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.070 qpair failed and we were unable to recover it. 00:26:23.070 [2024-11-26 21:07:13.852867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.070 [2024-11-26 21:07:13.852892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.070 qpair failed and we were unable to recover it. 00:26:23.070 [2024-11-26 21:07:13.853043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.070 [2024-11-26 21:07:13.853085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.070 qpair failed and we were unable to recover it. 00:26:23.070 [2024-11-26 21:07:13.853218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.070 [2024-11-26 21:07:13.853243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.070 qpair failed and we were unable to recover it. 
00:26:23.073 [2024-11-26 21:07:13.870647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.073 [2024-11-26 21:07:13.870673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.073 qpair failed and we were unable to recover it. 00:26:23.073 [2024-11-26 21:07:13.870886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.073 [2024-11-26 21:07:13.870913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.073 qpair failed and we were unable to recover it. 00:26:23.073 [2024-11-26 21:07:13.871050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.073 [2024-11-26 21:07:13.871076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.073 qpair failed and we were unable to recover it. 00:26:23.073 [2024-11-26 21:07:13.871176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.073 [2024-11-26 21:07:13.871202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.073 qpair failed and we were unable to recover it. 00:26:23.073 [2024-11-26 21:07:13.871338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.073 [2024-11-26 21:07:13.871363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.073 qpair failed and we were unable to recover it. 
00:26:23.073 [2024-11-26 21:07:13.871471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.073 [2024-11-26 21:07:13.871497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.073 qpair failed and we were unable to recover it. 00:26:23.073 [2024-11-26 21:07:13.871625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.073 [2024-11-26 21:07:13.871650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.073 qpair failed and we were unable to recover it. 00:26:23.073 [2024-11-26 21:07:13.871769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.073 [2024-11-26 21:07:13.871796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.073 qpair failed and we were unable to recover it. 00:26:23.073 [2024-11-26 21:07:13.871930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.073 [2024-11-26 21:07:13.871956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.073 qpair failed and we were unable to recover it. 00:26:23.073 [2024-11-26 21:07:13.872086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.073 [2024-11-26 21:07:13.872111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.073 qpair failed and we were unable to recover it. 
00:26:23.073 [2024-11-26 21:07:13.872224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.073 [2024-11-26 21:07:13.872249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.073 qpair failed and we were unable to recover it. 00:26:23.073 [2024-11-26 21:07:13.872360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.073 [2024-11-26 21:07:13.872386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.073 qpair failed and we were unable to recover it. 00:26:23.073 [2024-11-26 21:07:13.872492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.074 [2024-11-26 21:07:13.872517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.074 qpair failed and we were unable to recover it. 00:26:23.074 [2024-11-26 21:07:13.872649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.074 [2024-11-26 21:07:13.872675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.074 qpair failed and we were unable to recover it. 00:26:23.074 [2024-11-26 21:07:13.872787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.074 [2024-11-26 21:07:13.872812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.074 qpair failed and we were unable to recover it. 
00:26:23.074 [2024-11-26 21:07:13.872951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.074 [2024-11-26 21:07:13.872976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.074 qpair failed and we were unable to recover it. 00:26:23.074 [2024-11-26 21:07:13.873105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.074 [2024-11-26 21:07:13.873130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.074 qpair failed and we were unable to recover it. 00:26:23.074 [2024-11-26 21:07:13.873243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.074 [2024-11-26 21:07:13.873267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.074 qpair failed and we were unable to recover it. 00:26:23.074 [2024-11-26 21:07:13.873411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.074 [2024-11-26 21:07:13.873435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.074 qpair failed and we were unable to recover it. 00:26:23.074 [2024-11-26 21:07:13.873581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.074 [2024-11-26 21:07:13.873606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.074 qpair failed and we were unable to recover it. 
00:26:23.074 [2024-11-26 21:07:13.873764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.074 [2024-11-26 21:07:13.873789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.074 qpair failed and we were unable to recover it. 00:26:23.074 [2024-11-26 21:07:13.873898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.074 [2024-11-26 21:07:13.873924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.074 qpair failed and we were unable to recover it. 00:26:23.074 [2024-11-26 21:07:13.874034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.074 [2024-11-26 21:07:13.874059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.074 qpair failed and we were unable to recover it. 00:26:23.074 [2024-11-26 21:07:13.874197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.074 [2024-11-26 21:07:13.874221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.074 qpair failed and we were unable to recover it. 00:26:23.074 [2024-11-26 21:07:13.874385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.074 [2024-11-26 21:07:13.874410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.074 qpair failed and we were unable to recover it. 
00:26:23.074 [2024-11-26 21:07:13.874511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.074 [2024-11-26 21:07:13.874536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.074 qpair failed and we were unable to recover it. 00:26:23.074 [2024-11-26 21:07:13.874640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.074 [2024-11-26 21:07:13.874664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.074 qpair failed and we were unable to recover it. 00:26:23.074 [2024-11-26 21:07:13.874780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.074 [2024-11-26 21:07:13.874805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.074 qpair failed and we were unable to recover it. 00:26:23.074 [2024-11-26 21:07:13.874911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.074 [2024-11-26 21:07:13.874936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.074 qpair failed and we were unable to recover it. 00:26:23.074 [2024-11-26 21:07:13.875037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.074 [2024-11-26 21:07:13.875061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.074 qpair failed and we were unable to recover it. 
00:26:23.074 [2024-11-26 21:07:13.875189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.074 [2024-11-26 21:07:13.875214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.074 qpair failed and we were unable to recover it. 00:26:23.074 [2024-11-26 21:07:13.875352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.074 [2024-11-26 21:07:13.875376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.074 qpair failed and we were unable to recover it. 00:26:23.074 [2024-11-26 21:07:13.875486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.074 [2024-11-26 21:07:13.875510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.074 qpair failed and we were unable to recover it. 00:26:23.074 [2024-11-26 21:07:13.875616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.074 [2024-11-26 21:07:13.875641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.074 qpair failed and we were unable to recover it. 00:26:23.074 [2024-11-26 21:07:13.875806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.074 [2024-11-26 21:07:13.875831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.074 qpair failed and we were unable to recover it. 
00:26:23.074 [2024-11-26 21:07:13.875966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.074 [2024-11-26 21:07:13.875991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.074 qpair failed and we were unable to recover it. 00:26:23.074 [2024-11-26 21:07:13.876125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.074 [2024-11-26 21:07:13.876152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.074 qpair failed and we were unable to recover it. 00:26:23.074 [2024-11-26 21:07:13.876256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.074 [2024-11-26 21:07:13.876286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.074 qpair failed and we were unable to recover it. 00:26:23.074 [2024-11-26 21:07:13.876397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.074 [2024-11-26 21:07:13.876423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.074 qpair failed and we were unable to recover it. 00:26:23.074 [2024-11-26 21:07:13.876563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.074 [2024-11-26 21:07:13.876589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.074 qpair failed and we were unable to recover it. 
00:26:23.074 [2024-11-26 21:07:13.876716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.074 [2024-11-26 21:07:13.876743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.074 qpair failed and we were unable to recover it. 00:26:23.074 [2024-11-26 21:07:13.876854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.074 [2024-11-26 21:07:13.876880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.074 qpair failed and we were unable to recover it. 00:26:23.074 [2024-11-26 21:07:13.876992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.074 [2024-11-26 21:07:13.877017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.074 qpair failed and we were unable to recover it. 00:26:23.074 [2024-11-26 21:07:13.877177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.074 [2024-11-26 21:07:13.877203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.074 qpair failed and we were unable to recover it. 00:26:23.074 [2024-11-26 21:07:13.877336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.074 [2024-11-26 21:07:13.877361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.074 qpair failed and we were unable to recover it. 
00:26:23.074 [2024-11-26 21:07:13.877468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.074 [2024-11-26 21:07:13.877493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.074 qpair failed and we were unable to recover it. 00:26:23.074 [2024-11-26 21:07:13.877626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.074 [2024-11-26 21:07:13.877652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.074 qpair failed and we were unable to recover it. 00:26:23.074 [2024-11-26 21:07:13.877801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.074 [2024-11-26 21:07:13.877827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.074 qpair failed and we were unable to recover it. 00:26:23.074 [2024-11-26 21:07:13.877944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.074 [2024-11-26 21:07:13.877970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.074 qpair failed and we were unable to recover it. 00:26:23.075 [2024-11-26 21:07:13.878114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.075 [2024-11-26 21:07:13.878139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.075 qpair failed and we were unable to recover it. 
00:26:23.075 [2024-11-26 21:07:13.878281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.075 [2024-11-26 21:07:13.878307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.075 qpair failed and we were unable to recover it. 00:26:23.075 [2024-11-26 21:07:13.878445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.075 [2024-11-26 21:07:13.878471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.075 qpair failed and we were unable to recover it. 00:26:23.075 [2024-11-26 21:07:13.878580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.075 [2024-11-26 21:07:13.878606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.075 qpair failed and we were unable to recover it. 00:26:23.075 [2024-11-26 21:07:13.878734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.075 [2024-11-26 21:07:13.878762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.075 qpair failed and we were unable to recover it. 00:26:23.075 [2024-11-26 21:07:13.878901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.075 [2024-11-26 21:07:13.878926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.075 qpair failed and we were unable to recover it. 
00:26:23.075 [2024-11-26 21:07:13.879063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.075 [2024-11-26 21:07:13.879089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.075 qpair failed and we were unable to recover it. 00:26:23.075 [2024-11-26 21:07:13.879191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.075 [2024-11-26 21:07:13.879216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.075 qpair failed and we were unable to recover it. 00:26:23.075 [2024-11-26 21:07:13.879352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.075 [2024-11-26 21:07:13.879377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.075 qpair failed and we were unable to recover it. 00:26:23.075 [2024-11-26 21:07:13.879481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.075 [2024-11-26 21:07:13.879506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.075 qpair failed and we were unable to recover it. 00:26:23.075 [2024-11-26 21:07:13.879618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.075 [2024-11-26 21:07:13.879645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.075 qpair failed and we were unable to recover it. 
00:26:23.075 [2024-11-26 21:07:13.879794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.075 [2024-11-26 21:07:13.879820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.075 qpair failed and we were unable to recover it. 00:26:23.075 [2024-11-26 21:07:13.879957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.075 [2024-11-26 21:07:13.879982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.075 qpair failed and we were unable to recover it. 00:26:23.075 [2024-11-26 21:07:13.880116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.075 [2024-11-26 21:07:13.880141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.075 qpair failed and we were unable to recover it. 00:26:23.075 [2024-11-26 21:07:13.880285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.075 [2024-11-26 21:07:13.880311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.075 qpair failed and we were unable to recover it. 00:26:23.075 [2024-11-26 21:07:13.880448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.075 [2024-11-26 21:07:13.880478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.075 qpair failed and we were unable to recover it. 
00:26:23.075 [2024-11-26 21:07:13.880604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.075 [2024-11-26 21:07:13.880630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.075 qpair failed and we were unable to recover it. 00:26:23.075 [2024-11-26 21:07:13.880761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.075 [2024-11-26 21:07:13.880787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.075 qpair failed and we were unable to recover it. 00:26:23.075 [2024-11-26 21:07:13.880925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.075 [2024-11-26 21:07:13.880951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.075 qpair failed and we were unable to recover it. 00:26:23.075 [2024-11-26 21:07:13.881064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.075 [2024-11-26 21:07:13.881090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.075 qpair failed and we were unable to recover it. 00:26:23.075 [2024-11-26 21:07:13.881191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.075 [2024-11-26 21:07:13.881217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.075 qpair failed and we were unable to recover it. 
00:26:23.075 [2024-11-26 21:07:13.881351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.075 [2024-11-26 21:07:13.881377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.075 qpair failed and we were unable to recover it. 00:26:23.075 [2024-11-26 21:07:13.881541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.075 [2024-11-26 21:07:13.881566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.075 qpair failed and we were unable to recover it. 00:26:23.075 [2024-11-26 21:07:13.881703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.075 [2024-11-26 21:07:13.881730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.075 qpair failed and we were unable to recover it. 00:26:23.075 [2024-11-26 21:07:13.881863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.075 [2024-11-26 21:07:13.881889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.075 qpair failed and we were unable to recover it. 00:26:23.075 [2024-11-26 21:07:13.881998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.075 [2024-11-26 21:07:13.882023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.075 qpair failed and we were unable to recover it. 
00:26:23.075 [2024-11-26 21:07:13.882132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.075 [2024-11-26 21:07:13.882157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.075 qpair failed and we were unable to recover it. 00:26:23.075 [2024-11-26 21:07:13.882273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.075 [2024-11-26 21:07:13.882299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.075 qpair failed and we were unable to recover it. 00:26:23.075 [2024-11-26 21:07:13.882433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.075 [2024-11-26 21:07:13.882458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.075 qpair failed and we were unable to recover it. 00:26:23.075 [2024-11-26 21:07:13.882572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.075 [2024-11-26 21:07:13.882598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.075 qpair failed and we were unable to recover it. 00:26:23.075 [2024-11-26 21:07:13.882705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.075 [2024-11-26 21:07:13.882731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.075 qpair failed and we were unable to recover it. 
00:26:23.077 [2024-11-26 21:07:13.899449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.077 [2024-11-26 21:07:13.899488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.077 qpair failed and we were unable to recover it. 00:26:23.077 [2024-11-26 21:07:13.899632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.077 [2024-11-26 21:07:13.899660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.077 qpair failed and we were unable to recover it. 00:26:23.077 [2024-11-26 21:07:13.899783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.077 [2024-11-26 21:07:13.899810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.077 qpair failed and we were unable to recover it. 00:26:23.078 [2024-11-26 21:07:13.899955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.078 [2024-11-26 21:07:13.899981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.078 qpair failed and we were unable to recover it. 00:26:23.078 [2024-11-26 21:07:13.900082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.078 [2024-11-26 21:07:13.900109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.078 qpair failed and we were unable to recover it. 
00:26:23.078 [2024-11-26 21:07:13.900223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.078 [2024-11-26 21:07:13.900250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.078 qpair failed and we were unable to recover it. 00:26:23.078 [2024-11-26 21:07:13.900413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.078 [2024-11-26 21:07:13.900441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.078 qpair failed and we were unable to recover it. 00:26:23.078 [2024-11-26 21:07:13.900551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.078 [2024-11-26 21:07:13.900577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.078 qpair failed and we were unable to recover it. 00:26:23.078 [2024-11-26 21:07:13.900710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.078 [2024-11-26 21:07:13.900736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.078 qpair failed and we were unable to recover it. 00:26:23.078 [2024-11-26 21:07:13.900847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.078 [2024-11-26 21:07:13.900873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.078 qpair failed and we were unable to recover it. 
00:26:23.078 [2024-11-26 21:07:13.900975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.078 [2024-11-26 21:07:13.901005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.078 qpair failed and we were unable to recover it. 00:26:23.078 [2024-11-26 21:07:13.901114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.078 [2024-11-26 21:07:13.901139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.078 qpair failed and we were unable to recover it. 00:26:23.078 [2024-11-26 21:07:13.901292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.078 [2024-11-26 21:07:13.901317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.078 qpair failed and we were unable to recover it. 00:26:23.078 [2024-11-26 21:07:13.901430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.078 [2024-11-26 21:07:13.901456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.078 qpair failed and we were unable to recover it. 00:26:23.078 [2024-11-26 21:07:13.901561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.078 [2024-11-26 21:07:13.901586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.078 qpair failed and we were unable to recover it. 
00:26:23.078 [2024-11-26 21:07:13.901698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.078 [2024-11-26 21:07:13.901726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.078 qpair failed and we were unable to recover it. 00:26:23.078 [2024-11-26 21:07:13.901832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.078 [2024-11-26 21:07:13.901860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.078 qpair failed and we were unable to recover it. 00:26:23.078 [2024-11-26 21:07:13.901966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.078 [2024-11-26 21:07:13.901992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.078 qpair failed and we were unable to recover it. 00:26:23.078 [2024-11-26 21:07:13.902125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.078 [2024-11-26 21:07:13.902151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.078 qpair failed and we were unable to recover it. 00:26:23.078 [2024-11-26 21:07:13.902255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.078 [2024-11-26 21:07:13.902281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.078 qpair failed and we were unable to recover it. 
00:26:23.078 [2024-11-26 21:07:13.902395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.078 [2024-11-26 21:07:13.902421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.078 qpair failed and we were unable to recover it. 00:26:23.078 [2024-11-26 21:07:13.902527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.078 [2024-11-26 21:07:13.902553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.078 qpair failed and we were unable to recover it. 00:26:23.078 [2024-11-26 21:07:13.902695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.078 [2024-11-26 21:07:13.902722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.078 qpair failed and we were unable to recover it. 00:26:23.078 [2024-11-26 21:07:13.902856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.078 [2024-11-26 21:07:13.902883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.078 qpair failed and we were unable to recover it. 00:26:23.078 [2024-11-26 21:07:13.903056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.078 [2024-11-26 21:07:13.903082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.078 qpair failed and we were unable to recover it. 
00:26:23.078 [2024-11-26 21:07:13.903216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.078 [2024-11-26 21:07:13.903242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.078 qpair failed and we were unable to recover it. 00:26:23.078 [2024-11-26 21:07:13.903386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.078 [2024-11-26 21:07:13.903412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.078 qpair failed and we were unable to recover it. 00:26:23.078 [2024-11-26 21:07:13.903518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.078 [2024-11-26 21:07:13.903544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.078 qpair failed and we were unable to recover it. 00:26:23.078 [2024-11-26 21:07:13.903692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.078 [2024-11-26 21:07:13.903718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.078 qpair failed and we were unable to recover it. 00:26:23.078 [2024-11-26 21:07:13.903826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.078 [2024-11-26 21:07:13.903852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.078 qpair failed and we were unable to recover it. 
00:26:23.078 [2024-11-26 21:07:13.903962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.078 [2024-11-26 21:07:13.903989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.078 qpair failed and we were unable to recover it. 00:26:23.078 [2024-11-26 21:07:13.904127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.078 [2024-11-26 21:07:13.904153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.078 qpair failed and we were unable to recover it. 00:26:23.078 [2024-11-26 21:07:13.904285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.078 [2024-11-26 21:07:13.904311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.078 qpair failed and we were unable to recover it. 00:26:23.078 [2024-11-26 21:07:13.904426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.078 [2024-11-26 21:07:13.904454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.078 qpair failed and we were unable to recover it. 00:26:23.078 [2024-11-26 21:07:13.904560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.078 [2024-11-26 21:07:13.904586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.078 qpair failed and we were unable to recover it. 
00:26:23.078 [2024-11-26 21:07:13.904724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.078 [2024-11-26 21:07:13.904750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.078 qpair failed and we were unable to recover it. 00:26:23.078 [2024-11-26 21:07:13.904852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.078 [2024-11-26 21:07:13.904877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.079 qpair failed and we were unable to recover it. 00:26:23.079 [2024-11-26 21:07:13.904977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.079 [2024-11-26 21:07:13.905007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.079 qpair failed and we were unable to recover it. 00:26:23.079 [2024-11-26 21:07:13.905174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.079 [2024-11-26 21:07:13.905199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.079 qpair failed and we were unable to recover it. 00:26:23.079 [2024-11-26 21:07:13.905334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.079 [2024-11-26 21:07:13.905359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.079 qpair failed and we were unable to recover it. 
00:26:23.079 [2024-11-26 21:07:13.905461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.079 [2024-11-26 21:07:13.905487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.079 qpair failed and we were unable to recover it. 00:26:23.079 [2024-11-26 21:07:13.905633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.079 [2024-11-26 21:07:13.905658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.079 qpair failed and we were unable to recover it. 00:26:23.079 [2024-11-26 21:07:13.905782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.079 [2024-11-26 21:07:13.905810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.079 qpair failed and we were unable to recover it. 00:26:23.079 [2024-11-26 21:07:13.905913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.079 [2024-11-26 21:07:13.905939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.079 qpair failed and we were unable to recover it. 00:26:23.079 [2024-11-26 21:07:13.906054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.079 [2024-11-26 21:07:13.906080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.079 qpair failed and we were unable to recover it. 
00:26:23.079 [2024-11-26 21:07:13.906218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.079 [2024-11-26 21:07:13.906244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.079 qpair failed and we were unable to recover it. 00:26:23.079 [2024-11-26 21:07:13.906357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.079 [2024-11-26 21:07:13.906383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.079 qpair failed and we were unable to recover it. 00:26:23.079 [2024-11-26 21:07:13.906492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.079 [2024-11-26 21:07:13.906519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.079 qpair failed and we were unable to recover it. 00:26:23.079 [2024-11-26 21:07:13.906637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.079 [2024-11-26 21:07:13.906664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.079 qpair failed and we were unable to recover it. 00:26:23.079 [2024-11-26 21:07:13.906828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.079 [2024-11-26 21:07:13.906854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.079 qpair failed and we were unable to recover it. 
00:26:23.079 [2024-11-26 21:07:13.906987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.079 [2024-11-26 21:07:13.907013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.079 qpair failed and we were unable to recover it. 00:26:23.079 [2024-11-26 21:07:13.907133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.079 [2024-11-26 21:07:13.907158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.079 qpair failed and we were unable to recover it. 00:26:23.079 [2024-11-26 21:07:13.907264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.079 [2024-11-26 21:07:13.907290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.079 qpair failed and we were unable to recover it. 00:26:23.079 [2024-11-26 21:07:13.907425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.079 [2024-11-26 21:07:13.907451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.079 qpair failed and we were unable to recover it. 00:26:23.079 [2024-11-26 21:07:13.907557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.079 [2024-11-26 21:07:13.907582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.079 qpair failed and we were unable to recover it. 
00:26:23.079 [2024-11-26 21:07:13.907735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.079 [2024-11-26 21:07:13.907761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.079 qpair failed and we were unable to recover it. 00:26:23.079 [2024-11-26 21:07:13.907894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.079 [2024-11-26 21:07:13.907920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.079 qpair failed and we were unable to recover it. 00:26:23.079 [2024-11-26 21:07:13.908062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.079 [2024-11-26 21:07:13.908087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.079 qpair failed and we were unable to recover it. 00:26:23.079 [2024-11-26 21:07:13.908194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.079 [2024-11-26 21:07:13.908219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.079 qpair failed and we were unable to recover it. 00:26:23.079 [2024-11-26 21:07:13.908320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.079 [2024-11-26 21:07:13.908345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.079 qpair failed and we were unable to recover it. 
00:26:23.079 [2024-11-26 21:07:13.908449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.079 [2024-11-26 21:07:13.908474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.079 qpair failed and we were unable to recover it. 00:26:23.079 [2024-11-26 21:07:13.908615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.079 [2024-11-26 21:07:13.908641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.079 qpair failed and we were unable to recover it. 00:26:23.079 [2024-11-26 21:07:13.908758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.079 [2024-11-26 21:07:13.908784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.079 qpair failed and we were unable to recover it. 00:26:23.079 [2024-11-26 21:07:13.908920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.079 [2024-11-26 21:07:13.908945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.079 qpair failed and we were unable to recover it. 00:26:23.079 [2024-11-26 21:07:13.909090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.079 [2024-11-26 21:07:13.909120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.079 qpair failed and we were unable to recover it. 
00:26:23.079 [2024-11-26 21:07:13.909244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.079 [2024-11-26 21:07:13.909270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.079 qpair failed and we were unable to recover it. 00:26:23.079 [2024-11-26 21:07:13.909406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.079 [2024-11-26 21:07:13.909431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.079 qpair failed and we were unable to recover it. 00:26:23.079 [2024-11-26 21:07:13.909572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.079 [2024-11-26 21:07:13.909597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.079 qpair failed and we were unable to recover it. 00:26:23.079 [2024-11-26 21:07:13.909704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.079 [2024-11-26 21:07:13.909731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.079 qpair failed and we were unable to recover it. 00:26:23.079 [2024-11-26 21:07:13.909869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.079 [2024-11-26 21:07:13.909895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.079 qpair failed and we were unable to recover it. 
00:26:23.079 [2024-11-26 21:07:13.910001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.079 [2024-11-26 21:07:13.910026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.079 qpair failed and we were unable to recover it. 00:26:23.079 [2024-11-26 21:07:13.910134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.079 [2024-11-26 21:07:13.910159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.079 qpair failed and we were unable to recover it. 00:26:23.079 [2024-11-26 21:07:13.910323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.079 [2024-11-26 21:07:13.910348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.079 qpair failed and we were unable to recover it. 00:26:23.079 [2024-11-26 21:07:13.910486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.079 [2024-11-26 21:07:13.910512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.079 qpair failed and we were unable to recover it. 00:26:23.079 [2024-11-26 21:07:13.910619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.079 [2024-11-26 21:07:13.910644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.079 qpair failed and we were unable to recover it. 
00:26:23.079 [2024-11-26 21:07:13.910806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.079 [2024-11-26 21:07:13.910832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.079 qpair failed and we were unable to recover it.
00:26:23.080 [2024-11-26 21:07:13.912642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.080 [2024-11-26 21:07:13.912681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.080 qpair failed and we were unable to recover it.
00:26:23.082 [2024-11-26 21:07:13.928524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.082 [2024-11-26 21:07:13.928551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.082 qpair failed and we were unable to recover it. 00:26:23.082 [2024-11-26 21:07:13.928659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.082 [2024-11-26 21:07:13.928684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.082 qpair failed and we were unable to recover it. 00:26:23.082 [2024-11-26 21:07:13.928824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.082 [2024-11-26 21:07:13.928850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.082 qpair failed and we were unable to recover it. 00:26:23.082 [2024-11-26 21:07:13.928984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.082 [2024-11-26 21:07:13.929011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.082 qpair failed and we were unable to recover it. 00:26:23.082 [2024-11-26 21:07:13.929144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.082 [2024-11-26 21:07:13.929170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.082 qpair failed and we were unable to recover it. 
00:26:23.082 [2024-11-26 21:07:13.929310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.082 [2024-11-26 21:07:13.929336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.082 qpair failed and we were unable to recover it. 00:26:23.082 [2024-11-26 21:07:13.929463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.082 [2024-11-26 21:07:13.929489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.082 qpair failed and we were unable to recover it. 00:26:23.082 [2024-11-26 21:07:13.929618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.082 [2024-11-26 21:07:13.929643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.082 qpair failed and we were unable to recover it. 00:26:23.082 [2024-11-26 21:07:13.929799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.082 [2024-11-26 21:07:13.929838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.082 qpair failed and we were unable to recover it. 00:26:23.082 [2024-11-26 21:07:13.929956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.082 [2024-11-26 21:07:13.929984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.082 qpair failed and we were unable to recover it. 
00:26:23.082 [2024-11-26 21:07:13.930107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.082 [2024-11-26 21:07:13.930135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.082 qpair failed and we were unable to recover it. 00:26:23.082 [2024-11-26 21:07:13.930244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.082 [2024-11-26 21:07:13.930270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.082 qpair failed and we were unable to recover it. 00:26:23.082 [2024-11-26 21:07:13.930431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.082 [2024-11-26 21:07:13.930458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.082 qpair failed and we were unable to recover it. 00:26:23.082 [2024-11-26 21:07:13.930558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.082 [2024-11-26 21:07:13.930585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.082 qpair failed and we were unable to recover it. 00:26:23.082 [2024-11-26 21:07:13.930725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.082 [2024-11-26 21:07:13.930753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.082 qpair failed and we were unable to recover it. 
00:26:23.082 [2024-11-26 21:07:13.930889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.082 [2024-11-26 21:07:13.930915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.082 qpair failed and we were unable to recover it. 00:26:23.082 [2024-11-26 21:07:13.931022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.082 [2024-11-26 21:07:13.931047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.082 qpair failed and we were unable to recover it. 00:26:23.082 [2024-11-26 21:07:13.931160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.082 [2024-11-26 21:07:13.931185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.082 qpair failed and we were unable to recover it. 00:26:23.082 [2024-11-26 21:07:13.931310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.082 [2024-11-26 21:07:13.931336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.082 qpair failed and we were unable to recover it. 00:26:23.082 [2024-11-26 21:07:13.931468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.082 [2024-11-26 21:07:13.931494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.082 qpair failed and we were unable to recover it. 
00:26:23.082 [2024-11-26 21:07:13.931599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.082 [2024-11-26 21:07:13.931624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.082 qpair failed and we were unable to recover it.
00:26:23.083 [2024-11-26 21:07:13.931740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.083 [2024-11-26 21:07:13.931766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.083 qpair failed and we were unable to recover it.
00:26:23.083 [2024-11-26 21:07:13.931897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.083 [2024-11-26 21:07:13.931923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.083 qpair failed and we were unable to recover it.
00:26:23.083 [2024-11-26 21:07:13.932068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.083 [2024-11-26 21:07:13.932097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.083 qpair failed and we were unable to recover it.
00:26:23.083 [2024-11-26 21:07:13.932205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.083 [2024-11-26 21:07:13.932231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.083 qpair failed and we were unable to recover it.
00:26:23.083 [2024-11-26 21:07:13.932370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.083 [2024-11-26 21:07:13.932396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.083 qpair failed and we were unable to recover it.
00:26:23.083 [2024-11-26 21:07:13.932523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.083 [2024-11-26 21:07:13.932550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.083 qpair failed and we were unable to recover it.
00:26:23.083 [2024-11-26 21:07:13.932664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.083 [2024-11-26 21:07:13.932695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.083 qpair failed and we were unable to recover it.
00:26:23.083 [2024-11-26 21:07:13.932807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.083 [2024-11-26 21:07:13.932834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.083 qpair failed and we were unable to recover it.
00:26:23.083 [2024-11-26 21:07:13.932942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.083 [2024-11-26 21:07:13.932969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.083 qpair failed and we were unable to recover it.
00:26:23.083 [2024-11-26 21:07:13.933106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.083 [2024-11-26 21:07:13.933132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.083 qpair failed and we were unable to recover it.
00:26:23.083 [2024-11-26 21:07:13.933296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.083 [2024-11-26 21:07:13.933322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.083 qpair failed and we were unable to recover it.
00:26:23.083 [2024-11-26 21:07:13.933452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.083 [2024-11-26 21:07:13.933478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.083 qpair failed and we were unable to recover it.
00:26:23.083 [2024-11-26 21:07:13.933607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.083 [2024-11-26 21:07:13.933633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.083 qpair failed and we were unable to recover it.
00:26:23.083 [2024-11-26 21:07:13.933746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.083 [2024-11-26 21:07:13.933773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.083 qpair failed and we were unable to recover it.
00:26:23.083 [2024-11-26 21:07:13.933882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.083 [2024-11-26 21:07:13.933909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.083 qpair failed and we were unable to recover it.
00:26:23.083 [2024-11-26 21:07:13.934040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.083 [2024-11-26 21:07:13.934090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.083 qpair failed and we were unable to recover it.
00:26:23.083 [2024-11-26 21:07:13.934216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.083 [2024-11-26 21:07:13.934243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.083 qpair failed and we were unable to recover it.
00:26:23.083 [2024-11-26 21:07:13.934406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.083 [2024-11-26 21:07:13.934449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.083 qpair failed and we were unable to recover it.
00:26:23.083 [2024-11-26 21:07:13.934573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.083 [2024-11-26 21:07:13.934603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.083 qpair failed and we were unable to recover it.
00:26:23.083 [2024-11-26 21:07:13.934765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.083 [2024-11-26 21:07:13.934792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.083 qpair failed and we were unable to recover it.
00:26:23.083 [2024-11-26 21:07:13.934929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.083 [2024-11-26 21:07:13.934956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.083 qpair failed and we were unable to recover it.
00:26:23.083 [2024-11-26 21:07:13.935091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.083 [2024-11-26 21:07:13.935117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.083 qpair failed and we were unable to recover it.
00:26:23.083 [2024-11-26 21:07:13.935260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.083 [2024-11-26 21:07:13.935287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.083 qpair failed and we were unable to recover it.
00:26:23.083 [2024-11-26 21:07:13.935454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.083 [2024-11-26 21:07:13.935482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.083 qpair failed and we were unable to recover it.
00:26:23.083 [2024-11-26 21:07:13.935638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.083 [2024-11-26 21:07:13.935667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.083 qpair failed and we were unable to recover it.
00:26:23.083 [2024-11-26 21:07:13.935838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.083 [2024-11-26 21:07:13.935865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.083 qpair failed and we were unable to recover it.
00:26:23.083 [2024-11-26 21:07:13.935976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.083 [2024-11-26 21:07:13.936002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.083 qpair failed and we were unable to recover it.
00:26:23.083 [2024-11-26 21:07:13.936137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.083 [2024-11-26 21:07:13.936163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.083 qpair failed and we were unable to recover it.
00:26:23.083 [2024-11-26 21:07:13.936298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.083 [2024-11-26 21:07:13.936324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.083 qpair failed and we were unable to recover it.
00:26:23.083 [2024-11-26 21:07:13.936465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.083 [2024-11-26 21:07:13.936491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.083 qpair failed and we were unable to recover it.
00:26:23.083 [2024-11-26 21:07:13.936633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.083 [2024-11-26 21:07:13.936658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.083 qpair failed and we were unable to recover it.
00:26:23.083 [2024-11-26 21:07:13.936820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.083 [2024-11-26 21:07:13.936846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.083 qpair failed and we were unable to recover it.
00:26:23.083 [2024-11-26 21:07:13.936981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.083 [2024-11-26 21:07:13.937007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.083 qpair failed and we were unable to recover it.
00:26:23.083 [2024-11-26 21:07:13.937160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.083 [2024-11-26 21:07:13.937185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.083 qpair failed and we were unable to recover it.
00:26:23.083 [2024-11-26 21:07:13.937320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.083 [2024-11-26 21:07:13.937347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.083 qpair failed and we were unable to recover it.
00:26:23.083 [2024-11-26 21:07:13.937466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.083 [2024-11-26 21:07:13.937492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.083 qpair failed and we were unable to recover it.
00:26:23.083 [2024-11-26 21:07:13.937603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.083 [2024-11-26 21:07:13.937630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.083 qpair failed and we were unable to recover it.
00:26:23.083 [2024-11-26 21:07:13.937765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.083 [2024-11-26 21:07:13.937792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.083 qpair failed and we were unable to recover it.
00:26:23.083 [2024-11-26 21:07:13.937927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.083 [2024-11-26 21:07:13.937953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.083 qpair failed and we were unable to recover it.
00:26:23.083 [2024-11-26 21:07:13.938084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.083 [2024-11-26 21:07:13.938110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.083 qpair failed and we were unable to recover it.
00:26:23.083 [2024-11-26 21:07:13.938276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.083 [2024-11-26 21:07:13.938301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.083 qpair failed and we were unable to recover it.
00:26:23.083 [2024-11-26 21:07:13.938430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.083 [2024-11-26 21:07:13.938456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.083 qpair failed and we were unable to recover it.
00:26:23.083 [2024-11-26 21:07:13.938609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.083 [2024-11-26 21:07:13.938641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.083 qpair failed and we were unable to recover it.
00:26:23.083 [2024-11-26 21:07:13.938757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.083 [2024-11-26 21:07:13.938784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.084 qpair failed and we were unable to recover it.
00:26:23.084 [2024-11-26 21:07:13.938897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.084 [2024-11-26 21:07:13.938922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.084 qpair failed and we were unable to recover it.
00:26:23.084 [2024-11-26 21:07:13.939036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.084 [2024-11-26 21:07:13.939062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.084 qpair failed and we were unable to recover it.
00:26:23.084 [2024-11-26 21:07:13.939200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.084 [2024-11-26 21:07:13.939226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.084 qpair failed and we were unable to recover it.
00:26:23.084 [2024-11-26 21:07:13.939364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.084 [2024-11-26 21:07:13.939389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.084 qpair failed and we were unable to recover it.
00:26:23.084 [2024-11-26 21:07:13.939517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.084 [2024-11-26 21:07:13.939543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.084 qpair failed and we were unable to recover it.
00:26:23.084 [2024-11-26 21:07:13.939677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.084 [2024-11-26 21:07:13.939717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.084 qpair failed and we were unable to recover it.
00:26:23.084 [2024-11-26 21:07:13.939831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.084 [2024-11-26 21:07:13.939858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.084 qpair failed and we were unable to recover it.
00:26:23.084 [2024-11-26 21:07:13.940014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.084 [2024-11-26 21:07:13.940039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.084 qpair failed and we were unable to recover it.
00:26:23.084 [2024-11-26 21:07:13.940171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.084 [2024-11-26 21:07:13.940197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.084 qpair failed and we were unable to recover it.
00:26:23.084 [2024-11-26 21:07:13.940342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.084 [2024-11-26 21:07:13.940368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.084 qpair failed and we were unable to recover it.
00:26:23.084 [2024-11-26 21:07:13.940507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.084 [2024-11-26 21:07:13.940533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.084 qpair failed and we were unable to recover it.
00:26:23.084 [2024-11-26 21:07:13.940631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.084 [2024-11-26 21:07:13.940658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.084 qpair failed and we were unable to recover it.
00:26:23.084 [2024-11-26 21:07:13.940817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.084 [2024-11-26 21:07:13.940855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.084 qpair failed and we were unable to recover it.
00:26:23.084 [2024-11-26 21:07:13.940969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.084 [2024-11-26 21:07:13.940997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.084 qpair failed and we were unable to recover it.
00:26:23.084 [2024-11-26 21:07:13.941130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.084 [2024-11-26 21:07:13.941157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.084 qpair failed and we were unable to recover it.
00:26:23.084 [2024-11-26 21:07:13.941292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.084 [2024-11-26 21:07:13.941318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.084 qpair failed and we were unable to recover it.
00:26:23.084 [2024-11-26 21:07:13.941457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.084 [2024-11-26 21:07:13.941483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.084 qpair failed and we were unable to recover it.
00:26:23.084 [2024-11-26 21:07:13.941596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.084 [2024-11-26 21:07:13.941621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.084 qpair failed and we were unable to recover it.
00:26:23.084 [2024-11-26 21:07:13.941731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.084 [2024-11-26 21:07:13.941759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.084 qpair failed and we were unable to recover it.
00:26:23.084 [2024-11-26 21:07:13.941896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.084 [2024-11-26 21:07:13.941922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.084 qpair failed and we were unable to recover it.
00:26:23.084 [2024-11-26 21:07:13.942075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.084 [2024-11-26 21:07:13.942101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.084 qpair failed and we were unable to recover it.
00:26:23.084 [2024-11-26 21:07:13.942247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.084 [2024-11-26 21:07:13.942272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.084 qpair failed and we were unable to recover it.
00:26:23.084 [2024-11-26 21:07:13.942381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.084 [2024-11-26 21:07:13.942408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.084 qpair failed and we were unable to recover it.
00:26:23.084 [2024-11-26 21:07:13.942546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.084 [2024-11-26 21:07:13.942572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.084 qpair failed and we were unable to recover it.
00:26:23.084 [2024-11-26 21:07:13.942705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.084 [2024-11-26 21:07:13.942733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.084 qpair failed and we were unable to recover it.
00:26:23.084 [2024-11-26 21:07:13.942846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.084 [2024-11-26 21:07:13.942872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.084 qpair failed and we were unable to recover it.
00:26:23.084 [2024-11-26 21:07:13.943013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.084 [2024-11-26 21:07:13.943039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.084 qpair failed and we were unable to recover it.
00:26:23.084 [2024-11-26 21:07:13.943201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.084 [2024-11-26 21:07:13.943228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.084 qpair failed and we were unable to recover it.
00:26:23.084 [2024-11-26 21:07:13.943346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.084 [2024-11-26 21:07:13.943373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.084 qpair failed and we were unable to recover it.
00:26:23.084 [2024-11-26 21:07:13.943512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.084 [2024-11-26 21:07:13.943538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.084 qpair failed and we were unable to recover it.
00:26:23.084 [2024-11-26 21:07:13.943693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.084 [2024-11-26 21:07:13.943721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.084 qpair failed and we were unable to recover it. 00:26:23.084 [2024-11-26 21:07:13.943863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.084 [2024-11-26 21:07:13.943889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.084 qpair failed and we were unable to recover it. 00:26:23.084 [2024-11-26 21:07:13.944002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.084 [2024-11-26 21:07:13.944028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.084 qpair failed and we were unable to recover it. 00:26:23.084 [2024-11-26 21:07:13.944134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.084 [2024-11-26 21:07:13.944160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.084 qpair failed and we were unable to recover it. 00:26:23.084 [2024-11-26 21:07:13.944269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.084 [2024-11-26 21:07:13.944295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.084 qpair failed and we were unable to recover it. 
00:26:23.084 [2024-11-26 21:07:13.944399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.084 [2024-11-26 21:07:13.944425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.084 qpair failed and we were unable to recover it. 00:26:23.084 [2024-11-26 21:07:13.944534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.084 [2024-11-26 21:07:13.944559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.084 qpair failed and we were unable to recover it. 00:26:23.084 [2024-11-26 21:07:13.944707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.084 [2024-11-26 21:07:13.944733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.084 qpair failed and we were unable to recover it. 00:26:23.084 [2024-11-26 21:07:13.944881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.084 [2024-11-26 21:07:13.944906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.084 qpair failed and we were unable to recover it. 00:26:23.084 [2024-11-26 21:07:13.945019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.085 [2024-11-26 21:07:13.945047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.085 qpair failed and we were unable to recover it. 
00:26:23.085 [2024-11-26 21:07:13.945183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.085 [2024-11-26 21:07:13.945210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.085 qpair failed and we were unable to recover it. 00:26:23.085 [2024-11-26 21:07:13.945375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.085 [2024-11-26 21:07:13.945401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.085 qpair failed and we were unable to recover it. 00:26:23.085 [2024-11-26 21:07:13.945536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.085 [2024-11-26 21:07:13.945563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.085 qpair failed and we were unable to recover it. 00:26:23.085 [2024-11-26 21:07:13.945665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.085 [2024-11-26 21:07:13.945696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.085 qpair failed and we were unable to recover it. 00:26:23.085 [2024-11-26 21:07:13.945834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.085 [2024-11-26 21:07:13.945859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.085 qpair failed and we were unable to recover it. 
00:26:23.085 [2024-11-26 21:07:13.945999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.085 [2024-11-26 21:07:13.946025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.085 qpair failed and we were unable to recover it. 00:26:23.085 [2024-11-26 21:07:13.946136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.085 [2024-11-26 21:07:13.946162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.085 qpair failed and we were unable to recover it. 00:26:23.085 [2024-11-26 21:07:13.946315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.085 [2024-11-26 21:07:13.946339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.085 qpair failed and we were unable to recover it. 00:26:23.085 [2024-11-26 21:07:13.946474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.085 [2024-11-26 21:07:13.946498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.085 qpair failed and we were unable to recover it. 00:26:23.085 [2024-11-26 21:07:13.946599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.085 [2024-11-26 21:07:13.946623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.085 qpair failed and we were unable to recover it. 
00:26:23.085 [2024-11-26 21:07:13.946727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.085 [2024-11-26 21:07:13.946752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.085 qpair failed and we were unable to recover it. 00:26:23.085 [2024-11-26 21:07:13.946888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.085 [2024-11-26 21:07:13.946912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.085 qpair failed and we were unable to recover it. 00:26:23.085 [2024-11-26 21:07:13.947045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.085 [2024-11-26 21:07:13.947070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.085 qpair failed and we were unable to recover it. 00:26:23.085 [2024-11-26 21:07:13.947192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.085 [2024-11-26 21:07:13.947216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.085 qpair failed and we were unable to recover it. 00:26:23.085 [2024-11-26 21:07:13.947350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.085 [2024-11-26 21:07:13.947374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.085 qpair failed and we were unable to recover it. 
00:26:23.085 [2024-11-26 21:07:13.947485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.085 [2024-11-26 21:07:13.947509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.085 qpair failed and we were unable to recover it. 00:26:23.085 [2024-11-26 21:07:13.947634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.085 [2024-11-26 21:07:13.947658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.085 qpair failed and we were unable to recover it. 00:26:23.085 [2024-11-26 21:07:13.947778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.085 [2024-11-26 21:07:13.947805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.085 qpair failed and we were unable to recover it. 00:26:23.085 [2024-11-26 21:07:13.947933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.085 [2024-11-26 21:07:13.947958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.085 qpair failed and we were unable to recover it. 00:26:23.085 [2024-11-26 21:07:13.948059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.085 [2024-11-26 21:07:13.948085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.085 qpair failed and we were unable to recover it. 
00:26:23.085 [2024-11-26 21:07:13.948221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.085 [2024-11-26 21:07:13.948246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.085 qpair failed and we were unable to recover it. 00:26:23.085 [2024-11-26 21:07:13.948347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.085 [2024-11-26 21:07:13.948371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.085 qpair failed and we were unable to recover it. 00:26:23.085 [2024-11-26 21:07:13.948478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.085 [2024-11-26 21:07:13.948503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.085 qpair failed and we were unable to recover it. 00:26:23.085 [2024-11-26 21:07:13.948615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.085 [2024-11-26 21:07:13.948640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.085 qpair failed and we were unable to recover it. 00:26:23.085 [2024-11-26 21:07:13.948781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.085 [2024-11-26 21:07:13.948807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.085 qpair failed and we were unable to recover it. 
00:26:23.085 [2024-11-26 21:07:13.948922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.085 [2024-11-26 21:07:13.948948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.085 qpair failed and we were unable to recover it. 00:26:23.085 [2024-11-26 21:07:13.949089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.085 [2024-11-26 21:07:13.949115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.085 qpair failed and we were unable to recover it. 00:26:23.085 [2024-11-26 21:07:13.949232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.085 [2024-11-26 21:07:13.949257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.085 qpair failed and we were unable to recover it. 00:26:23.085 [2024-11-26 21:07:13.949388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.085 [2024-11-26 21:07:13.949412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.085 qpair failed and we were unable to recover it. 00:26:23.085 [2024-11-26 21:07:13.949545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.085 [2024-11-26 21:07:13.949569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.085 qpair failed and we were unable to recover it. 
00:26:23.085 [2024-11-26 21:07:13.949668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.085 [2024-11-26 21:07:13.949700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.085 qpair failed and we were unable to recover it. 00:26:23.085 [2024-11-26 21:07:13.949839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.085 [2024-11-26 21:07:13.949864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.085 qpair failed and we were unable to recover it. 00:26:23.086 [2024-11-26 21:07:13.949995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.086 [2024-11-26 21:07:13.950020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.086 qpair failed and we were unable to recover it. 00:26:23.086 [2024-11-26 21:07:13.950188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.086 [2024-11-26 21:07:13.950213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.086 qpair failed and we were unable to recover it. 00:26:23.086 [2024-11-26 21:07:13.950344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.086 [2024-11-26 21:07:13.950368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.086 qpair failed and we were unable to recover it. 
00:26:23.086 [2024-11-26 21:07:13.950480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.086 [2024-11-26 21:07:13.950506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.086 qpair failed and we were unable to recover it. 00:26:23.086 [2024-11-26 21:07:13.950617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.086 [2024-11-26 21:07:13.950642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.086 qpair failed and we were unable to recover it. 00:26:23.086 [2024-11-26 21:07:13.950759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.086 [2024-11-26 21:07:13.950784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.086 qpair failed and we were unable to recover it. 00:26:23.086 [2024-11-26 21:07:13.950915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.086 [2024-11-26 21:07:13.950939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.086 qpair failed and we were unable to recover it. 00:26:23.086 [2024-11-26 21:07:13.951073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.086 [2024-11-26 21:07:13.951102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.086 qpair failed and we were unable to recover it. 
00:26:23.086 [2024-11-26 21:07:13.951238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.086 [2024-11-26 21:07:13.951263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.086 qpair failed and we were unable to recover it. 00:26:23.086 [2024-11-26 21:07:13.951405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.086 [2024-11-26 21:07:13.951433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.086 qpair failed and we were unable to recover it. 00:26:23.086 [2024-11-26 21:07:13.951577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.086 [2024-11-26 21:07:13.951601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.086 qpair failed and we were unable to recover it. 00:26:23.086 [2024-11-26 21:07:13.951709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.086 [2024-11-26 21:07:13.951734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.086 qpair failed and we were unable to recover it. 00:26:23.086 [2024-11-26 21:07:13.951838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.086 [2024-11-26 21:07:13.951863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.086 qpair failed and we were unable to recover it. 
00:26:23.086 [2024-11-26 21:07:13.951965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.086 [2024-11-26 21:07:13.951990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.086 qpair failed and we were unable to recover it. 00:26:23.086 [2024-11-26 21:07:13.952098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.086 [2024-11-26 21:07:13.952123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.086 qpair failed and we were unable to recover it. 00:26:23.086 [2024-11-26 21:07:13.952234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.086 [2024-11-26 21:07:13.952258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.086 qpair failed and we were unable to recover it. 00:26:23.086 [2024-11-26 21:07:13.952354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.086 [2024-11-26 21:07:13.952379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.086 qpair failed and we were unable to recover it. 00:26:23.086 [2024-11-26 21:07:13.952498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.086 [2024-11-26 21:07:13.952522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.086 qpair failed and we were unable to recover it. 
00:26:23.086 [2024-11-26 21:07:13.952636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.086 [2024-11-26 21:07:13.952660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.086 qpair failed and we were unable to recover it. 00:26:23.086 [2024-11-26 21:07:13.952771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.086 [2024-11-26 21:07:13.952797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.086 qpair failed and we were unable to recover it. 00:26:23.086 [2024-11-26 21:07:13.952931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.086 [2024-11-26 21:07:13.952957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.086 qpair failed and we were unable to recover it. 00:26:23.086 [2024-11-26 21:07:13.953078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.086 [2024-11-26 21:07:13.953102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.086 qpair failed and we were unable to recover it. 00:26:23.086 [2024-11-26 21:07:13.953237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.086 [2024-11-26 21:07:13.953261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.086 qpair failed and we were unable to recover it. 
00:26:23.086 [2024-11-26 21:07:13.953366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.086 [2024-11-26 21:07:13.953391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.086 qpair failed and we were unable to recover it. 00:26:23.086 [2024-11-26 21:07:13.953500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.086 [2024-11-26 21:07:13.953525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.086 qpair failed and we were unable to recover it. 00:26:23.086 [2024-11-26 21:07:13.953632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.086 [2024-11-26 21:07:13.953658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.086 qpair failed and we were unable to recover it. 00:26:23.086 [2024-11-26 21:07:13.953775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.086 [2024-11-26 21:07:13.953801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.086 qpair failed and we were unable to recover it. 00:26:23.086 [2024-11-26 21:07:13.953900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.086 [2024-11-26 21:07:13.953925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.086 qpair failed and we were unable to recover it. 
00:26:23.086 [2024-11-26 21:07:13.954035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.086 [2024-11-26 21:07:13.954059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.086 qpair failed and we were unable to recover it. 00:26:23.086 [2024-11-26 21:07:13.954195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.086 [2024-11-26 21:07:13.954221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.086 qpair failed and we were unable to recover it. 00:26:23.086 [2024-11-26 21:07:13.954326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.086 [2024-11-26 21:07:13.954351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.086 qpair failed and we were unable to recover it. 00:26:23.086 [2024-11-26 21:07:13.954449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.086 [2024-11-26 21:07:13.954474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.086 qpair failed and we were unable to recover it. 00:26:23.086 [2024-11-26 21:07:13.954583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.086 [2024-11-26 21:07:13.954608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.086 qpair failed and we were unable to recover it. 
00:26:23.086 [2024-11-26 21:07:13.954754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.086 [2024-11-26 21:07:13.954780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.087 qpair failed and we were unable to recover it.
00:26:23.087 [2024-11-26 21:07:13.954914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.087 [2024-11-26 21:07:13.954942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.087 qpair failed and we were unable to recover it.
00:26:23.087 [2024-11-26 21:07:13.955046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.087 [2024-11-26 21:07:13.955071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.087 qpair failed and we were unable to recover it.
00:26:23.087 [2024-11-26 21:07:13.955205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.087 [2024-11-26 21:07:13.955230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.087 qpair failed and we were unable to recover it.
00:26:23.087 [2024-11-26 21:07:13.955346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.087 [2024-11-26 21:07:13.955371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.087 qpair failed and we were unable to recover it.
00:26:23.087 [2024-11-26 21:07:13.955483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.087 [2024-11-26 21:07:13.955507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.087 qpair failed and we were unable to recover it.
00:26:23.087 [2024-11-26 21:07:13.955647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.087 [2024-11-26 21:07:13.955671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.087 qpair failed and we were unable to recover it.
00:26:23.087 [2024-11-26 21:07:13.955792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.087 [2024-11-26 21:07:13.955817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.087 qpair failed and we were unable to recover it.
00:26:23.087 [2024-11-26 21:07:13.955918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.087 [2024-11-26 21:07:13.955942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.087 qpair failed and we were unable to recover it.
00:26:23.087 [2024-11-26 21:07:13.956067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.087 [2024-11-26 21:07:13.956091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.087 qpair failed and we were unable to recover it.
00:26:23.087 [2024-11-26 21:07:13.956238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.087 [2024-11-26 21:07:13.956263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.087 qpair failed and we were unable to recover it.
00:26:23.087 [2024-11-26 21:07:13.956364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.087 [2024-11-26 21:07:13.956388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.087 qpair failed and we were unable to recover it.
00:26:23.087 [2024-11-26 21:07:13.956518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.087 [2024-11-26 21:07:13.956550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.087 qpair failed and we were unable to recover it.
00:26:23.087 [2024-11-26 21:07:13.956674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.087 [2024-11-26 21:07:13.956707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.087 qpair failed and we were unable to recover it.
00:26:23.087 [2024-11-26 21:07:13.956815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.087 [2024-11-26 21:07:13.956840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.087 qpair failed and we were unable to recover it.
00:26:23.087 [2024-11-26 21:07:13.956994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.087 [2024-11-26 21:07:13.957031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.087 qpair failed and we were unable to recover it.
00:26:23.087 [2024-11-26 21:07:13.957183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.087 [2024-11-26 21:07:13.957210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.087 qpair failed and we were unable to recover it.
00:26:23.087 [2024-11-26 21:07:13.957344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.087 [2024-11-26 21:07:13.957369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.087 qpair failed and we were unable to recover it.
00:26:23.087 [2024-11-26 21:07:13.957467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.087 [2024-11-26 21:07:13.957492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.087 qpair failed and we were unable to recover it.
00:26:23.087 [2024-11-26 21:07:13.957609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.087 [2024-11-26 21:07:13.957634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.087 qpair failed and we were unable to recover it.
00:26:23.087 [2024-11-26 21:07:13.957754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.087 [2024-11-26 21:07:13.957779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.087 qpair failed and we were unable to recover it.
00:26:23.087 [2024-11-26 21:07:13.957899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.087 [2024-11-26 21:07:13.957926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.087 qpair failed and we were unable to recover it.
00:26:23.087 [2024-11-26 21:07:13.958066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.087 [2024-11-26 21:07:13.958091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.087 qpair failed and we were unable to recover it.
00:26:23.087 [2024-11-26 21:07:13.958250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.087 [2024-11-26 21:07:13.958275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.087 qpair failed and we were unable to recover it.
00:26:23.087 [2024-11-26 21:07:13.958437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.087 [2024-11-26 21:07:13.958463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.087 qpair failed and we were unable to recover it.
00:26:23.087 [2024-11-26 21:07:13.958598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.087 [2024-11-26 21:07:13.958624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.087 qpair failed and we were unable to recover it.
00:26:23.087 [2024-11-26 21:07:13.958761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.087 [2024-11-26 21:07:13.958787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.087 qpair failed and we were unable to recover it.
00:26:23.087 [2024-11-26 21:07:13.958927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.087 [2024-11-26 21:07:13.958952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.087 qpair failed and we were unable to recover it.
00:26:23.087 [2024-11-26 21:07:13.959114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.087 [2024-11-26 21:07:13.959140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.087 qpair failed and we were unable to recover it.
00:26:23.087 [2024-11-26 21:07:13.959265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.087 [2024-11-26 21:07:13.959291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.087 qpair failed and we were unable to recover it.
00:26:23.087 [2024-11-26 21:07:13.959420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.087 [2024-11-26 21:07:13.959446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.087 qpair failed and we were unable to recover it.
00:26:23.087 [2024-11-26 21:07:13.959607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.088 [2024-11-26 21:07:13.959634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.088 qpair failed and we were unable to recover it.
00:26:23.088 [2024-11-26 21:07:13.959752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.088 [2024-11-26 21:07:13.959778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.088 qpair failed and we were unable to recover it.
00:26:23.088 [2024-11-26 21:07:13.959890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.088 [2024-11-26 21:07:13.959916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.088 qpair failed and we were unable to recover it.
00:26:23.088 [2024-11-26 21:07:13.960053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.088 [2024-11-26 21:07:13.960085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.088 qpair failed and we were unable to recover it.
00:26:23.088 [2024-11-26 21:07:13.960203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.088 [2024-11-26 21:07:13.960228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.088 qpair failed and we were unable to recover it.
00:26:23.088 [2024-11-26 21:07:13.960323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.088 [2024-11-26 21:07:13.960348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.088 qpair failed and we were unable to recover it.
00:26:23.088 [2024-11-26 21:07:13.960486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.088 [2024-11-26 21:07:13.960511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.088 qpair failed and we were unable to recover it.
00:26:23.088 [2024-11-26 21:07:13.960629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.088 [2024-11-26 21:07:13.960654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.088 qpair failed and we were unable to recover it.
00:26:23.088 [2024-11-26 21:07:13.960796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.088 [2024-11-26 21:07:13.960822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.088 qpair failed and we were unable to recover it.
00:26:23.088 [2024-11-26 21:07:13.960929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.088 [2024-11-26 21:07:13.960955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.088 qpair failed and we were unable to recover it.
00:26:23.088 [2024-11-26 21:07:13.961089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.088 [2024-11-26 21:07:13.961113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.088 qpair failed and we were unable to recover it.
00:26:23.088 [2024-11-26 21:07:13.961254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.088 [2024-11-26 21:07:13.961283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.088 qpair failed and we were unable to recover it.
00:26:23.088 [2024-11-26 21:07:13.961396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.088 [2024-11-26 21:07:13.961422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.088 qpair failed and we were unable to recover it.
00:26:23.088 [2024-11-26 21:07:13.961523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.088 [2024-11-26 21:07:13.961547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.088 qpair failed and we were unable to recover it.
00:26:23.088 [2024-11-26 21:07:13.961660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.088 [2024-11-26 21:07:13.961691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.088 qpair failed and we were unable to recover it.
00:26:23.088 [2024-11-26 21:07:13.961849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.088 [2024-11-26 21:07:13.961874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.088 qpair failed and we were unable to recover it.
00:26:23.088 [2024-11-26 21:07:13.961987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.088 [2024-11-26 21:07:13.962011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.088 qpair failed and we were unable to recover it.
00:26:23.088 [2024-11-26 21:07:13.962169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.088 [2024-11-26 21:07:13.962194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.088 qpair failed and we were unable to recover it.
00:26:23.088 [2024-11-26 21:07:13.962303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.088 [2024-11-26 21:07:13.962327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.088 qpair failed and we were unable to recover it.
00:26:23.088 [2024-11-26 21:07:13.962441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.088 [2024-11-26 21:07:13.962471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.088 qpair failed and we were unable to recover it.
00:26:23.088 [2024-11-26 21:07:13.962609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.088 [2024-11-26 21:07:13.962634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.088 qpair failed and we were unable to recover it.
00:26:23.088 [2024-11-26 21:07:13.962742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.088 [2024-11-26 21:07:13.962767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.088 qpair failed and we were unable to recover it.
00:26:23.088 [2024-11-26 21:07:13.962905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.088 [2024-11-26 21:07:13.962929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.088 qpair failed and we were unable to recover it.
00:26:23.088 [2024-11-26 21:07:13.963042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.088 [2024-11-26 21:07:13.963067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.088 qpair failed and we were unable to recover it.
00:26:23.088 [2024-11-26 21:07:13.963176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.088 [2024-11-26 21:07:13.963201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.088 qpair failed and we were unable to recover it.
00:26:23.088 [2024-11-26 21:07:13.963308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.088 [2024-11-26 21:07:13.963332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.088 qpair failed and we were unable to recover it.
00:26:23.088 [2024-11-26 21:07:13.963441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.088 [2024-11-26 21:07:13.963466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.088 qpair failed and we were unable to recover it.
00:26:23.088 [2024-11-26 21:07:13.963578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.088 [2024-11-26 21:07:13.963602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.088 qpair failed and we were unable to recover it.
00:26:23.089 [2024-11-26 21:07:13.963712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.089 [2024-11-26 21:07:13.963737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.089 qpair failed and we were unable to recover it.
00:26:23.089 [2024-11-26 21:07:13.963843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.089 [2024-11-26 21:07:13.963867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.089 qpair failed and we were unable to recover it.
00:26:23.089 [2024-11-26 21:07:13.963968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.089 [2024-11-26 21:07:13.963993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.089 qpair failed and we were unable to recover it.
00:26:23.089 [2024-11-26 21:07:13.964114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.089 [2024-11-26 21:07:13.964138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.089 qpair failed and we were unable to recover it.
00:26:23.089 [2024-11-26 21:07:13.964269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.089 [2024-11-26 21:07:13.964294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.089 qpair failed and we were unable to recover it.
00:26:23.089 [2024-11-26 21:07:13.964398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.089 [2024-11-26 21:07:13.964423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.089 qpair failed and we were unable to recover it.
00:26:23.089 [2024-11-26 21:07:13.964534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.089 [2024-11-26 21:07:13.964558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.089 qpair failed and we were unable to recover it.
00:26:23.089 [2024-11-26 21:07:13.964661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.089 [2024-11-26 21:07:13.964698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.089 qpair failed and we were unable to recover it.
00:26:23.089 [2024-11-26 21:07:13.964817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.089 [2024-11-26 21:07:13.964842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.089 qpair failed and we were unable to recover it.
00:26:23.089 [2024-11-26 21:07:13.964948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.089 [2024-11-26 21:07:13.964973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.089 qpair failed and we were unable to recover it.
00:26:23.089 [2024-11-26 21:07:13.965104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.089 [2024-11-26 21:07:13.965133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.089 qpair failed and we were unable to recover it.
00:26:23.089 [2024-11-26 21:07:13.965242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.089 [2024-11-26 21:07:13.965267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.089 qpair failed and we were unable to recover it.
00:26:23.089 [2024-11-26 21:07:13.965372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.089 [2024-11-26 21:07:13.965396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.089 qpair failed and we were unable to recover it.
00:26:23.089 [2024-11-26 21:07:13.965533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.089 [2024-11-26 21:07:13.965557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.089 qpair failed and we were unable to recover it.
00:26:23.089 [2024-11-26 21:07:13.965716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.089 [2024-11-26 21:07:13.965741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.089 qpair failed and we were unable to recover it.
00:26:23.089 [2024-11-26 21:07:13.965857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.089 [2024-11-26 21:07:13.965881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.089 qpair failed and we were unable to recover it.
00:26:23.089 [2024-11-26 21:07:13.966005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.089 [2024-11-26 21:07:13.966030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.089 qpair failed and we were unable to recover it.
00:26:23.089 [2024-11-26 21:07:13.966141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.089 [2024-11-26 21:07:13.966166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.089 qpair failed and we were unable to recover it.
00:26:23.089 [2024-11-26 21:07:13.966285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.672 [2024-11-26 21:07:14.373777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.672 qpair failed and we were unable to recover it.
00:26:23.672 [2024-11-26 21:07:14.377703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.672 [2024-11-26 21:07:14.377762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.672 qpair failed and we were unable to recover it.
00:26:23.672 [2024-11-26 21:07:14.377979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.672 [2024-11-26 21:07:14.378010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.672 qpair failed and we were unable to recover it.
00:26:23.672 [2024-11-26 21:07:14.378238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.672 [2024-11-26 21:07:14.378266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.672 qpair failed and we were unable to recover it.
00:26:23.672 [2024-11-26 21:07:14.378409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.672 [2024-11-26 21:07:14.378454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.672 qpair failed and we were unable to recover it.
00:26:23.672 [2024-11-26 21:07:14.378616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.672 [2024-11-26 21:07:14.378646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.672 qpair failed and we were unable to recover it.
00:26:23.672 [2024-11-26 21:07:14.378804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.672 [2024-11-26 21:07:14.378846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.672 qpair failed and we were unable to recover it.
00:26:23.672 [2024-11-26 21:07:14.379016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.672 [2024-11-26 21:07:14.379058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.672 qpair failed and we were unable to recover it.
00:26:23.672 [2024-11-26 21:07:14.379286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.672 [2024-11-26 21:07:14.379312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.672 qpair failed and we were unable to recover it.
00:26:23.672 [2024-11-26 21:07:14.379430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.672 [2024-11-26 21:07:14.379457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.672 qpair failed and we were unable to recover it.
00:26:23.672 [2024-11-26 21:07:14.379607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.672 [2024-11-26 21:07:14.379633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.672 qpair failed and we were unable to recover it.
00:26:23.672 [2024-11-26 21:07:14.379805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.672 [2024-11-26 21:07:14.379833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.672 qpair failed and we were unable to recover it.
00:26:23.672 [2024-11-26 21:07:14.379979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.672 [2024-11-26 21:07:14.380023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.672 qpair failed and we were unable to recover it.
00:26:23.672 [2024-11-26 21:07:14.380166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.672 [2024-11-26 21:07:14.380193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.672 qpair failed and we were unable to recover it.
00:26:23.672 [2024-11-26 21:07:14.380333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.672 [2024-11-26 21:07:14.380377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.672 qpair failed and we were unable to recover it.
00:26:23.672 [2024-11-26 21:07:14.380527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.672 [2024-11-26 21:07:14.380554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.672 qpair failed and we were unable to recover it.
00:26:23.672 [2024-11-26 21:07:14.380700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.672 [2024-11-26 21:07:14.380727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.672 qpair failed and we were unable to recover it.
00:26:23.672 [2024-11-26 21:07:14.380834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.672 [2024-11-26 21:07:14.380861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.672 qpair failed and we were unable to recover it.
00:26:23.672 [2024-11-26 21:07:14.380993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.672 [2024-11-26 21:07:14.381020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.672 qpair failed and we were unable to recover it.
00:26:23.673 [2024-11-26 21:07:14.381248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.673 [2024-11-26 21:07:14.381279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.673 qpair failed and we were unable to recover it.
00:26:23.673 [2024-11-26 21:07:14.381456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.673 [2024-11-26 21:07:14.381486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.673 qpair failed and we were unable to recover it.
00:26:23.673 [2024-11-26 21:07:14.381648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.673 [2024-11-26 21:07:14.381676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.673 qpair failed and we were unable to recover it.
00:26:23.673 [2024-11-26 21:07:14.381799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.673 [2024-11-26 21:07:14.381827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.673 qpair failed and we were unable to recover it.
00:26:23.673 [2024-11-26 21:07:14.381938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.673 [2024-11-26 21:07:14.381981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.673 qpair failed and we were unable to recover it.
00:26:23.673 [2024-11-26 21:07:14.382141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.673 [2024-11-26 21:07:14.382168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.673 qpair failed and we were unable to recover it.
00:26:23.673 [2024-11-26 21:07:14.382280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.673 [2024-11-26 21:07:14.382307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.673 qpair failed and we were unable to recover it.
00:26:23.673 [2024-11-26 21:07:14.382493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.673 [2024-11-26 21:07:14.382523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.673 qpair failed and we were unable to recover it.
00:26:23.673 [2024-11-26 21:07:14.382709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.673 [2024-11-26 21:07:14.382737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.673 qpair failed and we were unable to recover it.
00:26:23.673 [2024-11-26 21:07:14.382850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.673 [2024-11-26 21:07:14.382877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.673 qpair failed and we were unable to recover it.
00:26:23.673 [2024-11-26 21:07:14.383022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.673 [2024-11-26 21:07:14.383048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.673 qpair failed and we were unable to recover it.
00:26:23.673 [2024-11-26 21:07:14.383256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.673 [2024-11-26 21:07:14.383283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.673 qpair failed and we were unable to recover it.
00:26:23.673 [2024-11-26 21:07:14.383435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.673 [2024-11-26 21:07:14.383476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.673 qpair failed and we were unable to recover it.
00:26:23.673 [2024-11-26 21:07:14.383624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.673 [2024-11-26 21:07:14.383652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.673 qpair failed and we were unable to recover it.
00:26:23.673 [2024-11-26 21:07:14.383819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.673 [2024-11-26 21:07:14.383847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.673 qpair failed and we were unable to recover it.
00:26:23.673 [2024-11-26 21:07:14.383966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.673 [2024-11-26 21:07:14.384009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.673 qpair failed and we were unable to recover it.
00:26:23.673 [2024-11-26 21:07:14.384185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.673 [2024-11-26 21:07:14.384216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.673 qpair failed and we were unable to recover it.
00:26:23.673 [2024-11-26 21:07:14.384352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.673 [2024-11-26 21:07:14.384380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.673 qpair failed and we were unable to recover it.
00:26:23.673 [2024-11-26 21:07:14.384568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.673 [2024-11-26 21:07:14.384598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.673 qpair failed and we were unable to recover it.
00:26:23.673 [2024-11-26 21:07:14.384753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.673 [2024-11-26 21:07:14.384780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.673 qpair failed and we were unable to recover it. 00:26:23.673 [2024-11-26 21:07:14.384907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.673 [2024-11-26 21:07:14.384934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.673 qpair failed and we were unable to recover it. 00:26:23.673 [2024-11-26 21:07:14.385073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.673 [2024-11-26 21:07:14.385099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.673 qpair failed and we were unable to recover it. 00:26:23.673 [2024-11-26 21:07:14.385235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.673 [2024-11-26 21:07:14.385262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.673 qpair failed and we were unable to recover it. 00:26:23.673 [2024-11-26 21:07:14.385438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.673 [2024-11-26 21:07:14.385465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.673 qpair failed and we were unable to recover it. 
00:26:23.673 [2024-11-26 21:07:14.385583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.673 [2024-11-26 21:07:14.385610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.673 qpair failed and we were unable to recover it. 00:26:23.673 [2024-11-26 21:07:14.385745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.673 [2024-11-26 21:07:14.385773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.673 qpair failed and we were unable to recover it. 00:26:23.673 [2024-11-26 21:07:14.385909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.673 [2024-11-26 21:07:14.385936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.673 qpair failed and we were unable to recover it. 00:26:23.673 [2024-11-26 21:07:14.386049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.673 [2024-11-26 21:07:14.386098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.673 qpair failed and we were unable to recover it. 00:26:23.673 [2024-11-26 21:07:14.386213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.673 [2024-11-26 21:07:14.386242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.673 qpair failed and we were unable to recover it. 
00:26:23.673 [2024-11-26 21:07:14.386394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.673 [2024-11-26 21:07:14.386421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.673 qpair failed and we were unable to recover it. 00:26:23.673 [2024-11-26 21:07:14.386559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.673 [2024-11-26 21:07:14.386603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.673 qpair failed and we were unable to recover it. 00:26:23.673 [2024-11-26 21:07:14.386791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.673 [2024-11-26 21:07:14.386818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.673 qpair failed and we were unable to recover it. 00:26:23.673 [2024-11-26 21:07:14.386930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.673 [2024-11-26 21:07:14.386958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.673 qpair failed and we were unable to recover it. 00:26:23.673 [2024-11-26 21:07:14.387094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.673 [2024-11-26 21:07:14.387121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.673 qpair failed and we were unable to recover it. 
00:26:23.673 [2024-11-26 21:07:14.387282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.673 [2024-11-26 21:07:14.387308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.673 qpair failed and we were unable to recover it. 00:26:23.673 [2024-11-26 21:07:14.387467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.673 [2024-11-26 21:07:14.387494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.673 qpair failed and we were unable to recover it. 00:26:23.673 [2024-11-26 21:07:14.387621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.673 [2024-11-26 21:07:14.387648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.673 qpair failed and we were unable to recover it. 00:26:23.673 [2024-11-26 21:07:14.387797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.673 [2024-11-26 21:07:14.387824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.673 qpair failed and we were unable to recover it. 00:26:23.674 [2024-11-26 21:07:14.387962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.674 [2024-11-26 21:07:14.387989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.674 qpair failed and we were unable to recover it. 
00:26:23.674 [2024-11-26 21:07:14.388094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.674 [2024-11-26 21:07:14.388122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.674 qpair failed and we were unable to recover it. 00:26:23.674 [2024-11-26 21:07:14.388304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.674 [2024-11-26 21:07:14.388334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.674 qpair failed and we were unable to recover it. 00:26:23.674 [2024-11-26 21:07:14.388474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.674 [2024-11-26 21:07:14.388501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.674 qpair failed and we were unable to recover it. 00:26:23.674 [2024-11-26 21:07:14.388681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.674 [2024-11-26 21:07:14.388719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.674 qpair failed and we were unable to recover it. 00:26:23.674 [2024-11-26 21:07:14.388869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.674 [2024-11-26 21:07:14.388896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.674 qpair failed and we were unable to recover it. 
00:26:23.674 [2024-11-26 21:07:14.389056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.674 [2024-11-26 21:07:14.389083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.674 qpair failed and we were unable to recover it. 00:26:23.674 [2024-11-26 21:07:14.389235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.674 [2024-11-26 21:07:14.389264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.674 qpair failed and we were unable to recover it. 00:26:23.674 [2024-11-26 21:07:14.389427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.674 [2024-11-26 21:07:14.389454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.674 qpair failed and we were unable to recover it. 00:26:23.674 [2024-11-26 21:07:14.389640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.674 [2024-11-26 21:07:14.389667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.674 qpair failed and we were unable to recover it. 00:26:23.674 [2024-11-26 21:07:14.389787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.674 [2024-11-26 21:07:14.389814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.674 qpair failed and we were unable to recover it. 
00:26:23.674 [2024-11-26 21:07:14.389961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.674 [2024-11-26 21:07:14.389987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.674 qpair failed and we were unable to recover it. 00:26:23.674 [2024-11-26 21:07:14.390203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.674 [2024-11-26 21:07:14.390228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.674 qpair failed and we were unable to recover it. 00:26:23.674 [2024-11-26 21:07:14.390378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.674 [2024-11-26 21:07:14.390421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.674 qpair failed and we were unable to recover it. 00:26:23.674 [2024-11-26 21:07:14.390572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.674 [2024-11-26 21:07:14.390601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.674 qpair failed and we were unable to recover it. 00:26:23.674 [2024-11-26 21:07:14.390740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.674 [2024-11-26 21:07:14.390767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.674 qpair failed and we were unable to recover it. 
00:26:23.674 [2024-11-26 21:07:14.390887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.674 [2024-11-26 21:07:14.390918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.674 qpair failed and we were unable to recover it. 00:26:23.674 [2024-11-26 21:07:14.391074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.674 [2024-11-26 21:07:14.391103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.674 qpair failed and we were unable to recover it. 00:26:23.674 [2024-11-26 21:07:14.391256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.674 [2024-11-26 21:07:14.391283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.674 qpair failed and we were unable to recover it. 00:26:23.674 [2024-11-26 21:07:14.391473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.674 [2024-11-26 21:07:14.391514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.674 qpair failed and we were unable to recover it. 00:26:23.674 [2024-11-26 21:07:14.391638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.674 [2024-11-26 21:07:14.391665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.674 qpair failed and we were unable to recover it. 
00:26:23.674 [2024-11-26 21:07:14.391831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.674 [2024-11-26 21:07:14.391857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.674 qpair failed and we were unable to recover it. 00:26:23.674 [2024-11-26 21:07:14.391993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.674 [2024-11-26 21:07:14.392019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.674 qpair failed and we were unable to recover it. 00:26:23.674 [2024-11-26 21:07:14.392154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.674 [2024-11-26 21:07:14.392181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.674 qpair failed and we were unable to recover it. 00:26:23.674 [2024-11-26 21:07:14.392320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.674 [2024-11-26 21:07:14.392347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.674 qpair failed and we were unable to recover it. 00:26:23.674 [2024-11-26 21:07:14.392559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.674 [2024-11-26 21:07:14.392588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.674 qpair failed and we were unable to recover it. 
00:26:23.674 [2024-11-26 21:07:14.392725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.674 [2024-11-26 21:07:14.392753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.674 qpair failed and we were unable to recover it. 00:26:23.674 [2024-11-26 21:07:14.392892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.674 [2024-11-26 21:07:14.392919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.674 qpair failed and we were unable to recover it. 00:26:23.674 [2024-11-26 21:07:14.393097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.674 [2024-11-26 21:07:14.393123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.674 qpair failed and we were unable to recover it. 00:26:23.674 [2024-11-26 21:07:14.393264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.674 [2024-11-26 21:07:14.393309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.674 qpair failed and we were unable to recover it. 00:26:23.674 [2024-11-26 21:07:14.393466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.674 [2024-11-26 21:07:14.393493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.674 qpair failed and we were unable to recover it. 
00:26:23.674 [2024-11-26 21:07:14.393599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.674 [2024-11-26 21:07:14.393626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.674 qpair failed and we were unable to recover it. 00:26:23.674 [2024-11-26 21:07:14.393789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.674 [2024-11-26 21:07:14.393819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.674 qpair failed and we were unable to recover it. 00:26:23.674 [2024-11-26 21:07:14.393952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.674 [2024-11-26 21:07:14.393978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.674 qpair failed and we were unable to recover it. 00:26:23.674 [2024-11-26 21:07:14.394112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.674 [2024-11-26 21:07:14.394139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.674 qpair failed and we were unable to recover it. 00:26:23.674 [2024-11-26 21:07:14.394302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.674 [2024-11-26 21:07:14.394328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.674 qpair failed and we were unable to recover it. 
00:26:23.674 [2024-11-26 21:07:14.394439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.674 [2024-11-26 21:07:14.394466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.674 qpair failed and we were unable to recover it. 00:26:23.674 [2024-11-26 21:07:14.394592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.674 [2024-11-26 21:07:14.394619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.674 qpair failed and we were unable to recover it. 00:26:23.674 [2024-11-26 21:07:14.394818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.674 [2024-11-26 21:07:14.394845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.675 qpair failed and we were unable to recover it. 00:26:23.675 [2024-11-26 21:07:14.394986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.675 [2024-11-26 21:07:14.395012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.675 qpair failed and we were unable to recover it. 00:26:23.675 [2024-11-26 21:07:14.395122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.675 [2024-11-26 21:07:14.395149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.675 qpair failed and we were unable to recover it. 
00:26:23.675 [2024-11-26 21:07:14.395304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.675 [2024-11-26 21:07:14.395333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.675 qpair failed and we were unable to recover it. 00:26:23.675 [2024-11-26 21:07:14.395459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.675 [2024-11-26 21:07:14.395486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.675 qpair failed and we were unable to recover it. 00:26:23.675 [2024-11-26 21:07:14.395598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.675 [2024-11-26 21:07:14.395625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.675 qpair failed and we were unable to recover it. 00:26:23.675 [2024-11-26 21:07:14.395790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.675 [2024-11-26 21:07:14.395817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.675 qpair failed and we were unable to recover it. 00:26:23.675 [2024-11-26 21:07:14.395957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.675 [2024-11-26 21:07:14.395983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.675 qpair failed and we were unable to recover it. 
00:26:23.675 [2024-11-26 21:07:14.396112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.675 [2024-11-26 21:07:14.396138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.675 qpair failed and we were unable to recover it. 00:26:23.675 [2024-11-26 21:07:14.396324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.675 [2024-11-26 21:07:14.396353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.675 qpair failed and we were unable to recover it. 00:26:23.675 [2024-11-26 21:07:14.396508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.675 [2024-11-26 21:07:14.396534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.675 qpair failed and we were unable to recover it. 00:26:23.675 [2024-11-26 21:07:14.396644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.675 [2024-11-26 21:07:14.396670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.675 qpair failed and we were unable to recover it. 00:26:23.675 [2024-11-26 21:07:14.396849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.675 [2024-11-26 21:07:14.396878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.675 qpair failed and we were unable to recover it. 
00:26:23.675 [2024-11-26 21:07:14.397012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.675 [2024-11-26 21:07:14.397039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.675 qpair failed and we were unable to recover it. 00:26:23.675 [2024-11-26 21:07:14.397207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.675 [2024-11-26 21:07:14.397252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.675 qpair failed and we were unable to recover it. 00:26:23.675 [2024-11-26 21:07:14.397423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.675 [2024-11-26 21:07:14.397449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.675 qpair failed and we were unable to recover it. 00:26:23.675 [2024-11-26 21:07:14.397580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.675 [2024-11-26 21:07:14.397607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.675 qpair failed and we were unable to recover it. 00:26:23.675 [2024-11-26 21:07:14.397734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.675 [2024-11-26 21:07:14.397779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.675 qpair failed and we were unable to recover it. 
00:26:23.675 [2024-11-26 21:07:14.397930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.675 [2024-11-26 21:07:14.397959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.675 qpair failed and we were unable to recover it. 
00:26:23.677 [... the same connect() failed (errno = 111, ECONNREFUSED) / nvme_tcp_qpair_connect_sock error pair for tqpair=0x637fa0 (addr=10.0.0.2, port=4420) repeats ~109 more times between 21:07:14.398 and 21:07:14.416; every retry ends with "qpair failed and we were unable to recover it." ...]
00:26:23.678 [2024-11-26 21:07:14.416966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.678 [2024-11-26 21:07:14.417007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.678 qpair failed and we were unable to recover it. 00:26:23.678 [2024-11-26 21:07:14.417184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.678 [2024-11-26 21:07:14.417217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.678 qpair failed and we were unable to recover it. 00:26:23.678 [2024-11-26 21:07:14.417355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.678 [2024-11-26 21:07:14.417385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.678 qpair failed and we were unable to recover it. 00:26:23.678 [2024-11-26 21:07:14.417530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.678 [2024-11-26 21:07:14.417574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.678 qpair failed and we were unable to recover it. 00:26:23.678 [2024-11-26 21:07:14.417744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.678 [2024-11-26 21:07:14.417772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.678 qpair failed and we were unable to recover it. 
00:26:23.678 [2024-11-26 21:07:14.417932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.678 [2024-11-26 21:07:14.417959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.678 qpair failed and we were unable to recover it. 00:26:23.678 [2024-11-26 21:07:14.418074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.678 [2024-11-26 21:07:14.418102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.678 qpair failed and we were unable to recover it. 00:26:23.678 [2024-11-26 21:07:14.418266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.678 [2024-11-26 21:07:14.418293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.678 qpair failed and we were unable to recover it. 00:26:23.678 [2024-11-26 21:07:14.418499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.678 [2024-11-26 21:07:14.418526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.678 qpair failed and we were unable to recover it. 00:26:23.678 [2024-11-26 21:07:14.418691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.678 [2024-11-26 21:07:14.418724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.678 qpair failed and we were unable to recover it. 
00:26:23.678 [2024-11-26 21:07:14.418876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.678 [2024-11-26 21:07:14.418905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.678 qpair failed and we were unable to recover it. 00:26:23.678 [2024-11-26 21:07:14.419066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.678 [2024-11-26 21:07:14.419092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.678 qpair failed and we were unable to recover it. 00:26:23.678 [2024-11-26 21:07:14.419194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.678 [2024-11-26 21:07:14.419222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.678 qpair failed and we were unable to recover it. 00:26:23.678 [2024-11-26 21:07:14.419410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.678 [2024-11-26 21:07:14.419439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.678 qpair failed and we were unable to recover it. 00:26:23.678 [2024-11-26 21:07:14.419603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.678 [2024-11-26 21:07:14.419631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.678 qpair failed and we were unable to recover it. 
00:26:23.678 [2024-11-26 21:07:14.419734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.678 [2024-11-26 21:07:14.419761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.678 qpair failed and we were unable to recover it. 00:26:23.678 [2024-11-26 21:07:14.419895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.678 [2024-11-26 21:07:14.419922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.678 qpair failed and we were unable to recover it. 00:26:23.678 [2024-11-26 21:07:14.420084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.678 [2024-11-26 21:07:14.420111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.678 qpair failed and we were unable to recover it. 00:26:23.678 [2024-11-26 21:07:14.420334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.678 [2024-11-26 21:07:14.420397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.678 qpair failed and we were unable to recover it. 00:26:23.678 [2024-11-26 21:07:14.420536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.678 [2024-11-26 21:07:14.420565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.678 qpair failed and we were unable to recover it. 
00:26:23.678 [2024-11-26 21:07:14.420736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.678 [2024-11-26 21:07:14.420763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.679 qpair failed and we were unable to recover it. 00:26:23.679 [2024-11-26 21:07:14.420916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.679 [2024-11-26 21:07:14.420945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.679 qpair failed and we were unable to recover it. 00:26:23.679 [2024-11-26 21:07:14.421084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.679 [2024-11-26 21:07:14.421113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.679 qpair failed and we were unable to recover it. 00:26:23.679 [2024-11-26 21:07:14.421239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.679 [2024-11-26 21:07:14.421266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.679 qpair failed and we were unable to recover it. 00:26:23.679 [2024-11-26 21:07:14.421410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.679 [2024-11-26 21:07:14.421436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.679 qpair failed and we were unable to recover it. 
00:26:23.679 [2024-11-26 21:07:14.421621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.679 [2024-11-26 21:07:14.421650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.679 qpair failed and we were unable to recover it. 00:26:23.679 [2024-11-26 21:07:14.421804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.679 [2024-11-26 21:07:14.421831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.679 qpair failed and we were unable to recover it. 00:26:23.679 [2024-11-26 21:07:14.421930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.679 [2024-11-26 21:07:14.421961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.679 qpair failed and we were unable to recover it. 00:26:23.679 [2024-11-26 21:07:14.422149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.679 [2024-11-26 21:07:14.422179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.679 qpair failed and we were unable to recover it. 00:26:23.679 [2024-11-26 21:07:14.422333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.679 [2024-11-26 21:07:14.422359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.679 qpair failed and we were unable to recover it. 
00:26:23.679 [2024-11-26 21:07:14.422488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.679 [2024-11-26 21:07:14.422514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.679 qpair failed and we were unable to recover it. 00:26:23.679 [2024-11-26 21:07:14.422652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.679 [2024-11-26 21:07:14.422678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.679 qpair failed and we were unable to recover it. 00:26:23.679 [2024-11-26 21:07:14.422854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.679 [2024-11-26 21:07:14.422881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.679 qpair failed and we were unable to recover it. 00:26:23.679 [2024-11-26 21:07:14.422977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.679 [2024-11-26 21:07:14.423020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.679 qpair failed and we were unable to recover it. 00:26:23.679 [2024-11-26 21:07:14.423164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.679 [2024-11-26 21:07:14.423193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.679 qpair failed and we were unable to recover it. 
00:26:23.679 [2024-11-26 21:07:14.423324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.679 [2024-11-26 21:07:14.423350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.679 qpair failed and we were unable to recover it. 00:26:23.679 [2024-11-26 21:07:14.423454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.679 [2024-11-26 21:07:14.423481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.679 qpair failed and we were unable to recover it. 00:26:23.679 [2024-11-26 21:07:14.423659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.679 [2024-11-26 21:07:14.423698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.679 qpair failed and we were unable to recover it. 00:26:23.679 [2024-11-26 21:07:14.423829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.679 [2024-11-26 21:07:14.423856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.679 qpair failed and we were unable to recover it. 00:26:23.679 [2024-11-26 21:07:14.423957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.679 [2024-11-26 21:07:14.423984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.679 qpair failed and we were unable to recover it. 
00:26:23.679 [2024-11-26 21:07:14.424164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.679 [2024-11-26 21:07:14.424193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.679 qpair failed and we were unable to recover it. 00:26:23.679 [2024-11-26 21:07:14.424345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.679 [2024-11-26 21:07:14.424372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.679 qpair failed and we were unable to recover it. 00:26:23.679 [2024-11-26 21:07:14.424484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.679 [2024-11-26 21:07:14.424510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.679 qpair failed and we were unable to recover it. 00:26:23.679 [2024-11-26 21:07:14.424672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.679 [2024-11-26 21:07:14.424723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.679 qpair failed and we were unable to recover it. 00:26:23.679 [2024-11-26 21:07:14.424906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.679 [2024-11-26 21:07:14.424933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.679 qpair failed and we were unable to recover it. 
00:26:23.679 [2024-11-26 21:07:14.425092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.679 [2024-11-26 21:07:14.425119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.679 qpair failed and we were unable to recover it. 00:26:23.679 [2024-11-26 21:07:14.425255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.679 [2024-11-26 21:07:14.425281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.679 qpair failed and we were unable to recover it. 00:26:23.679 [2024-11-26 21:07:14.425480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.679 [2024-11-26 21:07:14.425506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.679 qpair failed and we were unable to recover it. 00:26:23.679 [2024-11-26 21:07:14.425644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.679 [2024-11-26 21:07:14.425671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.679 qpair failed and we were unable to recover it. 00:26:23.679 [2024-11-26 21:07:14.425794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.679 [2024-11-26 21:07:14.425820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.679 qpair failed and we were unable to recover it. 
00:26:23.679 [2024-11-26 21:07:14.425988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.679 [2024-11-26 21:07:14.426015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.679 qpair failed and we were unable to recover it. 00:26:23.679 [2024-11-26 21:07:14.426130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.679 [2024-11-26 21:07:14.426157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.679 qpair failed and we were unable to recover it. 00:26:23.679 [2024-11-26 21:07:14.426290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.679 [2024-11-26 21:07:14.426316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.679 qpair failed and we were unable to recover it. 00:26:23.679 [2024-11-26 21:07:14.426449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.679 [2024-11-26 21:07:14.426475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.679 qpair failed and we were unable to recover it. 00:26:23.679 [2024-11-26 21:07:14.426607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.679 [2024-11-26 21:07:14.426656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.679 qpair failed and we were unable to recover it. 
00:26:23.679 [2024-11-26 21:07:14.426855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.679 [2024-11-26 21:07:14.426881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.679 qpair failed and we were unable to recover it. 00:26:23.679 [2024-11-26 21:07:14.427018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.679 [2024-11-26 21:07:14.427044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.679 qpair failed and we were unable to recover it. 00:26:23.679 [2024-11-26 21:07:14.427159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.679 [2024-11-26 21:07:14.427185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.679 qpair failed and we were unable to recover it. 00:26:23.679 [2024-11-26 21:07:14.427292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.679 [2024-11-26 21:07:14.427319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.679 qpair failed and we were unable to recover it. 00:26:23.679 [2024-11-26 21:07:14.427476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.680 [2024-11-26 21:07:14.427503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.680 qpair failed and we were unable to recover it. 
00:26:23.680 [2024-11-26 21:07:14.427631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.680 [2024-11-26 21:07:14.427676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.680 qpair failed and we were unable to recover it. 00:26:23.680 [2024-11-26 21:07:14.427826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.680 [2024-11-26 21:07:14.427854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.680 qpair failed and we were unable to recover it. 00:26:23.680 [2024-11-26 21:07:14.427967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.680 [2024-11-26 21:07:14.427994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.680 qpair failed and we were unable to recover it. 00:26:23.680 [2024-11-26 21:07:14.428125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.680 [2024-11-26 21:07:14.428151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.680 qpair failed and we were unable to recover it. 00:26:23.680 [2024-11-26 21:07:14.428280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.680 [2024-11-26 21:07:14.428307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.680 qpair failed and we were unable to recover it. 
00:26:23.680 [2024-11-26 21:07:14.428422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.680 [2024-11-26 21:07:14.428449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.680 qpair failed and we were unable to recover it. 00:26:23.680 [2024-11-26 21:07:14.428564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.680 [2024-11-26 21:07:14.428590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.680 qpair failed and we were unable to recover it. 00:26:23.680 [2024-11-26 21:07:14.428752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.680 [2024-11-26 21:07:14.428779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.680 qpair failed and we were unable to recover it. 00:26:23.680 [2024-11-26 21:07:14.428893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.680 [2024-11-26 21:07:14.428920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.680 qpair failed and we were unable to recover it. 00:26:23.680 [2024-11-26 21:07:14.429058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.680 [2024-11-26 21:07:14.429085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.680 qpair failed and we were unable to recover it. 
00:26:23.680 [2024-11-26 21:07:14.429230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.680 [2024-11-26 21:07:14.429259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.680 qpair failed and we were unable to recover it. 00:26:23.680 [2024-11-26 21:07:14.429407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.680 [2024-11-26 21:07:14.429434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.680 qpair failed and we were unable to recover it. 00:26:23.680 [2024-11-26 21:07:14.429596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.680 [2024-11-26 21:07:14.429622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.680 qpair failed and we were unable to recover it. 00:26:23.680 [2024-11-26 21:07:14.429785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.680 [2024-11-26 21:07:14.429829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.680 qpair failed and we were unable to recover it. 00:26:23.680 [2024-11-26 21:07:14.429954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.680 [2024-11-26 21:07:14.429981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.680 qpair failed and we were unable to recover it. 
00:26:23.680 [2024-11-26 21:07:14.430128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.680 [2024-11-26 21:07:14.430155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.680 qpair failed and we were unable to recover it. 00:26:23.680 [2024-11-26 21:07:14.430285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.680 [2024-11-26 21:07:14.430315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.680 qpair failed and we were unable to recover it. 00:26:23.680 [2024-11-26 21:07:14.430435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.680 [2024-11-26 21:07:14.430462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.680 qpair failed and we were unable to recover it. 00:26:23.680 [2024-11-26 21:07:14.430597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.680 [2024-11-26 21:07:14.430623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.680 qpair failed and we were unable to recover it. 00:26:23.680 [2024-11-26 21:07:14.430790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.680 [2024-11-26 21:07:14.430835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.680 qpair failed and we were unable to recover it. 
00:26:23.680 [2024-11-26 21:07:14.430936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.680 [2024-11-26 21:07:14.430962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.680 qpair failed and we were unable to recover it. 00:26:23.680 [2024-11-26 21:07:14.431124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.680 [2024-11-26 21:07:14.431169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.680 qpair failed and we were unable to recover it. 00:26:23.680 [2024-11-26 21:07:14.431367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.680 [2024-11-26 21:07:14.431394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.680 qpair failed and we were unable to recover it. 00:26:23.680 [2024-11-26 21:07:14.431529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.680 [2024-11-26 21:07:14.431556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.680 qpair failed and we were unable to recover it. 00:26:23.680 [2024-11-26 21:07:14.431741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.680 [2024-11-26 21:07:14.431772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.680 qpair failed and we were unable to recover it. 
00:26:23.680 - 00:26:23.683 [2024-11-26 21:07:14.431933 through 21:07:14.451018] The same error triplet repeats approximately 110 more times with no variation other than timestamps: posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.
00:26:23.683 [2024-11-26 21:07:14.451180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.683 [2024-11-26 21:07:14.451206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.683 qpair failed and we were unable to recover it. 00:26:23.683 [2024-11-26 21:07:14.451365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.683 [2024-11-26 21:07:14.451392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.683 qpair failed and we were unable to recover it. 00:26:23.683 [2024-11-26 21:07:14.451532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.683 [2024-11-26 21:07:14.451559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.683 qpair failed and we were unable to recover it. 00:26:23.683 [2024-11-26 21:07:14.451670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.683 [2024-11-26 21:07:14.451704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.683 qpair failed and we were unable to recover it. 00:26:23.683 [2024-11-26 21:07:14.451818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.683 [2024-11-26 21:07:14.451844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.683 qpair failed and we were unable to recover it. 
00:26:23.683 [2024-11-26 21:07:14.451969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.683 [2024-11-26 21:07:14.451995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.683 qpair failed and we were unable to recover it. 00:26:23.683 [2024-11-26 21:07:14.452127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.683 [2024-11-26 21:07:14.452170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.683 qpair failed and we were unable to recover it. 00:26:23.683 [2024-11-26 21:07:14.452356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.683 [2024-11-26 21:07:14.452383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.683 qpair failed and we were unable to recover it. 00:26:23.683 [2024-11-26 21:07:14.452543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.683 [2024-11-26 21:07:14.452569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.683 qpair failed and we were unable to recover it. 00:26:23.683 [2024-11-26 21:07:14.452726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.683 [2024-11-26 21:07:14.452757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.683 qpair failed and we were unable to recover it. 
00:26:23.683 [2024-11-26 21:07:14.452911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.683 [2024-11-26 21:07:14.452940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.683 qpair failed and we were unable to recover it. 00:26:23.683 [2024-11-26 21:07:14.453126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.683 [2024-11-26 21:07:14.453153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.683 qpair failed and we were unable to recover it. 00:26:23.683 [2024-11-26 21:07:14.453256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.683 [2024-11-26 21:07:14.453283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.683 qpair failed and we were unable to recover it. 00:26:23.683 [2024-11-26 21:07:14.453396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.683 [2024-11-26 21:07:14.453423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.683 qpair failed and we were unable to recover it. 00:26:23.683 [2024-11-26 21:07:14.453630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.684 [2024-11-26 21:07:14.453660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.684 qpair failed and we were unable to recover it. 
00:26:23.684 [2024-11-26 21:07:14.453817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.684 [2024-11-26 21:07:14.453844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.684 qpair failed and we were unable to recover it. 00:26:23.684 [2024-11-26 21:07:14.453954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.684 [2024-11-26 21:07:14.453981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.684 qpair failed and we were unable to recover it. 00:26:23.684 [2024-11-26 21:07:14.454122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.684 [2024-11-26 21:07:14.454148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.684 qpair failed and we were unable to recover it. 00:26:23.684 [2024-11-26 21:07:14.454325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.684 [2024-11-26 21:07:14.454354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.684 qpair failed and we were unable to recover it. 00:26:23.684 [2024-11-26 21:07:14.454550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.684 [2024-11-26 21:07:14.454576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.684 qpair failed and we were unable to recover it. 
00:26:23.684 [2024-11-26 21:07:14.454715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.684 [2024-11-26 21:07:14.454743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.684 qpair failed and we were unable to recover it. 00:26:23.684 [2024-11-26 21:07:14.454909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.684 [2024-11-26 21:07:14.454935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.684 qpair failed and we were unable to recover it. 00:26:23.684 [2024-11-26 21:07:14.455085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.684 [2024-11-26 21:07:14.455114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.684 qpair failed and we were unable to recover it. 00:26:23.684 [2024-11-26 21:07:14.455297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.684 [2024-11-26 21:07:14.455324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.684 qpair failed and we were unable to recover it. 00:26:23.684 [2024-11-26 21:07:14.455453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.684 [2024-11-26 21:07:14.455497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.684 qpair failed and we were unable to recover it. 
00:26:23.684 [2024-11-26 21:07:14.455670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.684 [2024-11-26 21:07:14.455708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.684 qpair failed and we were unable to recover it. 00:26:23.684 [2024-11-26 21:07:14.455840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.684 [2024-11-26 21:07:14.455868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.684 qpair failed and we were unable to recover it. 00:26:23.684 [2024-11-26 21:07:14.456009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.684 [2024-11-26 21:07:14.456036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.684 qpair failed and we were unable to recover it. 00:26:23.684 [2024-11-26 21:07:14.456200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.684 [2024-11-26 21:07:14.456227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.684 qpair failed and we were unable to recover it. 00:26:23.684 [2024-11-26 21:07:14.456359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.684 [2024-11-26 21:07:14.456385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.684 qpair failed and we were unable to recover it. 
00:26:23.684 [2024-11-26 21:07:14.456518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.684 [2024-11-26 21:07:14.456544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.684 qpair failed and we were unable to recover it. 00:26:23.684 [2024-11-26 21:07:14.456660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.684 [2024-11-26 21:07:14.456700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.684 qpair failed and we were unable to recover it. 00:26:23.684 [2024-11-26 21:07:14.456844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.684 [2024-11-26 21:07:14.456870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.684 qpair failed and we were unable to recover it. 00:26:23.684 [2024-11-26 21:07:14.457000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.684 [2024-11-26 21:07:14.457026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.684 qpair failed and we were unable to recover it. 00:26:23.684 [2024-11-26 21:07:14.457195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.684 [2024-11-26 21:07:14.457222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.684 qpair failed and we were unable to recover it. 
00:26:23.684 [2024-11-26 21:07:14.457357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.684 [2024-11-26 21:07:14.457383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.684 qpair failed and we were unable to recover it. 00:26:23.684 [2024-11-26 21:07:14.457498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.684 [2024-11-26 21:07:14.457528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.684 qpair failed and we were unable to recover it. 00:26:23.684 [2024-11-26 21:07:14.457631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.684 [2024-11-26 21:07:14.457657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.684 qpair failed and we were unable to recover it. 00:26:23.684 [2024-11-26 21:07:14.457780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.684 [2024-11-26 21:07:14.457807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.684 qpair failed and we were unable to recover it. 00:26:23.684 [2024-11-26 21:07:14.457946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.684 [2024-11-26 21:07:14.457989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.684 qpair failed and we were unable to recover it. 
00:26:23.684 [2024-11-26 21:07:14.458164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.684 [2024-11-26 21:07:14.458193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.684 qpair failed and we were unable to recover it. 00:26:23.684 [2024-11-26 21:07:14.458375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.684 [2024-11-26 21:07:14.458401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.684 qpair failed and we were unable to recover it. 00:26:23.684 [2024-11-26 21:07:14.458562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.684 [2024-11-26 21:07:14.458589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.684 qpair failed and we were unable to recover it. 00:26:23.684 [2024-11-26 21:07:14.458749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.684 [2024-11-26 21:07:14.458792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.684 qpair failed and we were unable to recover it. 00:26:23.684 [2024-11-26 21:07:14.458925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.684 [2024-11-26 21:07:14.458952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.684 qpair failed and we were unable to recover it. 
00:26:23.684 [2024-11-26 21:07:14.459065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.684 [2024-11-26 21:07:14.459091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.684 qpair failed and we were unable to recover it. 00:26:23.684 [2024-11-26 21:07:14.459221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.684 [2024-11-26 21:07:14.459247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.684 qpair failed and we were unable to recover it. 00:26:23.684 [2024-11-26 21:07:14.459381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.684 [2024-11-26 21:07:14.459407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.684 qpair failed and we were unable to recover it. 00:26:23.684 [2024-11-26 21:07:14.459592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.684 [2024-11-26 21:07:14.459622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.684 qpair failed and we were unable to recover it. 00:26:23.684 [2024-11-26 21:07:14.459773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.684 [2024-11-26 21:07:14.459800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.684 qpair failed and we were unable to recover it. 
00:26:23.684 [2024-11-26 21:07:14.459937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.684 [2024-11-26 21:07:14.459964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.684 qpair failed and we were unable to recover it. 00:26:23.684 [2024-11-26 21:07:14.460097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.684 [2024-11-26 21:07:14.460124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.684 qpair failed and we were unable to recover it. 00:26:23.684 [2024-11-26 21:07:14.460259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.684 [2024-11-26 21:07:14.460285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.684 qpair failed and we were unable to recover it. 00:26:23.684 [2024-11-26 21:07:14.460477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.685 [2024-11-26 21:07:14.460504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.685 qpair failed and we were unable to recover it. 00:26:23.685 [2024-11-26 21:07:14.460678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.685 [2024-11-26 21:07:14.460724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.685 qpair failed and we were unable to recover it. 
00:26:23.685 [2024-11-26 21:07:14.460852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.685 [2024-11-26 21:07:14.460882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.685 qpair failed and we were unable to recover it. 00:26:23.685 [2024-11-26 21:07:14.461039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.685 [2024-11-26 21:07:14.461065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.685 qpair failed and we were unable to recover it. 00:26:23.685 [2024-11-26 21:07:14.461227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.685 [2024-11-26 21:07:14.461253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.685 qpair failed and we were unable to recover it. 00:26:23.685 [2024-11-26 21:07:14.461422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.685 [2024-11-26 21:07:14.461449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.685 qpair failed and we were unable to recover it. 00:26:23.685 [2024-11-26 21:07:14.461611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.685 [2024-11-26 21:07:14.461638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.685 qpair failed and we were unable to recover it. 
00:26:23.685 [2024-11-26 21:07:14.461781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.685 [2024-11-26 21:07:14.461808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.685 qpair failed and we were unable to recover it. 00:26:23.685 [2024-11-26 21:07:14.461943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.685 [2024-11-26 21:07:14.461969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.685 qpair failed and we were unable to recover it. 00:26:23.685 [2024-11-26 21:07:14.462103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.685 [2024-11-26 21:07:14.462129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.685 qpair failed and we were unable to recover it. 00:26:23.685 [2024-11-26 21:07:14.462267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.685 [2024-11-26 21:07:14.462293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.685 qpair failed and we were unable to recover it. 00:26:23.685 [2024-11-26 21:07:14.462458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.685 [2024-11-26 21:07:14.462484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.685 qpair failed and we were unable to recover it. 
00:26:23.685 [2024-11-26 21:07:14.462618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.685 [2024-11-26 21:07:14.462645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.685 qpair failed and we were unable to recover it. 00:26:23.685 [2024-11-26 21:07:14.462816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.685 [2024-11-26 21:07:14.462842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.685 qpair failed and we were unable to recover it. 00:26:23.685 [2024-11-26 21:07:14.462941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.685 [2024-11-26 21:07:14.462984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.685 qpair failed and we were unable to recover it. 00:26:23.685 [2024-11-26 21:07:14.463135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.685 [2024-11-26 21:07:14.463161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.685 qpair failed and we were unable to recover it. 00:26:23.685 [2024-11-26 21:07:14.463300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.685 [2024-11-26 21:07:14.463343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.685 qpair failed and we were unable to recover it. 
00:26:23.685 [2024-11-26 21:07:14.463491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.685 [2024-11-26 21:07:14.463520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.685 qpair failed and we were unable to recover it. 00:26:23.685 [2024-11-26 21:07:14.463704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.685 [2024-11-26 21:07:14.463731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.685 qpair failed and we were unable to recover it. 00:26:23.685 [2024-11-26 21:07:14.463839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.685 [2024-11-26 21:07:14.463884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.685 qpair failed and we were unable to recover it. 00:26:23.685 [2024-11-26 21:07:14.464025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.685 [2024-11-26 21:07:14.464054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.685 qpair failed and we were unable to recover it. 00:26:23.685 [2024-11-26 21:07:14.464209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.685 [2024-11-26 21:07:14.464236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.685 qpair failed and we were unable to recover it. 
00:26:23.685 [2024-11-26 21:07:14.464396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.685 [2024-11-26 21:07:14.464422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.685 qpair failed and we were unable to recover it.
00:26:23.688 [... the same error triple (posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously through 2024-11-26 21:07:14.484488 ...]
00:26:23.688 [2024-11-26 21:07:14.484635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.688 [2024-11-26 21:07:14.484664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.688 qpair failed and we were unable to recover it. 00:26:23.688 [2024-11-26 21:07:14.484818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.688 [2024-11-26 21:07:14.484845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.688 qpair failed and we were unable to recover it. 00:26:23.688 [2024-11-26 21:07:14.484947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.688 [2024-11-26 21:07:14.484974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.688 qpair failed and we were unable to recover it. 00:26:23.688 [2024-11-26 21:07:14.485119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.688 [2024-11-26 21:07:14.485149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.688 qpair failed and we were unable to recover it. 00:26:23.688 [2024-11-26 21:07:14.485282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.688 [2024-11-26 21:07:14.485309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.688 qpair failed and we were unable to recover it. 
00:26:23.688 [2024-11-26 21:07:14.485404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.688 [2024-11-26 21:07:14.485431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.688 qpair failed and we were unable to recover it. 00:26:23.688 [2024-11-26 21:07:14.485613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.688 [2024-11-26 21:07:14.485643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.688 qpair failed and we were unable to recover it. 00:26:23.688 [2024-11-26 21:07:14.485781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.688 [2024-11-26 21:07:14.485809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.688 qpair failed and we were unable to recover it. 00:26:23.688 [2024-11-26 21:07:14.485933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.688 [2024-11-26 21:07:14.485976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.688 qpair failed and we were unable to recover it. 00:26:23.688 [2024-11-26 21:07:14.486084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.688 [2024-11-26 21:07:14.486114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.688 qpair failed and we were unable to recover it. 
00:26:23.688 [2024-11-26 21:07:14.486273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.688 [2024-11-26 21:07:14.486300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.688 qpair failed and we were unable to recover it. 00:26:23.688 [2024-11-26 21:07:14.486407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.688 [2024-11-26 21:07:14.486434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.688 qpair failed and we were unable to recover it. 00:26:23.688 [2024-11-26 21:07:14.486556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.688 [2024-11-26 21:07:14.486586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.688 qpair failed and we were unable to recover it. 00:26:23.689 [2024-11-26 21:07:14.486747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.689 [2024-11-26 21:07:14.486774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.689 qpair failed and we were unable to recover it. 00:26:23.689 [2024-11-26 21:07:14.486873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.689 [2024-11-26 21:07:14.486900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.689 qpair failed and we were unable to recover it. 
00:26:23.689 [2024-11-26 21:07:14.487066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.689 [2024-11-26 21:07:14.487095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.689 qpair failed and we were unable to recover it. 00:26:23.689 [2024-11-26 21:07:14.487275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.689 [2024-11-26 21:07:14.487301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.689 qpair failed and we were unable to recover it. 00:26:23.689 [2024-11-26 21:07:14.487477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.689 [2024-11-26 21:07:14.487506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.689 qpair failed and we were unable to recover it. 00:26:23.689 [2024-11-26 21:07:14.487610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.689 [2024-11-26 21:07:14.487639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.689 qpair failed and we were unable to recover it. 00:26:23.689 [2024-11-26 21:07:14.487806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.689 [2024-11-26 21:07:14.487833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.689 qpair failed and we were unable to recover it. 
00:26:23.689 [2024-11-26 21:07:14.487974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.689 [2024-11-26 21:07:14.488000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.689 qpair failed and we were unable to recover it. 00:26:23.689 [2024-11-26 21:07:14.488161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.689 [2024-11-26 21:07:14.488188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.689 qpair failed and we were unable to recover it. 00:26:23.689 [2024-11-26 21:07:14.488349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.689 [2024-11-26 21:07:14.488375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.689 qpair failed and we were unable to recover it. 00:26:23.689 [2024-11-26 21:07:14.488515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.689 [2024-11-26 21:07:14.488548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.689 qpair failed and we were unable to recover it. 00:26:23.689 [2024-11-26 21:07:14.488675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.689 [2024-11-26 21:07:14.488710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.689 qpair failed and we were unable to recover it. 
00:26:23.689 [2024-11-26 21:07:14.488847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.689 [2024-11-26 21:07:14.488873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.689 qpair failed and we were unable to recover it. 00:26:23.689 [2024-11-26 21:07:14.489029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.689 [2024-11-26 21:07:14.489073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.689 qpair failed and we were unable to recover it. 00:26:23.689 [2024-11-26 21:07:14.489255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.689 [2024-11-26 21:07:14.489282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.689 qpair failed and we were unable to recover it. 00:26:23.689 [2024-11-26 21:07:14.489406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.689 [2024-11-26 21:07:14.489433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.689 qpair failed and we were unable to recover it. 00:26:23.689 [2024-11-26 21:07:14.489560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.689 [2024-11-26 21:07:14.489587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.689 qpair failed and we were unable to recover it. 
00:26:23.689 [2024-11-26 21:07:14.489729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.689 [2024-11-26 21:07:14.489756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.689 qpair failed and we were unable to recover it. 00:26:23.689 [2024-11-26 21:07:14.489860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.689 [2024-11-26 21:07:14.489887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.689 qpair failed and we were unable to recover it. 00:26:23.689 [2024-11-26 21:07:14.490012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.689 [2024-11-26 21:07:14.490038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.689 qpair failed and we were unable to recover it. 00:26:23.689 [2024-11-26 21:07:14.490194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.689 [2024-11-26 21:07:14.490224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.689 qpair failed and we were unable to recover it. 00:26:23.689 [2024-11-26 21:07:14.490371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.689 [2024-11-26 21:07:14.490397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.689 qpair failed and we were unable to recover it. 
00:26:23.689 [2024-11-26 21:07:14.490527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.689 [2024-11-26 21:07:14.490553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.689 qpair failed and we were unable to recover it. 00:26:23.689 [2024-11-26 21:07:14.490718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.689 [2024-11-26 21:07:14.490745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.689 qpair failed and we were unable to recover it. 00:26:23.689 [2024-11-26 21:07:14.490879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.689 [2024-11-26 21:07:14.490906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.689 qpair failed and we were unable to recover it. 00:26:23.689 [2024-11-26 21:07:14.491044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.689 [2024-11-26 21:07:14.491071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.689 qpair failed and we were unable to recover it. 00:26:23.689 [2024-11-26 21:07:14.491250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.689 [2024-11-26 21:07:14.491279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.689 qpair failed and we were unable to recover it. 
00:26:23.689 [2024-11-26 21:07:14.491397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.689 [2024-11-26 21:07:14.491424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.689 qpair failed and we were unable to recover it. 00:26:23.689 [2024-11-26 21:07:14.491582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.689 [2024-11-26 21:07:14.491608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.689 qpair failed and we were unable to recover it. 00:26:23.689 [2024-11-26 21:07:14.491746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.689 [2024-11-26 21:07:14.491775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.689 qpair failed and we were unable to recover it. 00:26:23.689 [2024-11-26 21:07:14.491935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.689 [2024-11-26 21:07:14.491961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.689 qpair failed and we were unable to recover it. 00:26:23.689 [2024-11-26 21:07:14.492074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.689 [2024-11-26 21:07:14.492101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.689 qpair failed and we were unable to recover it. 
00:26:23.689 [2024-11-26 21:07:14.492235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.689 [2024-11-26 21:07:14.492264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.689 qpair failed and we were unable to recover it. 00:26:23.689 [2024-11-26 21:07:14.492390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.689 [2024-11-26 21:07:14.492417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.689 qpair failed and we were unable to recover it. 00:26:23.689 [2024-11-26 21:07:14.492554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.689 [2024-11-26 21:07:14.492581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.689 qpair failed and we were unable to recover it. 00:26:23.689 [2024-11-26 21:07:14.492713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.689 [2024-11-26 21:07:14.492743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.689 qpair failed and we were unable to recover it. 00:26:23.689 [2024-11-26 21:07:14.492876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.689 [2024-11-26 21:07:14.492902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.689 qpair failed and we were unable to recover it. 
00:26:23.689 [2024-11-26 21:07:14.493058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.689 [2024-11-26 21:07:14.493088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.689 qpair failed and we were unable to recover it. 00:26:23.689 [2024-11-26 21:07:14.493247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.689 [2024-11-26 21:07:14.493277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.689 qpair failed and we were unable to recover it. 00:26:23.690 [2024-11-26 21:07:14.493430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.690 [2024-11-26 21:07:14.493456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.690 qpair failed and we were unable to recover it. 00:26:23.690 [2024-11-26 21:07:14.493596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.690 [2024-11-26 21:07:14.493638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.690 qpair failed and we were unable to recover it. 00:26:23.690 [2024-11-26 21:07:14.493799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.690 [2024-11-26 21:07:14.493828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.690 qpair failed and we were unable to recover it. 
00:26:23.690 [2024-11-26 21:07:14.493989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.690 [2024-11-26 21:07:14.494015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.690 qpair failed and we were unable to recover it. 00:26:23.690 [2024-11-26 21:07:14.494144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.690 [2024-11-26 21:07:14.494171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.690 qpair failed and we were unable to recover it. 00:26:23.690 [2024-11-26 21:07:14.494339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.690 [2024-11-26 21:07:14.494366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.690 qpair failed and we were unable to recover it. 00:26:23.690 [2024-11-26 21:07:14.494544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.690 [2024-11-26 21:07:14.494571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.690 qpair failed and we were unable to recover it. 00:26:23.690 [2024-11-26 21:07:14.494748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.690 [2024-11-26 21:07:14.494777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.690 qpair failed and we were unable to recover it. 
00:26:23.690 [2024-11-26 21:07:14.494938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.690 [2024-11-26 21:07:14.494979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.690 qpair failed and we were unable to recover it. 00:26:23.690 [2024-11-26 21:07:14.495140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.690 [2024-11-26 21:07:14.495167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.690 qpair failed and we were unable to recover it. 00:26:23.690 [2024-11-26 21:07:14.495305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.690 [2024-11-26 21:07:14.495349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.690 qpair failed and we were unable to recover it. 00:26:23.690 [2024-11-26 21:07:14.495487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.690 [2024-11-26 21:07:14.495516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.690 qpair failed and we were unable to recover it. 00:26:23.690 [2024-11-26 21:07:14.495665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.690 [2024-11-26 21:07:14.495707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.690 qpair failed and we were unable to recover it. 
00:26:23.690 [2024-11-26 21:07:14.495857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.690 [2024-11-26 21:07:14.495885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.690 qpair failed and we were unable to recover it. 00:26:23.690 [2024-11-26 21:07:14.496000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.690 [2024-11-26 21:07:14.496026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.690 qpair failed and we were unable to recover it. 00:26:23.690 [2024-11-26 21:07:14.496161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.690 [2024-11-26 21:07:14.496187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.690 qpair failed and we were unable to recover it. 00:26:23.690 [2024-11-26 21:07:14.496327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.690 [2024-11-26 21:07:14.496371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.690 qpair failed and we were unable to recover it. 00:26:23.690 [2024-11-26 21:07:14.496558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.690 [2024-11-26 21:07:14.496584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.690 qpair failed and we were unable to recover it. 
00:26:23.690 [2024-11-26 21:07:14.496721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.690 [2024-11-26 21:07:14.496749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.690 qpair failed and we were unable to recover it. 00:26:23.690 [2024-11-26 21:07:14.496871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.690 [2024-11-26 21:07:14.496898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.690 qpair failed and we were unable to recover it. 00:26:23.690 [2024-11-26 21:07:14.497071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.690 [2024-11-26 21:07:14.497098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.690 qpair failed and we were unable to recover it. 00:26:23.690 [2024-11-26 21:07:14.497226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.690 [2024-11-26 21:07:14.497253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.690 qpair failed and we were unable to recover it. 00:26:23.690 [2024-11-26 21:07:14.497368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.690 [2024-11-26 21:07:14.497395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.690 qpair failed and we were unable to recover it. 
00:26:23.690 [2024-11-26 21:07:14.497569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.690 [2024-11-26 21:07:14.497596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.690 qpair failed and we were unable to recover it. 00:26:23.690 [2024-11-26 21:07:14.497716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.690 [2024-11-26 21:07:14.497743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.690 qpair failed and we were unable to recover it. 00:26:23.690 [2024-11-26 21:07:14.497873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.690 [2024-11-26 21:07:14.497900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.690 qpair failed and we were unable to recover it. 00:26:23.690 [2024-11-26 21:07:14.498050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.690 [2024-11-26 21:07:14.498080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.690 qpair failed and we were unable to recover it. 00:26:23.690 [2024-11-26 21:07:14.498230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.690 [2024-11-26 21:07:14.498257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.690 qpair failed and we were unable to recover it. 
00:26:23.691 [2024-11-26 21:07:14.505609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.691 [2024-11-26 21:07:14.505653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.691 qpair failed and we were unable to recover it. 00:26:23.691 [2024-11-26 21:07:14.505840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.691 [2024-11-26 21:07:14.505882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.691 qpair failed and we were unable to recover it. 00:26:23.691 [2024-11-26 21:07:14.506052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.691 [2024-11-26 21:07:14.506080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.691 qpair failed and we were unable to recover it. 00:26:23.691 [2024-11-26 21:07:14.506220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.691 [2024-11-26 21:07:14.506246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.691 qpair failed and we were unable to recover it. 00:26:23.691 [2024-11-26 21:07:14.506383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.691 [2024-11-26 21:07:14.506426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.691 qpair failed and we were unable to recover it. 
00:26:23.693 [2024-11-26 21:07:14.518586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.693 [2024-11-26 21:07:14.518638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.693 qpair failed and we were unable to recover it. 00:26:23.693 [2024-11-26 21:07:14.518832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.693 [2024-11-26 21:07:14.518859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.693 qpair failed and we were unable to recover it. 00:26:23.693 [2024-11-26 21:07:14.519015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.693 [2024-11-26 21:07:14.519044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.693 qpair failed and we were unable to recover it. 00:26:23.693 [2024-11-26 21:07:14.519191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.693 [2024-11-26 21:07:14.519220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.693 qpair failed and we were unable to recover it. 00:26:23.693 [2024-11-26 21:07:14.519371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.693 [2024-11-26 21:07:14.519400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.693 qpair failed and we were unable to recover it. 
00:26:23.693 [2024-11-26 21:07:14.519549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.693 [2024-11-26 21:07:14.519578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.693 qpair failed and we were unable to recover it. 00:26:23.693 [2024-11-26 21:07:14.519743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.693 [2024-11-26 21:07:14.519770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.693 qpair failed and we were unable to recover it. 00:26:23.693 [2024-11-26 21:07:14.519931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.693 [2024-11-26 21:07:14.519958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.693 qpair failed and we were unable to recover it. 00:26:23.693 [2024-11-26 21:07:14.520112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.693 [2024-11-26 21:07:14.520141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.693 qpair failed and we were unable to recover it. 00:26:23.693 [2024-11-26 21:07:14.520277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.693 [2024-11-26 21:07:14.520322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.693 qpair failed and we were unable to recover it. 
00:26:23.693 [2024-11-26 21:07:14.520502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.693 [2024-11-26 21:07:14.520531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.693 qpair failed and we were unable to recover it. 00:26:23.693 [2024-11-26 21:07:14.520681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.693 [2024-11-26 21:07:14.520734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.693 qpair failed and we were unable to recover it. 00:26:23.693 [2024-11-26 21:07:14.520851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.694 [2024-11-26 21:07:14.520878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.694 qpair failed and we were unable to recover it. 00:26:23.694 [2024-11-26 21:07:14.521013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.694 [2024-11-26 21:07:14.521056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.694 qpair failed and we were unable to recover it. 00:26:23.694 [2024-11-26 21:07:14.521201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.694 [2024-11-26 21:07:14.521230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.694 qpair failed and we were unable to recover it. 
00:26:23.694 [2024-11-26 21:07:14.521360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.694 [2024-11-26 21:07:14.521405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.694 qpair failed and we were unable to recover it. 00:26:23.694 [2024-11-26 21:07:14.521555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.694 [2024-11-26 21:07:14.521585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.694 qpair failed and we were unable to recover it. 00:26:23.694 [2024-11-26 21:07:14.521751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.694 [2024-11-26 21:07:14.521778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.694 qpair failed and we were unable to recover it. 00:26:23.694 [2024-11-26 21:07:14.521885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.694 [2024-11-26 21:07:14.521912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.694 qpair failed and we were unable to recover it. 00:26:23.694 [2024-11-26 21:07:14.522084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.694 [2024-11-26 21:07:14.522110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.694 qpair failed and we were unable to recover it. 
00:26:23.694 [2024-11-26 21:07:14.522288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.694 [2024-11-26 21:07:14.522317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.694 qpair failed and we were unable to recover it. 00:26:23.694 [2024-11-26 21:07:14.522545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.694 [2024-11-26 21:07:14.522574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.694 qpair failed and we were unable to recover it. 00:26:23.694 [2024-11-26 21:07:14.522737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.694 [2024-11-26 21:07:14.522765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.694 qpair failed and we were unable to recover it. 00:26:23.694 [2024-11-26 21:07:14.522895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.694 [2024-11-26 21:07:14.522926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.694 qpair failed and we were unable to recover it. 00:26:23.694 [2024-11-26 21:07:14.523074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.694 [2024-11-26 21:07:14.523104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.694 qpair failed and we were unable to recover it. 
00:26:23.694 [2024-11-26 21:07:14.523384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.694 [2024-11-26 21:07:14.523437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.694 qpair failed and we were unable to recover it. 00:26:23.694 [2024-11-26 21:07:14.523559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.694 [2024-11-26 21:07:14.523589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.694 qpair failed and we were unable to recover it. 00:26:23.694 [2024-11-26 21:07:14.523790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.694 [2024-11-26 21:07:14.523831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.694 qpair failed and we were unable to recover it. 00:26:23.694 [2024-11-26 21:07:14.523969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.694 [2024-11-26 21:07:14.523997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.694 qpair failed and we were unable to recover it. 00:26:23.694 [2024-11-26 21:07:14.524163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.694 [2024-11-26 21:07:14.524207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.694 qpair failed and we were unable to recover it. 
00:26:23.694 [2024-11-26 21:07:14.524400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.694 [2024-11-26 21:07:14.524444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.694 qpair failed and we were unable to recover it. 00:26:23.694 [2024-11-26 21:07:14.524553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.694 [2024-11-26 21:07:14.524580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.694 qpair failed and we were unable to recover it. 00:26:23.694 [2024-11-26 21:07:14.524729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.694 [2024-11-26 21:07:14.524758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.694 qpair failed and we were unable to recover it. 00:26:23.694 [2024-11-26 21:07:14.524894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.694 [2024-11-26 21:07:14.524939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.694 qpair failed and we were unable to recover it. 00:26:23.694 [2024-11-26 21:07:14.525072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.694 [2024-11-26 21:07:14.525099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.694 qpair failed and we were unable to recover it. 
00:26:23.694 [2024-11-26 21:07:14.525230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.694 [2024-11-26 21:07:14.525257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.694 qpair failed and we were unable to recover it. 00:26:23.694 [2024-11-26 21:07:14.525393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.694 [2024-11-26 21:07:14.525421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.694 qpair failed and we were unable to recover it. 00:26:23.694 [2024-11-26 21:07:14.525557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.694 [2024-11-26 21:07:14.525585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.694 qpair failed and we were unable to recover it. 00:26:23.694 [2024-11-26 21:07:14.525719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.694 [2024-11-26 21:07:14.525746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.694 qpair failed and we were unable to recover it. 00:26:23.694 [2024-11-26 21:07:14.525861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.694 [2024-11-26 21:07:14.525887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.694 qpair failed and we were unable to recover it. 
00:26:23.694 [2024-11-26 21:07:14.526015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.694 [2024-11-26 21:07:14.526044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.694 qpair failed and we were unable to recover it. 00:26:23.694 [2024-11-26 21:07:14.526191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.694 [2024-11-26 21:07:14.526220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.694 qpair failed and we were unable to recover it. 00:26:23.694 [2024-11-26 21:07:14.526326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.694 [2024-11-26 21:07:14.526355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.694 qpair failed and we were unable to recover it. 00:26:23.694 [2024-11-26 21:07:14.526505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.694 [2024-11-26 21:07:14.526535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.694 qpair failed and we were unable to recover it. 00:26:23.694 [2024-11-26 21:07:14.526677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.694 [2024-11-26 21:07:14.526713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.694 qpair failed and we were unable to recover it. 
00:26:23.694 [2024-11-26 21:07:14.526867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.694 [2024-11-26 21:07:14.526897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.694 qpair failed and we were unable to recover it. 00:26:23.694 [2024-11-26 21:07:14.527041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.694 [2024-11-26 21:07:14.527071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.694 qpair failed and we were unable to recover it. 00:26:23.694 [2024-11-26 21:07:14.527244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.694 [2024-11-26 21:07:14.527273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.694 qpair failed and we were unable to recover it. 00:26:23.694 [2024-11-26 21:07:14.527426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.694 [2024-11-26 21:07:14.527472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.694 qpair failed and we were unable to recover it. 00:26:23.694 [2024-11-26 21:07:14.527637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.694 [2024-11-26 21:07:14.527664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.694 qpair failed and we were unable to recover it. 
00:26:23.694 [2024-11-26 21:07:14.527787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.694 [2024-11-26 21:07:14.527821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.694 qpair failed and we were unable to recover it. 00:26:23.695 [2024-11-26 21:07:14.528008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.695 [2024-11-26 21:07:14.528052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.695 qpair failed and we were unable to recover it. 00:26:23.695 [2024-11-26 21:07:14.528211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.695 [2024-11-26 21:07:14.528255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.695 qpair failed and we were unable to recover it. 00:26:23.695 [2024-11-26 21:07:14.528401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.695 [2024-11-26 21:07:14.528445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.695 qpair failed and we were unable to recover it. 00:26:23.695 [2024-11-26 21:07:14.528605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.695 [2024-11-26 21:07:14.528632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.695 qpair failed and we were unable to recover it. 
00:26:23.695 [2024-11-26 21:07:14.528766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.695 [2024-11-26 21:07:14.528794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.695 qpair failed and we were unable to recover it. 00:26:23.695 [2024-11-26 21:07:14.528949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.695 [2024-11-26 21:07:14.528993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.695 qpair failed and we were unable to recover it. 00:26:23.695 [2024-11-26 21:07:14.529158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.695 [2024-11-26 21:07:14.529203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.695 qpair failed and we were unable to recover it. 00:26:23.695 [2024-11-26 21:07:14.529358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.695 [2024-11-26 21:07:14.529401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.695 qpair failed and we were unable to recover it. 00:26:23.695 [2024-11-26 21:07:14.529506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.695 [2024-11-26 21:07:14.529533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.695 qpair failed and we were unable to recover it. 
00:26:23.695 [2024-11-26 21:07:14.529669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.695 [2024-11-26 21:07:14.529704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.695 qpair failed and we were unable to recover it. 00:26:23.695 [2024-11-26 21:07:14.529859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.695 [2024-11-26 21:07:14.529903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.695 qpair failed and we were unable to recover it. 00:26:23.695 [2024-11-26 21:07:14.530085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.695 [2024-11-26 21:07:14.530129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.695 qpair failed and we were unable to recover it. 00:26:23.695 [2024-11-26 21:07:14.530308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.695 [2024-11-26 21:07:14.530356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.695 qpair failed and we were unable to recover it. 00:26:23.695 [2024-11-26 21:07:14.530517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.695 [2024-11-26 21:07:14.530544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.695 qpair failed and we were unable to recover it. 
00:26:23.695 [2024-11-26 21:07:14.530703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.695 [2024-11-26 21:07:14.530730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.695 qpair failed and we were unable to recover it. 00:26:23.695 [2024-11-26 21:07:14.530891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.695 [2024-11-26 21:07:14.530932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.695 qpair failed and we were unable to recover it. 00:26:23.695 [2024-11-26 21:07:14.531107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.695 [2024-11-26 21:07:14.531149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.695 qpair failed and we were unable to recover it. 00:26:23.695 [2024-11-26 21:07:14.531309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.695 [2024-11-26 21:07:14.531342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.695 qpair failed and we were unable to recover it. 00:26:23.695 [2024-11-26 21:07:14.531486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.695 [2024-11-26 21:07:14.531517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.695 qpair failed and we were unable to recover it. 
00:26:23.695 [2024-11-26 21:07:14.531666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.695 [2024-11-26 21:07:14.531700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.695 qpair failed and we were unable to recover it. 00:26:23.695 [2024-11-26 21:07:14.531855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.695 [2024-11-26 21:07:14.531885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.695 qpair failed and we were unable to recover it. 00:26:23.695 [2024-11-26 21:07:14.532066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.695 [2024-11-26 21:07:14.532095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.695 qpair failed and we were unable to recover it. 00:26:23.695 [2024-11-26 21:07:14.532212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.695 [2024-11-26 21:07:14.532242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.695 qpair failed and we were unable to recover it. 00:26:23.695 [2024-11-26 21:07:14.532393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.695 [2024-11-26 21:07:14.532423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.695 qpair failed and we were unable to recover it. 
00:26:23.695 [2024-11-26 21:07:14.532580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.695 [2024-11-26 21:07:14.532610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.695 qpair failed and we were unable to recover it.
00:26:23.695 [2024-11-26 21:07:14.532788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.695 [2024-11-26 21:07:14.532817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.695 qpair failed and we were unable to recover it.
00:26:23.695 [2024-11-26 21:07:14.532957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.695 [2024-11-26 21:07:14.532999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.695 qpair failed and we were unable to recover it.
00:26:23.695 [2024-11-26 21:07:14.533118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.695 [2024-11-26 21:07:14.533149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.695 qpair failed and we were unable to recover it.
00:26:23.695 [2024-11-26 21:07:14.533324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.695 [2024-11-26 21:07:14.533369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.695 qpair failed and we were unable to recover it.
00:26:23.695 [2024-11-26 21:07:14.533503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.695 [2024-11-26 21:07:14.533530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.695 qpair failed and we were unable to recover it.
00:26:23.695 [2024-11-26 21:07:14.533662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.695 [2024-11-26 21:07:14.533695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.695 qpair failed and we were unable to recover it.
00:26:23.695 [2024-11-26 21:07:14.533823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.695 [2024-11-26 21:07:14.533870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.695 qpair failed and we were unable to recover it.
00:26:23.695 [2024-11-26 21:07:14.534019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.695 [2024-11-26 21:07:14.534064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.695 qpair failed and we were unable to recover it.
00:26:23.695 [2024-11-26 21:07:14.534225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.695 [2024-11-26 21:07:14.534270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.695 qpair failed and we were unable to recover it.
00:26:23.695 [2024-11-26 21:07:14.534432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.696 [2024-11-26 21:07:14.534459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.696 qpair failed and we were unable to recover it.
00:26:23.696 [2024-11-26 21:07:14.534569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.696 [2024-11-26 21:07:14.534596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.696 qpair failed and we were unable to recover it.
00:26:23.696 [2024-11-26 21:07:14.534717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.696 [2024-11-26 21:07:14.534745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.696 qpair failed and we were unable to recover it.
00:26:23.696 [2024-11-26 21:07:14.534908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.696 [2024-11-26 21:07:14.534935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.696 qpair failed and we were unable to recover it.
00:26:23.696 [2024-11-26 21:07:14.535060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.696 [2024-11-26 21:07:14.535090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.696 qpair failed and we were unable to recover it.
00:26:23.696 [2024-11-26 21:07:14.535269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.696 [2024-11-26 21:07:14.535301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.696 qpair failed and we were unable to recover it.
00:26:23.696 [2024-11-26 21:07:14.535433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.696 [2024-11-26 21:07:14.535460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.696 qpair failed and we were unable to recover it.
00:26:23.696 [2024-11-26 21:07:14.535595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.696 [2024-11-26 21:07:14.535622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.696 qpair failed and we were unable to recover it.
00:26:23.696 [2024-11-26 21:07:14.535775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.696 [2024-11-26 21:07:14.535820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.696 qpair failed and we were unable to recover it.
00:26:23.696 [2024-11-26 21:07:14.535974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.696 [2024-11-26 21:07:14.536003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.696 qpair failed and we were unable to recover it.
00:26:23.696 [2024-11-26 21:07:14.536136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.696 [2024-11-26 21:07:14.536166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.696 qpair failed and we were unable to recover it.
00:26:23.696 [2024-11-26 21:07:14.536343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.696 [2024-11-26 21:07:14.536370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.696 qpair failed and we were unable to recover it.
00:26:23.696 [2024-11-26 21:07:14.536476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.696 [2024-11-26 21:07:14.536503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.696 qpair failed and we were unable to recover it.
00:26:23.696 [2024-11-26 21:07:14.536664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.696 [2024-11-26 21:07:14.536696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.696 qpair failed and we were unable to recover it.
00:26:23.696 [2024-11-26 21:07:14.536820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.696 [2024-11-26 21:07:14.536866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.696 qpair failed and we were unable to recover it.
00:26:23.696 [2024-11-26 21:07:14.537026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.696 [2024-11-26 21:07:14.537056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.696 qpair failed and we were unable to recover it.
00:26:23.696 [2024-11-26 21:07:14.537212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.696 [2024-11-26 21:07:14.537240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.696 qpair failed and we were unable to recover it.
00:26:23.696 [2024-11-26 21:07:14.537381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.696 [2024-11-26 21:07:14.537408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.696 qpair failed and we were unable to recover it.
00:26:23.696 [2024-11-26 21:07:14.537547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.696 [2024-11-26 21:07:14.537577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.696 qpair failed and we were unable to recover it.
00:26:23.696 [2024-11-26 21:07:14.537734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.696 [2024-11-26 21:07:14.537766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.696 qpair failed and we were unable to recover it.
00:26:23.696 [2024-11-26 21:07:14.537912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.696 [2024-11-26 21:07:14.537942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.696 qpair failed and we were unable to recover it.
00:26:23.696 [2024-11-26 21:07:14.538120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.696 [2024-11-26 21:07:14.538150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.696 qpair failed and we were unable to recover it.
00:26:23.696 [2024-11-26 21:07:14.538300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.696 [2024-11-26 21:07:14.538329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.696 qpair failed and we were unable to recover it.
00:26:23.696 [2024-11-26 21:07:14.538466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.696 [2024-11-26 21:07:14.538509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.696 qpair failed and we were unable to recover it.
00:26:23.696 [2024-11-26 21:07:14.538708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.696 [2024-11-26 21:07:14.538736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.696 qpair failed and we were unable to recover it.
00:26:23.696 [2024-11-26 21:07:14.538898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.696 [2024-11-26 21:07:14.538928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.696 qpair failed and we were unable to recover it.
00:26:23.696 [2024-11-26 21:07:14.539174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.696 [2024-11-26 21:07:14.539225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.696 qpair failed and we were unable to recover it.
00:26:23.696 [2024-11-26 21:07:14.539348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.696 [2024-11-26 21:07:14.539440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.696 qpair failed and we were unable to recover it.
00:26:23.696 [2024-11-26 21:07:14.539571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.696 [2024-11-26 21:07:14.539598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.696 qpair failed and we were unable to recover it.
00:26:23.696 [2024-11-26 21:07:14.539758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.696 [2024-11-26 21:07:14.539790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.696 qpair failed and we were unable to recover it.
00:26:23.696 [2024-11-26 21:07:14.539964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.696 [2024-11-26 21:07:14.539994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.696 qpair failed and we were unable to recover it.
00:26:23.696 [2024-11-26 21:07:14.540131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.696 [2024-11-26 21:07:14.540160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.696 qpair failed and we were unable to recover it.
00:26:23.696 [2024-11-26 21:07:14.540314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.696 [2024-11-26 21:07:14.540343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.696 qpair failed and we were unable to recover it.
00:26:23.696 [2024-11-26 21:07:14.540577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.696 [2024-11-26 21:07:14.540623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.696 qpair failed and we were unable to recover it.
00:26:23.696 [2024-11-26 21:07:14.540757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.696 [2024-11-26 21:07:14.540787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.696 qpair failed and we were unable to recover it.
00:26:23.696 [2024-11-26 21:07:14.540939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.696 [2024-11-26 21:07:14.540969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.696 qpair failed and we were unable to recover it.
00:26:23.696 [2024-11-26 21:07:14.541149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.696 [2024-11-26 21:07:14.541213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.696 qpair failed and we were unable to recover it.
00:26:23.696 [2024-11-26 21:07:14.541340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.696 [2024-11-26 21:07:14.541369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.696 qpair failed and we were unable to recover it.
00:26:23.696 [2024-11-26 21:07:14.541539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.697 [2024-11-26 21:07:14.541569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.697 qpair failed and we were unable to recover it.
00:26:23.697 [2024-11-26 21:07:14.541693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.697 [2024-11-26 21:07:14.541726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.697 qpair failed and we were unable to recover it.
00:26:23.697 [2024-11-26 21:07:14.541880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.697 [2024-11-26 21:07:14.541907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.697 qpair failed and we were unable to recover it.
00:26:23.697 [2024-11-26 21:07:14.542070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.697 [2024-11-26 21:07:14.542100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.697 qpair failed and we were unable to recover it.
00:26:23.697 [2024-11-26 21:07:14.542332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.697 [2024-11-26 21:07:14.542362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.697 qpair failed and we were unable to recover it.
00:26:23.697 [2024-11-26 21:07:14.542519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.697 [2024-11-26 21:07:14.542550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.697 qpair failed and we were unable to recover it.
00:26:23.697 [2024-11-26 21:07:14.542735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.697 [2024-11-26 21:07:14.542765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.697 qpair failed and we were unable to recover it.
00:26:23.697 [2024-11-26 21:07:14.542942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.697 [2024-11-26 21:07:14.542975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.697 qpair failed and we were unable to recover it.
00:26:23.697 [2024-11-26 21:07:14.543136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.697 [2024-11-26 21:07:14.543165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.697 qpair failed and we were unable to recover it.
00:26:23.697 [2024-11-26 21:07:14.543309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.697 [2024-11-26 21:07:14.543338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.697 qpair failed and we were unable to recover it.
00:26:23.697 [2024-11-26 21:07:14.543503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.697 [2024-11-26 21:07:14.543532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.697 qpair failed and we were unable to recover it.
00:26:23.697 [2024-11-26 21:07:14.543710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.697 [2024-11-26 21:07:14.543753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.697 qpair failed and we were unable to recover it.
00:26:23.697 [2024-11-26 21:07:14.543889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.697 [2024-11-26 21:07:14.543916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.697 qpair failed and we were unable to recover it.
00:26:23.697 [2024-11-26 21:07:14.544049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.697 [2024-11-26 21:07:14.544075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.697 qpair failed and we were unable to recover it.
00:26:23.697 [2024-11-26 21:07:14.544230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.697 [2024-11-26 21:07:14.544259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.697 qpair failed and we were unable to recover it.
00:26:23.697 [2024-11-26 21:07:14.544405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.697 [2024-11-26 21:07:14.544434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.697 qpair failed and we were unable to recover it.
00:26:23.697 [2024-11-26 21:07:14.544580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.697 [2024-11-26 21:07:14.544609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.697 qpair failed and we were unable to recover it.
00:26:23.697 [2024-11-26 21:07:14.544768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.697 [2024-11-26 21:07:14.544795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.697 qpair failed and we were unable to recover it.
00:26:23.697 [2024-11-26 21:07:14.544897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.697 [2024-11-26 21:07:14.544924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.697 qpair failed and we were unable to recover it.
00:26:23.697 [2024-11-26 21:07:14.545055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.697 [2024-11-26 21:07:14.545082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.697 qpair failed and we were unable to recover it.
00:26:23.697 [2024-11-26 21:07:14.545218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.697 [2024-11-26 21:07:14.545265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.697 qpair failed and we were unable to recover it.
00:26:23.697 [2024-11-26 21:07:14.545411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.697 [2024-11-26 21:07:14.545440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.697 qpair failed and we were unable to recover it.
00:26:23.697 [2024-11-26 21:07:14.545604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.697 [2024-11-26 21:07:14.545633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.697 qpair failed and we were unable to recover it.
00:26:23.697 [2024-11-26 21:07:14.545768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.697 [2024-11-26 21:07:14.545796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.697 qpair failed and we were unable to recover it.
00:26:23.697 [2024-11-26 21:07:14.545907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.697 [2024-11-26 21:07:14.545933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.697 qpair failed and we were unable to recover it.
00:26:23.697 [2024-11-26 21:07:14.546100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.697 [2024-11-26 21:07:14.546127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.697 qpair failed and we were unable to recover it.
00:26:23.697 [2024-11-26 21:07:14.546288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.697 [2024-11-26 21:07:14.546317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.697 qpair failed and we were unable to recover it.
00:26:23.697 [2024-11-26 21:07:14.546459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.697 [2024-11-26 21:07:14.546489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.697 qpair failed and we were unable to recover it.
00:26:23.697 [2024-11-26 21:07:14.546643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.697 [2024-11-26 21:07:14.546669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.697 qpair failed and we were unable to recover it.
00:26:23.697 [2024-11-26 21:07:14.546809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.697 [2024-11-26 21:07:14.546835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.697 qpair failed and we were unable to recover it.
00:26:23.697 [2024-11-26 21:07:14.546939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.697 [2024-11-26 21:07:14.546982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.697 qpair failed and we were unable to recover it.
00:26:23.697 [2024-11-26 21:07:14.547155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.697 [2024-11-26 21:07:14.547184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.697 qpair failed and we were unable to recover it.
00:26:23.697 [2024-11-26 21:07:14.547333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.697 [2024-11-26 21:07:14.547362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.697 qpair failed and we were unable to recover it.
00:26:23.697 [2024-11-26 21:07:14.547512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.697 [2024-11-26 21:07:14.547543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.697 qpair failed and we were unable to recover it.
00:26:23.697 [2024-11-26 21:07:14.547708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.697 [2024-11-26 21:07:14.547735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.697 qpair failed and we were unable to recover it.
00:26:23.697 [2024-11-26 21:07:14.547902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.697 [2024-11-26 21:07:14.547928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.697 qpair failed and we were unable to recover it.
00:26:23.697 [2024-11-26 21:07:14.548181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.697 [2024-11-26 21:07:14.548235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.697 qpair failed and we were unable to recover it.
00:26:23.697 [2024-11-26 21:07:14.548379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.697 [2024-11-26 21:07:14.548408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.697 qpair failed and we were unable to recover it.
00:26:23.697 [2024-11-26 21:07:14.548574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.697 [2024-11-26 21:07:14.548601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.697 qpair failed and we were unable to recover it.
00:26:23.697 [2024-11-26 21:07:14.548740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.698 [2024-11-26 21:07:14.548768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.698 qpair failed and we were unable to recover it.
00:26:23.698 [2024-11-26 21:07:14.548907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.698 [2024-11-26 21:07:14.548935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.698 qpair failed and we were unable to recover it.
00:26:23.698 [2024-11-26 21:07:14.549088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.698 [2024-11-26 21:07:14.549117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.698 qpair failed and we were unable to recover it.
00:26:23.698 [2024-11-26 21:07:14.549243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.698 [2024-11-26 21:07:14.549273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.698 qpair failed and we were unable to recover it.
00:26:23.698 [2024-11-26 21:07:14.549448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.698 [2024-11-26 21:07:14.549478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.698 qpair failed and we were unable to recover it.
00:26:23.698 [2024-11-26 21:07:14.549591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.698 [2024-11-26 21:07:14.549620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.698 qpair failed and we were unable to recover it.
00:26:23.698 [2024-11-26 21:07:14.549798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.698 [2024-11-26 21:07:14.549825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.698 qpair failed and we were unable to recover it.
00:26:23.698 [2024-11-26 21:07:14.549968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.698 [2024-11-26 21:07:14.549994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.698 qpair failed and we were unable to recover it.
00:26:23.698 [2024-11-26 21:07:14.550133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.698 [2024-11-26 21:07:14.550159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.698 qpair failed and we were unable to recover it.
00:26:23.698 [2024-11-26 21:07:14.550313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.698 [2024-11-26 21:07:14.550346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.698 qpair failed and we were unable to recover it.
00:26:23.698 [2024-11-26 21:07:14.550547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.698 [2024-11-26 21:07:14.550577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.698 qpair failed and we were unable to recover it.
00:26:23.698 [2024-11-26 21:07:14.550729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.698 [2024-11-26 21:07:14.550772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.698 qpair failed and we were unable to recover it.
00:26:23.698 [2024-11-26 21:07:14.550900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.698 [2024-11-26 21:07:14.550927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.698 qpair failed and we were unable to recover it.
00:26:23.698 [2024-11-26 21:07:14.551057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.698 [2024-11-26 21:07:14.551083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.698 qpair failed and we were unable to recover it.
00:26:23.698 [2024-11-26 21:07:14.551261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.698 [2024-11-26 21:07:14.551290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.698 qpair failed and we were unable to recover it.
00:26:23.698 [2024-11-26 21:07:14.551436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.698 [2024-11-26 21:07:14.551466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.698 qpair failed and we were unable to recover it.
00:26:23.698 [2024-11-26 21:07:14.551611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.698 [2024-11-26 21:07:14.551640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.698 qpair failed and we were unable to recover it.
00:26:23.698 [2024-11-26 21:07:14.551812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.698 [2024-11-26 21:07:14.551839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.698 qpair failed and we were unable to recover it.
00:26:23.698 [2024-11-26 21:07:14.551948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.698 [2024-11-26 21:07:14.551991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.698 qpair failed and we were unable to recover it.
00:26:23.698 [2024-11-26 21:07:14.552144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.698 [2024-11-26 21:07:14.552170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.698 qpair failed and we were unable to recover it.
00:26:23.698 [2024-11-26 21:07:14.552348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.698 [2024-11-26 21:07:14.552377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.698 qpair failed and we were unable to recover it.
00:26:23.698 [2024-11-26 21:07:14.552531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.698 [2024-11-26 21:07:14.552572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.698 qpair failed and we were unable to recover it.
00:26:23.698 [2024-11-26 21:07:14.552757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.698 [2024-11-26 21:07:14.552784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.698 qpair failed and we were unable to recover it.
00:26:23.698 [2024-11-26 21:07:14.552954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.698 [2024-11-26 21:07:14.552998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.698 qpair failed and we were unable to recover it.
00:26:23.698 [2024-11-26 21:07:14.553143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.698 [2024-11-26 21:07:14.553172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.698 qpair failed and we were unable to recover it.
00:26:23.698 [2024-11-26 21:07:14.553349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.698 [2024-11-26 21:07:14.553378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.698 qpair failed and we were unable to recover it.
00:26:23.698 [2024-11-26 21:07:14.553515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.698 [2024-11-26 21:07:14.553544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.698 qpair failed and we were unable to recover it.
00:26:23.698 [2024-11-26 21:07:14.553682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.698 [2024-11-26 21:07:14.553716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.698 qpair failed and we were unable to recover it.
00:26:23.698 [2024-11-26 21:07:14.553856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.698 [2024-11-26 21:07:14.553882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.698 qpair failed and we were unable to recover it. 00:26:23.698 [2024-11-26 21:07:14.553989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.698 [2024-11-26 21:07:14.554031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.698 qpair failed and we were unable to recover it. 00:26:23.698 [2024-11-26 21:07:14.554181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.698 [2024-11-26 21:07:14.554209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.698 qpair failed and we were unable to recover it. 00:26:23.698 [2024-11-26 21:07:14.554347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.698 [2024-11-26 21:07:14.554376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.698 qpair failed and we were unable to recover it. 00:26:23.698 [2024-11-26 21:07:14.554526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.698 [2024-11-26 21:07:14.554555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.698 qpair failed and we were unable to recover it. 
00:26:23.698 [2024-11-26 21:07:14.554707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.698 [2024-11-26 21:07:14.554755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.698 qpair failed and we were unable to recover it. 00:26:23.698 [2024-11-26 21:07:14.554870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.698 [2024-11-26 21:07:14.554898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.698 qpair failed and we were unable to recover it. 00:26:23.698 [2024-11-26 21:07:14.555031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.698 [2024-11-26 21:07:14.555076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.698 qpair failed and we were unable to recover it. 00:26:23.698 [2024-11-26 21:07:14.555204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.698 [2024-11-26 21:07:14.555253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.698 qpair failed and we were unable to recover it. 00:26:23.698 [2024-11-26 21:07:14.555435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.698 [2024-11-26 21:07:14.555464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.698 qpair failed and we were unable to recover it. 
00:26:23.698 [2024-11-26 21:07:14.555652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.698 [2024-11-26 21:07:14.555678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.698 qpair failed and we were unable to recover it. 00:26:23.699 [2024-11-26 21:07:14.555790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.699 [2024-11-26 21:07:14.555817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.699 qpair failed and we were unable to recover it. 00:26:23.699 [2024-11-26 21:07:14.555986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.699 [2024-11-26 21:07:14.556044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.699 qpair failed and we were unable to recover it. 00:26:23.699 [2024-11-26 21:07:14.556211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.699 [2024-11-26 21:07:14.556258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.699 qpair failed and we were unable to recover it. 00:26:23.699 [2024-11-26 21:07:14.556416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.699 [2024-11-26 21:07:14.556463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.699 qpair failed and we were unable to recover it. 
00:26:23.699 [2024-11-26 21:07:14.556623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.699 [2024-11-26 21:07:14.556650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.699 qpair failed and we were unable to recover it. 00:26:23.699 [2024-11-26 21:07:14.556767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.699 [2024-11-26 21:07:14.556796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.699 qpair failed and we were unable to recover it. 00:26:23.699 [2024-11-26 21:07:14.556983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.699 [2024-11-26 21:07:14.557027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.699 qpair failed and we were unable to recover it. 00:26:23.699 [2024-11-26 21:07:14.557151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.699 [2024-11-26 21:07:14.557197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.699 qpair failed and we were unable to recover it. 00:26:23.699 [2024-11-26 21:07:14.557353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.699 [2024-11-26 21:07:14.557397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.699 qpair failed and we were unable to recover it. 
00:26:23.699 [2024-11-26 21:07:14.557534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.699 [2024-11-26 21:07:14.557561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.699 qpair failed and we were unable to recover it. 00:26:23.699 [2024-11-26 21:07:14.557738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.699 [2024-11-26 21:07:14.557769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.699 qpair failed and we were unable to recover it. 00:26:23.699 [2024-11-26 21:07:14.557943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.699 [2024-11-26 21:07:14.557974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.699 qpair failed and we were unable to recover it. 00:26:23.699 [2024-11-26 21:07:14.558139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.699 [2024-11-26 21:07:14.558182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.699 qpair failed and we were unable to recover it. 00:26:23.699 [2024-11-26 21:07:14.558296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.699 [2024-11-26 21:07:14.558323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.699 qpair failed and we were unable to recover it. 
00:26:23.699 [2024-11-26 21:07:14.558452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.699 [2024-11-26 21:07:14.558479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.699 qpair failed and we were unable to recover it. 00:26:23.699 [2024-11-26 21:07:14.558625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.699 [2024-11-26 21:07:14.558665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.699 qpair failed and we were unable to recover it. 00:26:23.699 [2024-11-26 21:07:14.558855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.699 [2024-11-26 21:07:14.558896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.699 qpair failed and we were unable to recover it. 00:26:23.699 [2024-11-26 21:07:14.559039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.699 [2024-11-26 21:07:14.559067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.699 qpair failed and we were unable to recover it. 00:26:23.699 [2024-11-26 21:07:14.559206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.699 [2024-11-26 21:07:14.559233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.699 qpair failed and we were unable to recover it. 
00:26:23.699 [2024-11-26 21:07:14.559366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.699 [2024-11-26 21:07:14.559392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.699 qpair failed and we were unable to recover it. 00:26:23.699 [2024-11-26 21:07:14.559498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.699 [2024-11-26 21:07:14.559524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.699 qpair failed and we were unable to recover it. 00:26:23.699 [2024-11-26 21:07:14.559641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.699 [2024-11-26 21:07:14.559670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.699 qpair failed and we were unable to recover it. 00:26:23.699 [2024-11-26 21:07:14.559814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.699 [2024-11-26 21:07:14.559842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.699 qpair failed and we were unable to recover it. 00:26:23.699 [2024-11-26 21:07:14.560027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.699 [2024-11-26 21:07:14.560073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.699 qpair failed and we were unable to recover it. 
00:26:23.699 [2024-11-26 21:07:14.560229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.699 [2024-11-26 21:07:14.560277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.699 qpair failed and we were unable to recover it. 00:26:23.699 [2024-11-26 21:07:14.560403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.699 [2024-11-26 21:07:14.560433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.699 qpair failed and we were unable to recover it. 00:26:23.699 [2024-11-26 21:07:14.560608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.699 [2024-11-26 21:07:14.560635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.699 qpair failed and we were unable to recover it. 00:26:23.699 [2024-11-26 21:07:14.560769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.699 [2024-11-26 21:07:14.560819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.699 qpair failed and we were unable to recover it. 00:26:23.699 [2024-11-26 21:07:14.561001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.699 [2024-11-26 21:07:14.561046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.699 qpair failed and we were unable to recover it. 
00:26:23.699 [2024-11-26 21:07:14.561192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.699 [2024-11-26 21:07:14.561220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.699 qpair failed and we were unable to recover it. 00:26:23.699 [2024-11-26 21:07:14.561359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.699 [2024-11-26 21:07:14.561403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.699 qpair failed and we were unable to recover it. 00:26:23.699 [2024-11-26 21:07:14.561539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.699 [2024-11-26 21:07:14.561567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.699 qpair failed and we were unable to recover it. 00:26:23.699 [2024-11-26 21:07:14.561751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.699 [2024-11-26 21:07:14.561797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.699 qpair failed and we were unable to recover it. 00:26:23.699 [2024-11-26 21:07:14.561927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.699 [2024-11-26 21:07:14.561972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.699 qpair failed and we were unable to recover it. 
00:26:23.699 [2024-11-26 21:07:14.562171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.700 [2024-11-26 21:07:14.562215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.700 qpair failed and we were unable to recover it. 00:26:23.700 [2024-11-26 21:07:14.562369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.700 [2024-11-26 21:07:14.562414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.700 qpair failed and we were unable to recover it. 00:26:23.700 [2024-11-26 21:07:14.562555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.700 [2024-11-26 21:07:14.562582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.700 qpair failed and we were unable to recover it. 00:26:23.700 [2024-11-26 21:07:14.562708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.700 [2024-11-26 21:07:14.562754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.700 qpair failed and we were unable to recover it. 00:26:23.700 [2024-11-26 21:07:14.562934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.700 [2024-11-26 21:07:14.562964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.700 qpair failed and we were unable to recover it. 
00:26:23.700 [2024-11-26 21:07:14.563136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.700 [2024-11-26 21:07:14.563165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.700 qpair failed and we were unable to recover it. 00:26:23.700 [2024-11-26 21:07:14.563344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.700 [2024-11-26 21:07:14.563390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.700 qpair failed and we were unable to recover it. 00:26:23.700 [2024-11-26 21:07:14.563521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.700 [2024-11-26 21:07:14.563548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.700 qpair failed and we were unable to recover it. 00:26:23.700 [2024-11-26 21:07:14.563674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.700 [2024-11-26 21:07:14.563708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.700 qpair failed and we were unable to recover it. 00:26:23.700 [2024-11-26 21:07:14.563862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.700 [2024-11-26 21:07:14.563909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.700 qpair failed and we were unable to recover it. 
00:26:23.700 [2024-11-26 21:07:14.564065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.700 [2024-11-26 21:07:14.564109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.700 qpair failed and we were unable to recover it. 00:26:23.700 [2024-11-26 21:07:14.564263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.700 [2024-11-26 21:07:14.564293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.700 qpair failed and we were unable to recover it. 00:26:23.700 [2024-11-26 21:07:14.564448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.700 [2024-11-26 21:07:14.564476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.700 qpair failed and we were unable to recover it. 00:26:23.700 [2024-11-26 21:07:14.564611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.700 [2024-11-26 21:07:14.564638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.700 qpair failed and we were unable to recover it. 00:26:23.700 [2024-11-26 21:07:14.564796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.700 [2024-11-26 21:07:14.564841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.700 qpair failed and we were unable to recover it. 
00:26:23.700 [2024-11-26 21:07:14.565130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.700 [2024-11-26 21:07:14.565179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.700 qpair failed and we were unable to recover it. 00:26:23.700 [2024-11-26 21:07:14.565313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.700 [2024-11-26 21:07:14.565340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.700 qpair failed and we were unable to recover it. 00:26:23.700 [2024-11-26 21:07:14.565450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.700 [2024-11-26 21:07:14.565478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.700 qpair failed and we were unable to recover it. 00:26:23.700 [2024-11-26 21:07:14.565616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.700 [2024-11-26 21:07:14.565643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.700 qpair failed and we were unable to recover it. 00:26:23.700 [2024-11-26 21:07:14.565809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.700 [2024-11-26 21:07:14.565855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.700 qpair failed and we were unable to recover it. 
00:26:23.700 [2024-11-26 21:07:14.566001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.700 [2024-11-26 21:07:14.566031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.700 qpair failed and we were unable to recover it. 00:26:23.700 [2024-11-26 21:07:14.566229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.700 [2024-11-26 21:07:14.566273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.700 qpair failed and we were unable to recover it. 00:26:23.700 [2024-11-26 21:07:14.566408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.700 [2024-11-26 21:07:14.566435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.700 qpair failed and we were unable to recover it. 00:26:23.700 [2024-11-26 21:07:14.566570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.700 [2024-11-26 21:07:14.566597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.700 qpair failed and we were unable to recover it. 00:26:23.700 [2024-11-26 21:07:14.566759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.700 [2024-11-26 21:07:14.566787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.700 qpair failed and we were unable to recover it. 
00:26:23.700 [2024-11-26 21:07:14.566914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.700 [2024-11-26 21:07:14.566941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.700 qpair failed and we were unable to recover it. 00:26:23.700 [2024-11-26 21:07:14.567102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.700 [2024-11-26 21:07:14.567129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.700 qpair failed and we were unable to recover it. 00:26:23.700 [2024-11-26 21:07:14.567267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.700 [2024-11-26 21:07:14.567295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.700 qpair failed and we were unable to recover it. 00:26:23.700 [2024-11-26 21:07:14.567434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.700 [2024-11-26 21:07:14.567461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.700 qpair failed and we were unable to recover it. 00:26:23.700 [2024-11-26 21:07:14.567570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.700 [2024-11-26 21:07:14.567597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.700 qpair failed and we were unable to recover it. 
00:26:23.700 [2024-11-26 21:07:14.567735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.700 [2024-11-26 21:07:14.567767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.700 qpair failed and we were unable to recover it. 00:26:23.700 [2024-11-26 21:07:14.567895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.700 [2024-11-26 21:07:14.567922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.700 qpair failed and we were unable to recover it. 00:26:23.700 [2024-11-26 21:07:14.568035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.700 [2024-11-26 21:07:14.568062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.700 qpair failed and we were unable to recover it. 00:26:23.700 [2024-11-26 21:07:14.568199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.700 [2024-11-26 21:07:14.568225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.700 qpair failed and we were unable to recover it. 00:26:23.700 [2024-11-26 21:07:14.568359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.700 [2024-11-26 21:07:14.568386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.700 qpair failed and we were unable to recover it. 
00:26:23.700 [2024-11-26 21:07:14.568499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.700 [2024-11-26 21:07:14.568526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.700 qpair failed and we were unable to recover it. 00:26:23.700 [2024-11-26 21:07:14.568634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.700 [2024-11-26 21:07:14.568664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.700 qpair failed and we were unable to recover it. 00:26:23.700 [2024-11-26 21:07:14.568785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.700 [2024-11-26 21:07:14.568812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.700 qpair failed and we were unable to recover it. 00:26:23.700 [2024-11-26 21:07:14.568923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.701 [2024-11-26 21:07:14.568950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.701 qpair failed and we were unable to recover it. 00:26:23.701 [2024-11-26 21:07:14.569059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.701 [2024-11-26 21:07:14.569085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.701 qpair failed and we were unable to recover it. 
00:26:23.701 [2024-11-26 21:07:14.569210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.701 [2024-11-26 21:07:14.569240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.701 qpair failed and we were unable to recover it. 00:26:23.701 [2024-11-26 21:07:14.569440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.701 [2024-11-26 21:07:14.569484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.701 qpair failed and we were unable to recover it. 00:26:23.701 [2024-11-26 21:07:14.569623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.701 [2024-11-26 21:07:14.569652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.701 qpair failed and we were unable to recover it. 00:26:23.701 [2024-11-26 21:07:14.569824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.701 [2024-11-26 21:07:14.569869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.701 qpair failed and we were unable to recover it. 00:26:23.701 [2024-11-26 21:07:14.570065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.701 [2024-11-26 21:07:14.570110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.701 qpair failed and we were unable to recover it. 
00:26:23.703 [2024-11-26 21:07:14.588776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.703 [2024-11-26 21:07:14.588803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.703 qpair failed and we were unable to recover it. 00:26:23.703 [2024-11-26 21:07:14.588918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.703 [2024-11-26 21:07:14.588945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.703 qpair failed and we were unable to recover it. 00:26:23.703 [2024-11-26 21:07:14.589109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.703 [2024-11-26 21:07:14.589135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.703 qpair failed and we were unable to recover it. 00:26:23.703 [2024-11-26 21:07:14.589303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.703 [2024-11-26 21:07:14.589333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.703 qpair failed and we were unable to recover it. 00:26:23.703 [2024-11-26 21:07:14.589513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.703 [2024-11-26 21:07:14.589540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.703 qpair failed and we were unable to recover it. 
00:26:23.703 [2024-11-26 21:07:14.589655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.703 [2024-11-26 21:07:14.589681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.703 qpair failed and we were unable to recover it. 00:26:23.703 [2024-11-26 21:07:14.589824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.703 [2024-11-26 21:07:14.589850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.703 qpair failed and we were unable to recover it. 00:26:23.703 [2024-11-26 21:07:14.589974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.703 [2024-11-26 21:07:14.590005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.703 qpair failed and we were unable to recover it. 00:26:23.703 [2024-11-26 21:07:14.590112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.703 [2024-11-26 21:07:14.590138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.703 qpair failed and we were unable to recover it. 00:26:23.703 [2024-11-26 21:07:14.590240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.704 [2024-11-26 21:07:14.590266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.704 qpair failed and we were unable to recover it. 
00:26:23.704 [2024-11-26 21:07:14.590424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.704 [2024-11-26 21:07:14.590451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.704 qpair failed and we were unable to recover it. 00:26:23.704 [2024-11-26 21:07:14.590609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.704 [2024-11-26 21:07:14.590651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.704 qpair failed and we were unable to recover it. 00:26:23.704 [2024-11-26 21:07:14.590776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.704 [2024-11-26 21:07:14.590803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.704 qpair failed and we were unable to recover it. 00:26:23.704 [2024-11-26 21:07:14.590909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.704 [2024-11-26 21:07:14.590937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.704 qpair failed and we were unable to recover it. 00:26:23.704 [2024-11-26 21:07:14.591046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.704 [2024-11-26 21:07:14.591073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.704 qpair failed and we were unable to recover it. 
00:26:23.704 [2024-11-26 21:07:14.591230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.704 [2024-11-26 21:07:14.591256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.704 qpair failed and we were unable to recover it. 00:26:23.704 [2024-11-26 21:07:14.591376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.704 [2024-11-26 21:07:14.591403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.704 qpair failed and we were unable to recover it. 00:26:23.704 [2024-11-26 21:07:14.591505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.704 [2024-11-26 21:07:14.591532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.704 qpair failed and we were unable to recover it. 00:26:23.704 [2024-11-26 21:07:14.591698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.704 [2024-11-26 21:07:14.591738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.704 qpair failed and we were unable to recover it. 00:26:23.704 [2024-11-26 21:07:14.591855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.704 [2024-11-26 21:07:14.591882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.704 qpair failed and we were unable to recover it. 
00:26:23.704 [2024-11-26 21:07:14.592015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.704 [2024-11-26 21:07:14.592042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.704 qpair failed and we were unable to recover it. 00:26:23.704 [2024-11-26 21:07:14.592193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.704 [2024-11-26 21:07:14.592222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.704 qpair failed and we were unable to recover it. 00:26:23.704 [2024-11-26 21:07:14.592347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.704 [2024-11-26 21:07:14.592374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.704 qpair failed and we were unable to recover it. 00:26:23.704 [2024-11-26 21:07:14.592511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.704 [2024-11-26 21:07:14.592537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.704 qpair failed and we were unable to recover it. 00:26:23.991 [2024-11-26 21:07:14.592701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.991 [2024-11-26 21:07:14.592732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.991 qpair failed and we were unable to recover it. 
00:26:23.991 [2024-11-26 21:07:14.592892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.991 [2024-11-26 21:07:14.592920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.991 qpair failed and we were unable to recover it. 00:26:23.991 [2024-11-26 21:07:14.593052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.991 [2024-11-26 21:07:14.593096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.991 qpair failed and we were unable to recover it. 00:26:23.991 [2024-11-26 21:07:14.593247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.991 [2024-11-26 21:07:14.593277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.991 qpair failed and we were unable to recover it. 00:26:23.991 [2024-11-26 21:07:14.593435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.991 [2024-11-26 21:07:14.593461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.991 qpair failed and we were unable to recover it. 00:26:23.991 [2024-11-26 21:07:14.593612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.991 [2024-11-26 21:07:14.593656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.991 qpair failed and we were unable to recover it. 
00:26:23.991 [2024-11-26 21:07:14.593827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.991 [2024-11-26 21:07:14.593854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.991 qpair failed and we were unable to recover it. 00:26:23.991 [2024-11-26 21:07:14.593961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.991 [2024-11-26 21:07:14.593987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.991 qpair failed and we were unable to recover it. 00:26:23.991 [2024-11-26 21:07:14.594125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.991 [2024-11-26 21:07:14.594152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.991 qpair failed and we were unable to recover it. 00:26:23.991 [2024-11-26 21:07:14.594269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.991 [2024-11-26 21:07:14.594300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.991 qpair failed and we were unable to recover it. 00:26:23.991 [2024-11-26 21:07:14.594433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.991 [2024-11-26 21:07:14.594460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.991 qpair failed and we were unable to recover it. 
00:26:23.991 [2024-11-26 21:07:14.594606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.991 [2024-11-26 21:07:14.594633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.991 qpair failed and we were unable to recover it. 00:26:23.991 [2024-11-26 21:07:14.594738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.991 [2024-11-26 21:07:14.594765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.991 qpair failed and we were unable to recover it. 00:26:23.991 [2024-11-26 21:07:14.594882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.991 [2024-11-26 21:07:14.594908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.991 qpair failed and we were unable to recover it. 00:26:23.991 [2024-11-26 21:07:14.595088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.991 [2024-11-26 21:07:14.595118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.991 qpair failed and we were unable to recover it. 00:26:23.991 [2024-11-26 21:07:14.595276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.991 [2024-11-26 21:07:14.595303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.991 qpair failed and we were unable to recover it. 
00:26:23.991 [2024-11-26 21:07:14.595439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.991 [2024-11-26 21:07:14.595466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.991 qpair failed and we were unable to recover it. 00:26:23.991 [2024-11-26 21:07:14.595576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.991 [2024-11-26 21:07:14.595619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.991 qpair failed and we were unable to recover it. 00:26:23.991 [2024-11-26 21:07:14.595778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.991 [2024-11-26 21:07:14.595805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.991 qpair failed and we were unable to recover it. 00:26:23.991 [2024-11-26 21:07:14.595943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.991 [2024-11-26 21:07:14.595970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.991 qpair failed and we were unable to recover it. 00:26:23.991 [2024-11-26 21:07:14.596104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.991 [2024-11-26 21:07:14.596149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.991 qpair failed and we were unable to recover it. 
00:26:23.991 [2024-11-26 21:07:14.596308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.991 [2024-11-26 21:07:14.596337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.991 qpair failed and we were unable to recover it. 00:26:23.991 [2024-11-26 21:07:14.596507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.991 [2024-11-26 21:07:14.596536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.991 qpair failed and we were unable to recover it. 00:26:23.991 [2024-11-26 21:07:14.596714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.991 [2024-11-26 21:07:14.596758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.991 qpair failed and we were unable to recover it. 00:26:23.991 [2024-11-26 21:07:14.596871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.991 [2024-11-26 21:07:14.596898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.991 qpair failed and we were unable to recover it. 00:26:23.991 [2024-11-26 21:07:14.597033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.991 [2024-11-26 21:07:14.597060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.991 qpair failed and we were unable to recover it. 
00:26:23.991 [2024-11-26 21:07:14.597230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.991 [2024-11-26 21:07:14.597257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.991 qpair failed and we were unable to recover it. 00:26:23.991 [2024-11-26 21:07:14.597366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.991 [2024-11-26 21:07:14.597392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.991 qpair failed and we were unable to recover it. 00:26:23.991 [2024-11-26 21:07:14.597526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.991 [2024-11-26 21:07:14.597553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.991 qpair failed and we were unable to recover it. 00:26:23.991 [2024-11-26 21:07:14.597696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.991 [2024-11-26 21:07:14.597724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.991 qpair failed and we were unable to recover it. 00:26:23.991 [2024-11-26 21:07:14.597858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.991 [2024-11-26 21:07:14.597888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.991 qpair failed and we were unable to recover it. 
00:26:23.991 [2024-11-26 21:07:14.598068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.991 [2024-11-26 21:07:14.598095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.991 qpair failed and we were unable to recover it. 00:26:23.991 [2024-11-26 21:07:14.598273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.991 [2024-11-26 21:07:14.598303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.991 qpair failed and we were unable to recover it. 00:26:23.991 [2024-11-26 21:07:14.598444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.991 [2024-11-26 21:07:14.598474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.991 qpair failed and we were unable to recover it. 00:26:23.991 [2024-11-26 21:07:14.598598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.991 [2024-11-26 21:07:14.598625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.991 qpair failed and we were unable to recover it. 00:26:23.991 [2024-11-26 21:07:14.598739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.991 [2024-11-26 21:07:14.598766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.991 qpair failed and we were unable to recover it. 
00:26:23.991 [2024-11-26 21:07:14.598871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.992 [2024-11-26 21:07:14.598899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.992 qpair failed and we were unable to recover it. 00:26:23.992 [2024-11-26 21:07:14.599031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.992 [2024-11-26 21:07:14.599057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.992 qpair failed and we were unable to recover it. 00:26:23.992 [2024-11-26 21:07:14.599224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.992 [2024-11-26 21:07:14.599269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.992 qpair failed and we were unable to recover it. 00:26:23.992 [2024-11-26 21:07:14.599394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.992 [2024-11-26 21:07:14.599424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.992 qpair failed and we were unable to recover it. 00:26:23.992 [2024-11-26 21:07:14.599554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.992 [2024-11-26 21:07:14.599581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.992 qpair failed and we were unable to recover it. 
00:26:23.992 [2024-11-26 21:07:14.599752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.992 [2024-11-26 21:07:14.599779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.992 qpair failed and we were unable to recover it. 00:26:23.992 [2024-11-26 21:07:14.599914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.992 [2024-11-26 21:07:14.599941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.992 qpair failed and we were unable to recover it. 00:26:23.992 [2024-11-26 21:07:14.600100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.992 [2024-11-26 21:07:14.600127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.992 qpair failed and we were unable to recover it. 00:26:23.992 [2024-11-26 21:07:14.600284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.992 [2024-11-26 21:07:14.600310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.992 qpair failed and we were unable to recover it. 00:26:23.992 [2024-11-26 21:07:14.600440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.992 [2024-11-26 21:07:14.600467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.992 qpair failed and we were unable to recover it. 
00:26:23.992 [2024-11-26 21:07:14.600609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.992 [2024-11-26 21:07:14.600635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.992 qpair failed and we were unable to recover it. 00:26:23.992 [2024-11-26 21:07:14.600751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.992 [2024-11-26 21:07:14.600778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.992 qpair failed and we were unable to recover it. 00:26:23.992 [2024-11-26 21:07:14.600893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.992 [2024-11-26 21:07:14.600920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.992 qpair failed and we were unable to recover it. 00:26:23.992 [2024-11-26 21:07:14.601031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.992 [2024-11-26 21:07:14.601059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.992 qpair failed and we were unable to recover it. 00:26:23.992 [2024-11-26 21:07:14.601161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.992 [2024-11-26 21:07:14.601188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.992 qpair failed and we were unable to recover it. 
00:26:23.992 [2024-11-26 21:07:14.601314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.992 [2024-11-26 21:07:14.601348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.992 qpair failed and we were unable to recover it. 00:26:23.992 [2024-11-26 21:07:14.601522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.992 [2024-11-26 21:07:14.601551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.992 qpair failed and we were unable to recover it. 00:26:23.992 [2024-11-26 21:07:14.601676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.992 [2024-11-26 21:07:14.601712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.992 qpair failed and we were unable to recover it. 00:26:23.992 [2024-11-26 21:07:14.601846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.992 [2024-11-26 21:07:14.601874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.992 qpair failed and we were unable to recover it. 00:26:23.992 [2024-11-26 21:07:14.601996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.992 [2024-11-26 21:07:14.602025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.992 qpair failed and we were unable to recover it. 
00:26:23.992 [2024-11-26 21:07:14.602191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.992 [2024-11-26 21:07:14.602221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.992 qpair failed and we were unable to recover it. 00:26:23.992 [2024-11-26 21:07:14.602394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.992 [2024-11-26 21:07:14.602423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.992 qpair failed and we were unable to recover it. 00:26:23.992 [2024-11-26 21:07:14.602571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.992 [2024-11-26 21:07:14.602601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.992 qpair failed and we were unable to recover it. 00:26:23.992 [2024-11-26 21:07:14.602744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.992 [2024-11-26 21:07:14.602771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.992 qpair failed and we were unable to recover it. 00:26:23.992 [2024-11-26 21:07:14.602883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.992 [2024-11-26 21:07:14.602909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.992 qpair failed and we were unable to recover it. 
00:26:23.992 [2024-11-26 21:07:14.603042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.992 [2024-11-26 21:07:14.603068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.992 qpair failed and we were unable to recover it.
00:26:23.992 [2024-11-26 21:07:14.603219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.992 [2024-11-26 21:07:14.603248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.992 qpair failed and we were unable to recover it.
00:26:23.992 [2024-11-26 21:07:14.603421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.992 [2024-11-26 21:07:14.603450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.992 qpair failed and we were unable to recover it.
00:26:23.992 [2024-11-26 21:07:14.603578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.992 [2024-11-26 21:07:14.603608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.992 qpair failed and we were unable to recover it.
00:26:23.992 [2024-11-26 21:07:14.603739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.992 [2024-11-26 21:07:14.603767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.992 qpair failed and we were unable to recover it.
00:26:23.992 [2024-11-26 21:07:14.603927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.992 [2024-11-26 21:07:14.603954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.992 qpair failed and we were unable to recover it.
00:26:23.992 [2024-11-26 21:07:14.604120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.992 [2024-11-26 21:07:14.604149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.992 qpair failed and we were unable to recover it.
00:26:23.992 [2024-11-26 21:07:14.604270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.992 [2024-11-26 21:07:14.604312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.992 qpair failed and we were unable to recover it.
00:26:23.992 [2024-11-26 21:07:14.604468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.992 [2024-11-26 21:07:14.604494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.992 qpair failed and we were unable to recover it.
00:26:23.992 [2024-11-26 21:07:14.604600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.992 [2024-11-26 21:07:14.604628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.992 qpair failed and we were unable to recover it.
00:26:23.992 [2024-11-26 21:07:14.604763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.992 [2024-11-26 21:07:14.604790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.992 qpair failed and we were unable to recover it.
00:26:23.992 [2024-11-26 21:07:14.604896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.992 [2024-11-26 21:07:14.604922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.992 qpair failed and we were unable to recover it.
00:26:23.992 [2024-11-26 21:07:14.605034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.992 [2024-11-26 21:07:14.605061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.992 qpair failed and we were unable to recover it.
00:26:23.992 [2024-11-26 21:07:14.605220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.992 [2024-11-26 21:07:14.605247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.993 qpair failed and we were unable to recover it.
00:26:23.993 [2024-11-26 21:07:14.605377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.993 [2024-11-26 21:07:14.605404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.993 qpair failed and we were unable to recover it.
00:26:23.993 [2024-11-26 21:07:14.605567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.993 [2024-11-26 21:07:14.605593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.993 qpair failed and we were unable to recover it.
00:26:23.993 [2024-11-26 21:07:14.605737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.993 [2024-11-26 21:07:14.605764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.993 qpair failed and we were unable to recover it.
00:26:23.993 [2024-11-26 21:07:14.605875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.993 [2024-11-26 21:07:14.605901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.993 qpair failed and we were unable to recover it.
00:26:23.993 [2024-11-26 21:07:14.606049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.993 [2024-11-26 21:07:14.606079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.993 qpair failed and we were unable to recover it.
00:26:23.993 [2024-11-26 21:07:14.606233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.993 [2024-11-26 21:07:14.606262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.993 qpair failed and we were unable to recover it.
00:26:23.993 [2024-11-26 21:07:14.606386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.993 [2024-11-26 21:07:14.606429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.993 qpair failed and we were unable to recover it.
00:26:23.993 [2024-11-26 21:07:14.606558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.993 [2024-11-26 21:07:14.606585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.993 qpair failed and we were unable to recover it.
00:26:23.993 [2024-11-26 21:07:14.606761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.993 [2024-11-26 21:07:14.606801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.993 qpair failed and we were unable to recover it.
00:26:23.993 [2024-11-26 21:07:14.606988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.993 [2024-11-26 21:07:14.607020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.993 qpair failed and we were unable to recover it.
00:26:23.993 [2024-11-26 21:07:14.607173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.993 [2024-11-26 21:07:14.607204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.993 qpair failed and we were unable to recover it.
00:26:23.993 [2024-11-26 21:07:14.607307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.993 [2024-11-26 21:07:14.607340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.993 qpair failed and we were unable to recover it.
00:26:23.993 [2024-11-26 21:07:14.607459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.993 [2024-11-26 21:07:14.607488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.993 qpair failed and we were unable to recover it.
00:26:23.993 [2024-11-26 21:07:14.607643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.993 [2024-11-26 21:07:14.607670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.993 qpair failed and we were unable to recover it.
00:26:23.993 [2024-11-26 21:07:14.607815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.993 [2024-11-26 21:07:14.607843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.993 qpair failed and we were unable to recover it.
00:26:23.993 [2024-11-26 21:07:14.608000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.993 [2024-11-26 21:07:14.608030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.993 qpair failed and we were unable to recover it.
00:26:23.993 [2024-11-26 21:07:14.608158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.993 [2024-11-26 21:07:14.608200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.993 qpair failed and we were unable to recover it.
00:26:23.993 [2024-11-26 21:07:14.608344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.993 [2024-11-26 21:07:14.608375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.993 qpair failed and we were unable to recover it.
00:26:23.993 [2024-11-26 21:07:14.608521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.993 [2024-11-26 21:07:14.608551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.993 qpair failed and we were unable to recover it.
00:26:23.993 [2024-11-26 21:07:14.608752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.993 [2024-11-26 21:07:14.608793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.993 qpair failed and we were unable to recover it.
00:26:23.993 [2024-11-26 21:07:14.608963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.993 [2024-11-26 21:07:14.608991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.993 qpair failed and we were unable to recover it.
00:26:23.993 [2024-11-26 21:07:14.609112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.993 [2024-11-26 21:07:14.609143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.993 qpair failed and we were unable to recover it.
00:26:23.993 [2024-11-26 21:07:14.609323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.993 [2024-11-26 21:07:14.609367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.993 qpair failed and we were unable to recover it.
00:26:23.993 [2024-11-26 21:07:14.609531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.993 [2024-11-26 21:07:14.609563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.993 qpair failed and we were unable to recover it.
00:26:23.993 [2024-11-26 21:07:14.609758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.993 [2024-11-26 21:07:14.609785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.993 qpair failed and we were unable to recover it.
00:26:23.993 [2024-11-26 21:07:14.609920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.993 [2024-11-26 21:07:14.609947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.993 qpair failed and we were unable to recover it.
00:26:23.993 [2024-11-26 21:07:14.610152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.993 [2024-11-26 21:07:14.610208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.993 qpair failed and we were unable to recover it.
00:26:23.993 [2024-11-26 21:07:14.610358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.993 [2024-11-26 21:07:14.610388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.993 qpair failed and we were unable to recover it.
00:26:23.993 [2024-11-26 21:07:14.610515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.993 [2024-11-26 21:07:14.610544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.993 qpair failed and we were unable to recover it.
00:26:23.993 [2024-11-26 21:07:14.610668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.993 [2024-11-26 21:07:14.610718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.993 qpair failed and we were unable to recover it.
00:26:23.993 [2024-11-26 21:07:14.610831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.993 [2024-11-26 21:07:14.610858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.993 qpair failed and we were unable to recover it.
00:26:23.993 [2024-11-26 21:07:14.611028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.993 [2024-11-26 21:07:14.611061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.993 qpair failed and we were unable to recover it.
00:26:23.993 [2024-11-26 21:07:14.611223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.993 [2024-11-26 21:07:14.611253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.993 qpair failed and we were unable to recover it.
00:26:23.993 [2024-11-26 21:07:14.611409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.993 [2024-11-26 21:07:14.611435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.993 qpair failed and we were unable to recover it.
00:26:23.993 [2024-11-26 21:07:14.611593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.993 [2024-11-26 21:07:14.611619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.993 qpair failed and we were unable to recover it.
00:26:23.993 [2024-11-26 21:07:14.611789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.993 [2024-11-26 21:07:14.611816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.993 qpair failed and we were unable to recover it.
00:26:23.993 [2024-11-26 21:07:14.611924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.993 [2024-11-26 21:07:14.611950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.993 qpair failed and we were unable to recover it.
00:26:23.993 [2024-11-26 21:07:14.612198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.994 [2024-11-26 21:07:14.612253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.994 qpair failed and we were unable to recover it.
00:26:23.994 [2024-11-26 21:07:14.612414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.994 [2024-11-26 21:07:14.612460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.994 qpair failed and we were unable to recover it.
00:26:23.994 [2024-11-26 21:07:14.612623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.994 [2024-11-26 21:07:14.612651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.994 qpair failed and we were unable to recover it.
00:26:23.994 [2024-11-26 21:07:14.612797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.994 [2024-11-26 21:07:14.612825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.994 qpair failed and we were unable to recover it.
00:26:23.994 [2024-11-26 21:07:14.612994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.994 [2024-11-26 21:07:14.613039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.994 qpair failed and we were unable to recover it.
00:26:23.994 [2024-11-26 21:07:14.613275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.994 [2024-11-26 21:07:14.613327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.994 qpair failed and we were unable to recover it.
00:26:23.994 [2024-11-26 21:07:14.613534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.994 [2024-11-26 21:07:14.613587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.994 qpair failed and we were unable to recover it.
00:26:23.994 [2024-11-26 21:07:14.613713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.994 [2024-11-26 21:07:14.613742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.994 qpair failed and we were unable to recover it.
00:26:23.994 [2024-11-26 21:07:14.613897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.994 [2024-11-26 21:07:14.613942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.994 qpair failed and we were unable to recover it.
00:26:23.994 [2024-11-26 21:07:14.614134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.994 [2024-11-26 21:07:14.614188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.994 qpair failed and we were unable to recover it.
00:26:23.994 [2024-11-26 21:07:14.614349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.994 [2024-11-26 21:07:14.614394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.994 qpair failed and we were unable to recover it.
00:26:23.994 [2024-11-26 21:07:14.614529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.994 [2024-11-26 21:07:14.614556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.994 qpair failed and we were unable to recover it.
00:26:23.994 [2024-11-26 21:07:14.614664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.994 [2024-11-26 21:07:14.614709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.994 qpair failed and we were unable to recover it.
00:26:23.994 [2024-11-26 21:07:14.614867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.994 [2024-11-26 21:07:14.614911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.994 qpair failed and we were unable to recover it.
00:26:23.994 [2024-11-26 21:07:14.615047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.994 [2024-11-26 21:07:14.615091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.994 qpair failed and we were unable to recover it.
00:26:23.994 [2024-11-26 21:07:14.615273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.994 [2024-11-26 21:07:14.615318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.994 qpair failed and we were unable to recover it.
00:26:23.994 [2024-11-26 21:07:14.615484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.994 [2024-11-26 21:07:14.615511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.994 qpair failed and we were unable to recover it.
00:26:23.994 [2024-11-26 21:07:14.615620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.994 [2024-11-26 21:07:14.615647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.994 qpair failed and we were unable to recover it.
00:26:23.994 [2024-11-26 21:07:14.616489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.994 [2024-11-26 21:07:14.616521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.994 qpair failed and we were unable to recover it.
00:26:23.994 [2024-11-26 21:07:14.616668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.994 [2024-11-26 21:07:14.616707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.994 qpair failed and we were unable to recover it.
00:26:23.994 [2024-11-26 21:07:14.616869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.994 [2024-11-26 21:07:14.616919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.994 qpair failed and we were unable to recover it.
00:26:23.994 [2024-11-26 21:07:14.617057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.994 [2024-11-26 21:07:14.617088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.994 qpair failed and we were unable to recover it.
00:26:23.994 [2024-11-26 21:07:14.617302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.994 [2024-11-26 21:07:14.617347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.994 qpair failed and we were unable to recover it.
00:26:23.994 [2024-11-26 21:07:14.617490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.994 [2024-11-26 21:07:14.617518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.994 qpair failed and we were unable to recover it.
00:26:23.994 [2024-11-26 21:07:14.617655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.994 [2024-11-26 21:07:14.617701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.994 qpair failed and we were unable to recover it.
00:26:23.994 [2024-11-26 21:07:14.617860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.994 [2024-11-26 21:07:14.617906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.994 qpair failed and we were unable to recover it.
00:26:23.994 [2024-11-26 21:07:14.618092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.994 [2024-11-26 21:07:14.618138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.994 qpair failed and we were unable to recover it.
00:26:23.994 [2024-11-26 21:07:14.618290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.994 [2024-11-26 21:07:14.618321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.994 qpair failed and we were unable to recover it.
00:26:23.994 [2024-11-26 21:07:14.618476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.994 [2024-11-26 21:07:14.618502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:23.994 qpair failed and we were unable to recover it.
00:26:23.994 [2024-11-26 21:07:14.618653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.994 [2024-11-26 21:07:14.618700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.994 qpair failed and we were unable to recover it.
00:26:23.994 [2024-11-26 21:07:14.618826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.994 [2024-11-26 21:07:14.618855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.994 qpair failed and we were unable to recover it.
00:26:23.994 [2024-11-26 21:07:14.618992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.994 [2024-11-26 21:07:14.619028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.994 qpair failed and we were unable to recover it.
00:26:23.994 [2024-11-26 21:07:14.619164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.994 [2024-11-26 21:07:14.619193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:23.994 qpair failed and we were unable to recover it.
00:26:23.994 [2024-11-26 21:07:14.619385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.994 [2024-11-26 21:07:14.619429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.994 qpair failed and we were unable to recover it.
00:26:23.994 [2024-11-26 21:07:14.619611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.994 [2024-11-26 21:07:14.619643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.994 qpair failed and we were unable to recover it.
00:26:23.994 [2024-11-26 21:07:14.619795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.994 [2024-11-26 21:07:14.619824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.994 qpair failed and we were unable to recover it.
00:26:23.994 [2024-11-26 21:07:14.619943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.994 [2024-11-26 21:07:14.619987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.994 qpair failed and we were unable to recover it.
00:26:23.994 [2024-11-26 21:07:14.620111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.994 [2024-11-26 21:07:14.620140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.994 qpair failed and we were unable to recover it.
00:26:23.994 [2024-11-26 21:07:14.620419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.995 [2024-11-26 21:07:14.620475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.995 qpair failed and we were unable to recover it.
00:26:23.995 [2024-11-26 21:07:14.620617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.995 [2024-11-26 21:07:14.620648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.995 qpair failed and we were unable to recover it.
00:26:23.995 [2024-11-26 21:07:14.620811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.995 [2024-11-26 21:07:14.620838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.995 qpair failed and we were unable to recover it.
00:26:23.995 [2024-11-26 21:07:14.620951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.995 [2024-11-26 21:07:14.620978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.995 qpair failed and we were unable to recover it.
00:26:23.995 [2024-11-26 21:07:14.621111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.995 [2024-11-26 21:07:14.621155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.995 qpair failed and we were unable to recover it.
00:26:23.995 [2024-11-26 21:07:14.621263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.995 [2024-11-26 21:07:14.621292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:23.995 qpair failed and we were unable to recover it.
00:26:23.995 [2024-11-26 21:07:14.621439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.995 [2024-11-26 21:07:14.621469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.995 qpair failed and we were unable to recover it. 00:26:23.995 [2024-11-26 21:07:14.621644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.995 [2024-11-26 21:07:14.621673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.995 qpair failed and we were unable to recover it. 00:26:23.995 [2024-11-26 21:07:14.621815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.995 [2024-11-26 21:07:14.621842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.995 qpair failed and we were unable to recover it. 00:26:23.995 [2024-11-26 21:07:14.621997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.995 [2024-11-26 21:07:14.622031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.995 qpair failed and we were unable to recover it. 00:26:23.995 [2024-11-26 21:07:14.622171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.995 [2024-11-26 21:07:14.622200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.995 qpair failed and we were unable to recover it. 
00:26:23.995 [2024-11-26 21:07:14.622318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.995 [2024-11-26 21:07:14.622348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.995 qpair failed and we were unable to recover it. 00:26:23.995 [2024-11-26 21:07:14.622499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.995 [2024-11-26 21:07:14.622528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.995 qpair failed and we were unable to recover it. 00:26:23.995 [2024-11-26 21:07:14.622640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.995 [2024-11-26 21:07:14.622669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.995 qpair failed and we were unable to recover it. 00:26:23.995 [2024-11-26 21:07:14.622805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.995 [2024-11-26 21:07:14.622833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.995 qpair failed and we were unable to recover it. 00:26:23.995 [2024-11-26 21:07:14.622976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.995 [2024-11-26 21:07:14.623035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.995 qpair failed and we were unable to recover it. 
00:26:23.995 [2024-11-26 21:07:14.623199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.995 [2024-11-26 21:07:14.623246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.995 qpair failed and we were unable to recover it. 00:26:23.995 [2024-11-26 21:07:14.623408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.995 [2024-11-26 21:07:14.623454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.995 qpair failed and we were unable to recover it. 00:26:23.995 [2024-11-26 21:07:14.623620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.995 [2024-11-26 21:07:14.623647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.995 qpair failed and we were unable to recover it. 00:26:23.995 [2024-11-26 21:07:14.623826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.995 [2024-11-26 21:07:14.623871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.995 qpair failed and we were unable to recover it. 00:26:23.995 [2024-11-26 21:07:14.624035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.995 [2024-11-26 21:07:14.624082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.995 qpair failed and we were unable to recover it. 
00:26:23.995 [2024-11-26 21:07:14.624241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.995 [2024-11-26 21:07:14.624287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.995 qpair failed and we were unable to recover it. 00:26:23.995 [2024-11-26 21:07:14.624453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.995 [2024-11-26 21:07:14.624480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.995 qpair failed and we were unable to recover it. 00:26:23.995 [2024-11-26 21:07:14.624647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.995 [2024-11-26 21:07:14.624674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.995 qpair failed and we were unable to recover it. 00:26:23.995 [2024-11-26 21:07:14.624809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.995 [2024-11-26 21:07:14.624840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.995 qpair failed and we were unable to recover it. 00:26:23.995 [2024-11-26 21:07:14.624995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.995 [2024-11-26 21:07:14.625024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.995 qpair failed and we were unable to recover it. 
00:26:23.995 [2024-11-26 21:07:14.625162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.995 [2024-11-26 21:07:14.625189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.995 qpair failed and we were unable to recover it. 00:26:23.995 [2024-11-26 21:07:14.625322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.995 [2024-11-26 21:07:14.625351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.995 qpair failed and we were unable to recover it. 00:26:23.995 [2024-11-26 21:07:14.625492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.995 [2024-11-26 21:07:14.625521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.995 qpair failed and we were unable to recover it. 00:26:23.995 [2024-11-26 21:07:14.625651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.995 [2024-11-26 21:07:14.625697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.995 qpair failed and we were unable to recover it. 00:26:23.995 [2024-11-26 21:07:14.625846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.995 [2024-11-26 21:07:14.625876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.995 qpair failed and we were unable to recover it. 
00:26:23.995 [2024-11-26 21:07:14.625992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.996 [2024-11-26 21:07:14.626021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.996 qpair failed and we were unable to recover it. 00:26:23.996 [2024-11-26 21:07:14.626145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.996 [2024-11-26 21:07:14.626177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.996 qpair failed and we were unable to recover it. 00:26:23.996 [2024-11-26 21:07:14.626302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.996 [2024-11-26 21:07:14.626333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.996 qpair failed and we were unable to recover it. 00:26:23.996 [2024-11-26 21:07:14.626461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.996 [2024-11-26 21:07:14.626487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.996 qpair failed and we were unable to recover it. 00:26:23.996 [2024-11-26 21:07:14.626650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.996 [2024-11-26 21:07:14.626695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.996 qpair failed and we were unable to recover it. 
00:26:23.996 [2024-11-26 21:07:14.626831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.996 [2024-11-26 21:07:14.626880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.996 qpair failed and we were unable to recover it. 00:26:23.996 [2024-11-26 21:07:14.626986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.996 [2024-11-26 21:07:14.627013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.996 qpair failed and we were unable to recover it. 00:26:23.996 [2024-11-26 21:07:14.627108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.996 [2024-11-26 21:07:14.627134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.996 qpair failed and we were unable to recover it. 00:26:23.996 [2024-11-26 21:07:14.627300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.996 [2024-11-26 21:07:14.627329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.996 qpair failed and we were unable to recover it. 00:26:23.996 [2024-11-26 21:07:14.627451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.996 [2024-11-26 21:07:14.627482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.996 qpair failed and we were unable to recover it. 
00:26:23.996 [2024-11-26 21:07:14.627625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.996 [2024-11-26 21:07:14.627671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:23.996 qpair failed and we were unable to recover it. 00:26:23.996 [2024-11-26 21:07:14.627836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.996 [2024-11-26 21:07:14.627884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.996 qpair failed and we were unable to recover it. 00:26:23.996 [2024-11-26 21:07:14.628041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.996 [2024-11-26 21:07:14.628085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.996 qpair failed and we were unable to recover it. 00:26:23.996 [2024-11-26 21:07:14.628243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.996 [2024-11-26 21:07:14.628287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.996 qpair failed and we were unable to recover it. 00:26:23.996 [2024-11-26 21:07:14.628392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.996 [2024-11-26 21:07:14.628419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.996 qpair failed and we were unable to recover it. 
00:26:23.996 [2024-11-26 21:07:14.628533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.996 [2024-11-26 21:07:14.628560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.996 qpair failed and we were unable to recover it. 00:26:23.996 [2024-11-26 21:07:14.628701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.996 [2024-11-26 21:07:14.628729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.996 qpair failed and we were unable to recover it. 00:26:23.996 [2024-11-26 21:07:14.628868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.996 [2024-11-26 21:07:14.628895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.996 qpair failed and we were unable to recover it. 00:26:23.996 [2024-11-26 21:07:14.629010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.996 [2024-11-26 21:07:14.629036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.996 qpair failed and we were unable to recover it. 00:26:23.996 [2024-11-26 21:07:14.629154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.996 [2024-11-26 21:07:14.629181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.996 qpair failed and we were unable to recover it. 
00:26:23.996 [2024-11-26 21:07:14.629313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.996 [2024-11-26 21:07:14.629340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.996 qpair failed and we were unable to recover it. 00:26:23.996 [2024-11-26 21:07:14.629497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.996 [2024-11-26 21:07:14.629523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.996 qpair failed and we were unable to recover it. 00:26:23.996 [2024-11-26 21:07:14.629634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.996 [2024-11-26 21:07:14.629662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.996 qpair failed and we were unable to recover it. 00:26:23.996 [2024-11-26 21:07:14.629785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.996 [2024-11-26 21:07:14.629829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.996 qpair failed and we were unable to recover it. 00:26:23.996 [2024-11-26 21:07:14.629942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.996 [2024-11-26 21:07:14.629972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.996 qpair failed and we were unable to recover it. 
00:26:23.996 [2024-11-26 21:07:14.630145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.996 [2024-11-26 21:07:14.630175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.996 qpair failed and we were unable to recover it. 00:26:23.996 [2024-11-26 21:07:14.630298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.996 [2024-11-26 21:07:14.630327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.996 qpair failed and we were unable to recover it. 00:26:23.996 [2024-11-26 21:07:14.630509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.996 [2024-11-26 21:07:14.630539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.996 qpair failed and we were unable to recover it. 00:26:23.996 [2024-11-26 21:07:14.630658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.996 [2024-11-26 21:07:14.630707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.996 qpair failed and we were unable to recover it. 00:26:23.996 [2024-11-26 21:07:14.630863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.996 [2024-11-26 21:07:14.630889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.996 qpair failed and we were unable to recover it. 
00:26:23.996 [2024-11-26 21:07:14.631025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.996 [2024-11-26 21:07:14.631061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.996 qpair failed and we were unable to recover it. 00:26:23.996 [2024-11-26 21:07:14.631217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.996 [2024-11-26 21:07:14.631247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.996 qpair failed and we were unable to recover it. 00:26:23.996 [2024-11-26 21:07:14.631372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.996 [2024-11-26 21:07:14.631405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.996 qpair failed and we were unable to recover it. 00:26:23.996 [2024-11-26 21:07:14.631532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.996 [2024-11-26 21:07:14.631567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.996 qpair failed and we were unable to recover it. 00:26:23.996 [2024-11-26 21:07:14.631698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.996 [2024-11-26 21:07:14.631743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.996 qpair failed and we were unable to recover it. 
00:26:23.996 [2024-11-26 21:07:14.631861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.996 [2024-11-26 21:07:14.631887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.996 qpair failed and we were unable to recover it. 00:26:23.996 [2024-11-26 21:07:14.632028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.996 [2024-11-26 21:07:14.632055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.996 qpair failed and we were unable to recover it. 00:26:23.996 [2024-11-26 21:07:14.632165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.996 [2024-11-26 21:07:14.632219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.996 qpair failed and we were unable to recover it. 00:26:23.996 [2024-11-26 21:07:14.632353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.997 [2024-11-26 21:07:14.632398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.997 qpair failed and we were unable to recover it. 00:26:23.997 [2024-11-26 21:07:14.632555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.997 [2024-11-26 21:07:14.632587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.997 qpair failed and we were unable to recover it. 
00:26:23.997 [2024-11-26 21:07:14.632748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.997 [2024-11-26 21:07:14.632777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.997 qpair failed and we were unable to recover it. 00:26:23.997 [2024-11-26 21:07:14.632883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.997 [2024-11-26 21:07:14.632911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.997 qpair failed and we were unable to recover it. 00:26:23.997 [2024-11-26 21:07:14.633037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.997 [2024-11-26 21:07:14.633064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.997 qpair failed and we were unable to recover it. 00:26:23.997 [2024-11-26 21:07:14.633194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.997 [2024-11-26 21:07:14.633220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:23.997 qpair failed and we were unable to recover it. 00:26:23.997 [2024-11-26 21:07:14.633353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.997 [2024-11-26 21:07:14.633384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.997 qpair failed and we were unable to recover it. 
00:26:23.997 [2024-11-26 21:07:14.633534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.997 [2024-11-26 21:07:14.633563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.997 qpair failed and we were unable to recover it. 00:26:23.997 [2024-11-26 21:07:14.633745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.997 [2024-11-26 21:07:14.633772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.997 qpair failed and we were unable to recover it. 00:26:23.997 [2024-11-26 21:07:14.633878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.997 [2024-11-26 21:07:14.633905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.997 qpair failed and we were unable to recover it. 00:26:23.997 [2024-11-26 21:07:14.634046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.997 [2024-11-26 21:07:14.634083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.997 qpair failed and we were unable to recover it. 00:26:23.997 [2024-11-26 21:07:14.634241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.997 [2024-11-26 21:07:14.634271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.997 qpair failed and we were unable to recover it. 
00:26:23.997 [2024-11-26 21:07:14.634387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.997 [2024-11-26 21:07:14.634426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.997 qpair failed and we were unable to recover it. 00:26:23.997 [2024-11-26 21:07:14.634595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.997 [2024-11-26 21:07:14.634624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.997 qpair failed and we were unable to recover it. 00:26:23.997 [2024-11-26 21:07:14.634760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.997 [2024-11-26 21:07:14.634787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.997 qpair failed and we were unable to recover it. 00:26:23.997 [2024-11-26 21:07:14.634929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.997 [2024-11-26 21:07:14.634955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.997 qpair failed and we were unable to recover it. 00:26:23.997 [2024-11-26 21:07:14.635103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.997 [2024-11-26 21:07:14.635129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.997 qpair failed and we were unable to recover it. 
00:26:23.997 [2024-11-26 21:07:14.635233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.997 [2024-11-26 21:07:14.635260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.997 qpair failed and we were unable to recover it. 00:26:23.997 [2024-11-26 21:07:14.635413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.997 [2024-11-26 21:07:14.635443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.997 qpair failed and we were unable to recover it. 00:26:23.997 [2024-11-26 21:07:14.635589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.997 [2024-11-26 21:07:14.635620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.997 qpair failed and we were unable to recover it. 00:26:23.997 [2024-11-26 21:07:14.635771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.997 [2024-11-26 21:07:14.635796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.997 qpair failed and we were unable to recover it. 00:26:23.997 [2024-11-26 21:07:14.635927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.997 [2024-11-26 21:07:14.635954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.997 qpair failed and we were unable to recover it. 
00:26:23.997 [2024-11-26 21:07:14.636124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.997 [2024-11-26 21:07:14.636154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.997 qpair failed and we were unable to recover it. 00:26:23.997 [2024-11-26 21:07:14.636292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.997 [2024-11-26 21:07:14.636322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.997 qpair failed and we were unable to recover it. 00:26:23.997 [2024-11-26 21:07:14.636452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.997 [2024-11-26 21:07:14.636495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.997 qpair failed and we were unable to recover it. 00:26:23.997 [2024-11-26 21:07:14.636644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.997 [2024-11-26 21:07:14.636673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.997 qpair failed and we were unable to recover it. 00:26:23.997 [2024-11-26 21:07:14.636820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.997 [2024-11-26 21:07:14.636851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.997 qpair failed and we were unable to recover it. 
00:26:23.997 [2024-11-26 21:07:14.636967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.997 [2024-11-26 21:07:14.637004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.997 qpair failed and we were unable to recover it. 00:26:23.997 [2024-11-26 21:07:14.637113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.997 [2024-11-26 21:07:14.637140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.997 qpair failed and we were unable to recover it. 00:26:23.997 [2024-11-26 21:07:14.637307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.997 [2024-11-26 21:07:14.637337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.997 qpair failed and we were unable to recover it. 00:26:23.997 [2024-11-26 21:07:14.637504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.997 [2024-11-26 21:07:14.637533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.997 qpair failed and we were unable to recover it. 00:26:23.997 [2024-11-26 21:07:14.637656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.997 [2024-11-26 21:07:14.637706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.997 qpair failed and we were unable to recover it. 
00:26:23.997 [2024-11-26 21:07:14.637863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.997 [2024-11-26 21:07:14.637890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.997 qpair failed and we were unable to recover it. 00:26:23.997 [2024-11-26 21:07:14.637996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.997 [2024-11-26 21:07:14.638022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.997 qpair failed and we were unable to recover it. 00:26:23.997 [2024-11-26 21:07:14.638165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.997 [2024-11-26 21:07:14.638195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.997 qpair failed and we were unable to recover it. 00:26:23.997 [2024-11-26 21:07:14.638354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.997 [2024-11-26 21:07:14.638384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.997 qpair failed and we were unable to recover it. 00:26:23.997 [2024-11-26 21:07:14.638498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.997 [2024-11-26 21:07:14.638527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.997 qpair failed and we were unable to recover it. 
00:26:23.997 [2024-11-26 21:07:14.638674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.997 [2024-11-26 21:07:14.638716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.997 qpair failed and we were unable to recover it. 00:26:23.997 [2024-11-26 21:07:14.638847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.997 [2024-11-26 21:07:14.638873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.997 qpair failed and we were unable to recover it. 00:26:23.998 [2024-11-26 21:07:14.639005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.998 [2024-11-26 21:07:14.639035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.998 qpair failed and we were unable to recover it. 00:26:23.998 [2024-11-26 21:07:14.639188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.998 [2024-11-26 21:07:14.639217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.998 qpair failed and we were unable to recover it. 00:26:23.998 [2024-11-26 21:07:14.639337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.998 [2024-11-26 21:07:14.639367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.998 qpair failed and we were unable to recover it. 
00:26:23.998 [2024-11-26 21:07:14.639529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.998 [2024-11-26 21:07:14.639588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.998 qpair failed and we were unable to recover it. 00:26:23.998 [2024-11-26 21:07:14.639728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.998 [2024-11-26 21:07:14.639757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.998 qpair failed and we were unable to recover it. 00:26:23.998 [2024-11-26 21:07:14.639896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.998 [2024-11-26 21:07:14.639941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.998 qpair failed and we were unable to recover it. 00:26:23.998 [2024-11-26 21:07:14.640101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.998 [2024-11-26 21:07:14.640153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.998 qpair failed and we were unable to recover it. 00:26:23.998 [2024-11-26 21:07:14.640310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.998 [2024-11-26 21:07:14.640355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.998 qpair failed and we were unable to recover it. 
00:26:23.998 [2024-11-26 21:07:14.640492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.998 [2024-11-26 21:07:14.640520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:23.998 qpair failed and we were unable to recover it. 00:26:23.998 [2024-11-26 21:07:14.640660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.998 [2024-11-26 21:07:14.640708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.998 qpair failed and we were unable to recover it. 00:26:23.998 [2024-11-26 21:07:14.640844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.998 [2024-11-26 21:07:14.640871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.998 qpair failed and we were unable to recover it. 00:26:23.998 [2024-11-26 21:07:14.641003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.998 [2024-11-26 21:07:14.641031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.998 qpair failed and we were unable to recover it. 00:26:23.998 [2024-11-26 21:07:14.641205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.998 [2024-11-26 21:07:14.641235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.998 qpair failed and we were unable to recover it. 
00:26:23.998 [2024-11-26 21:07:14.641409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.998 [2024-11-26 21:07:14.641455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.998 qpair failed and we were unable to recover it. 00:26:23.998 [2024-11-26 21:07:14.641568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.998 [2024-11-26 21:07:14.641597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.998 qpair failed and we were unable to recover it. 00:26:23.998 [2024-11-26 21:07:14.641748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.998 [2024-11-26 21:07:14.641775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.998 qpair failed and we were unable to recover it. 00:26:23.998 [2024-11-26 21:07:14.641913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.998 [2024-11-26 21:07:14.641940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.998 qpair failed and we were unable to recover it. 00:26:23.998 [2024-11-26 21:07:14.642084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.998 [2024-11-26 21:07:14.642114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.998 qpair failed and we were unable to recover it. 
00:26:23.998 [2024-11-26 21:07:14.642260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.998 [2024-11-26 21:07:14.642290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.998 qpair failed and we were unable to recover it. 00:26:23.998 [2024-11-26 21:07:14.642447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.998 [2024-11-26 21:07:14.642477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.998 qpair failed and we were unable to recover it. 00:26:23.998 [2024-11-26 21:07:14.642633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.998 [2024-11-26 21:07:14.642662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.998 qpair failed and we were unable to recover it. 00:26:23.998 [2024-11-26 21:07:14.642807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.998 [2024-11-26 21:07:14.642834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.998 qpair failed and we were unable to recover it. 00:26:23.998 [2024-11-26 21:07:14.642944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.998 [2024-11-26 21:07:14.642985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.998 qpair failed and we were unable to recover it. 
00:26:23.998 [2024-11-26 21:07:14.643172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.998 [2024-11-26 21:07:14.643206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.998 qpair failed and we were unable to recover it. 00:26:23.998 [2024-11-26 21:07:14.643363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.998 [2024-11-26 21:07:14.643392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.998 qpair failed and we were unable to recover it. 00:26:23.998 [2024-11-26 21:07:14.643504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.998 [2024-11-26 21:07:14.643533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.998 qpair failed and we were unable to recover it. 00:26:23.998 [2024-11-26 21:07:14.643696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.998 [2024-11-26 21:07:14.643741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.998 qpair failed and we were unable to recover it. 00:26:23.998 [2024-11-26 21:07:14.643859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.998 [2024-11-26 21:07:14.643886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.998 qpair failed and we were unable to recover it. 
00:26:23.998 [2024-11-26 21:07:14.644027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.998 [2024-11-26 21:07:14.644054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.998 qpair failed and we were unable to recover it. 00:26:23.998 [2024-11-26 21:07:14.644163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.998 [2024-11-26 21:07:14.644189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.998 qpair failed and we were unable to recover it. 00:26:23.998 [2024-11-26 21:07:14.644352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.998 [2024-11-26 21:07:14.644381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.998 qpair failed and we were unable to recover it. 00:26:23.998 [2024-11-26 21:07:14.644490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.998 [2024-11-26 21:07:14.644519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.998 qpair failed and we were unable to recover it. 00:26:23.998 [2024-11-26 21:07:14.644669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.998 [2024-11-26 21:07:14.644722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.998 qpair failed and we were unable to recover it. 
00:26:23.998 [2024-11-26 21:07:14.644829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.998 [2024-11-26 21:07:14.644856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.998 qpair failed and we were unable to recover it. 00:26:23.998 [2024-11-26 21:07:14.644985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.998 [2024-11-26 21:07:14.645027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.998 qpair failed and we were unable to recover it. 00:26:23.998 [2024-11-26 21:07:14.645209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.998 [2024-11-26 21:07:14.645238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.998 qpair failed and we were unable to recover it. 00:26:23.998 [2024-11-26 21:07:14.645387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.998 [2024-11-26 21:07:14.645416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.998 qpair failed and we were unable to recover it. 00:26:23.998 [2024-11-26 21:07:14.645535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.998 [2024-11-26 21:07:14.645564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.998 qpair failed and we were unable to recover it. 
00:26:23.998 [2024-11-26 21:07:14.645698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.999 [2024-11-26 21:07:14.645743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.999 qpair failed and we were unable to recover it. 00:26:23.999 [2024-11-26 21:07:14.645853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.999 [2024-11-26 21:07:14.645880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.999 qpair failed and we were unable to recover it. 00:26:23.999 [2024-11-26 21:07:14.646005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.999 [2024-11-26 21:07:14.646031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.999 qpair failed and we were unable to recover it. 00:26:23.999 [2024-11-26 21:07:14.646221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.999 [2024-11-26 21:07:14.646266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.999 qpair failed and we were unable to recover it. 00:26:23.999 [2024-11-26 21:07:14.646463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.999 [2024-11-26 21:07:14.646492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.999 qpair failed and we were unable to recover it. 
00:26:23.999 [2024-11-26 21:07:14.646644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.999 [2024-11-26 21:07:14.646683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.999 qpair failed and we were unable to recover it. 00:26:23.999 [2024-11-26 21:07:14.646834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.999 [2024-11-26 21:07:14.646864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.999 qpair failed and we were unable to recover it. 00:26:23.999 [2024-11-26 21:07:14.647009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.999 [2024-11-26 21:07:14.647041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.999 qpair failed and we were unable to recover it. 00:26:23.999 [2024-11-26 21:07:14.647179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.999 [2024-11-26 21:07:14.647208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.999 qpair failed and we were unable to recover it. 00:26:23.999 [2024-11-26 21:07:14.647370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.999 [2024-11-26 21:07:14.647399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.999 qpair failed and we were unable to recover it. 
00:26:23.999 [2024-11-26 21:07:14.647519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.999 [2024-11-26 21:07:14.647549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.999 qpair failed and we were unable to recover it. 00:26:23.999 [2024-11-26 21:07:14.647701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.999 [2024-11-26 21:07:14.647756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.999 qpair failed and we were unable to recover it. 00:26:23.999 [2024-11-26 21:07:14.647877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.999 [2024-11-26 21:07:14.647907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.999 qpair failed and we were unable to recover it. 00:26:23.999 [2024-11-26 21:07:14.648077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.999 [2024-11-26 21:07:14.648118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.999 qpair failed and we were unable to recover it. 00:26:23.999 [2024-11-26 21:07:14.648247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.999 [2024-11-26 21:07:14.648276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.999 qpair failed and we were unable to recover it. 
00:26:23.999 [2024-11-26 21:07:14.648401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.999 [2024-11-26 21:07:14.648430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.999 qpair failed and we were unable to recover it. 00:26:23.999 [2024-11-26 21:07:14.648581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.999 [2024-11-26 21:07:14.648612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.999 qpair failed and we were unable to recover it. 00:26:23.999 [2024-11-26 21:07:14.648747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.999 [2024-11-26 21:07:14.648774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.999 qpair failed and we were unable to recover it. 00:26:23.999 [2024-11-26 21:07:14.648875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.999 [2024-11-26 21:07:14.648902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.999 qpair failed and we were unable to recover it. 00:26:23.999 [2024-11-26 21:07:14.649067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.999 [2024-11-26 21:07:14.649098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.999 qpair failed and we were unable to recover it. 
00:26:23.999 [2024-11-26 21:07:14.649263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.999 [2024-11-26 21:07:14.649292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.999 qpair failed and we were unable to recover it. 00:26:23.999 [2024-11-26 21:07:14.649438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.999 [2024-11-26 21:07:14.649468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.999 qpair failed and we were unable to recover it. 00:26:23.999 [2024-11-26 21:07:14.649651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.999 [2024-11-26 21:07:14.649703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.999 qpair failed and we were unable to recover it. 00:26:23.999 [2024-11-26 21:07:14.649828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.999 [2024-11-26 21:07:14.649854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.999 qpair failed and we were unable to recover it. 00:26:23.999 [2024-11-26 21:07:14.649962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.999 [2024-11-26 21:07:14.649988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.999 qpair failed and we were unable to recover it. 
00:26:23.999 [2024-11-26 21:07:14.650175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.999 [2024-11-26 21:07:14.650204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.999 qpair failed and we were unable to recover it. 00:26:23.999 [2024-11-26 21:07:14.650372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.999 [2024-11-26 21:07:14.650401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.999 qpair failed and we were unable to recover it. 00:26:23.999 [2024-11-26 21:07:14.650550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.999 [2024-11-26 21:07:14.650577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.999 qpair failed and we were unable to recover it. 00:26:23.999 [2024-11-26 21:07:14.650727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.999 [2024-11-26 21:07:14.650754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.999 qpair failed and we were unable to recover it. 00:26:23.999 [2024-11-26 21:07:14.650873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.999 [2024-11-26 21:07:14.650899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.999 qpair failed and we were unable to recover it. 
00:26:23.999 [2024-11-26 21:07:14.651057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.999 [2024-11-26 21:07:14.651086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.999 qpair failed and we were unable to recover it. 00:26:23.999 [2024-11-26 21:07:14.651258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.999 [2024-11-26 21:07:14.651287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.999 qpair failed and we were unable to recover it. 00:26:23.999 [2024-11-26 21:07:14.651402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.999 [2024-11-26 21:07:14.651433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.999 qpair failed and we were unable to recover it. 00:26:23.999 [2024-11-26 21:07:14.651560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.999 [2024-11-26 21:07:14.651589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.999 qpair failed and we were unable to recover it. 00:26:23.999 [2024-11-26 21:07:14.651732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.999 [2024-11-26 21:07:14.651759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.999 qpair failed and we were unable to recover it. 
00:26:23.999 [2024-11-26 21:07:14.651871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.999 [2024-11-26 21:07:14.651897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.999 qpair failed and we were unable to recover it. 00:26:23.999 [2024-11-26 21:07:14.652050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.999 [2024-11-26 21:07:14.652083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.999 qpair failed and we were unable to recover it. 00:26:23.999 [2024-11-26 21:07:14.652311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.999 [2024-11-26 21:07:14.652341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:23.999 qpair failed and we were unable to recover it. 00:26:24.000 [2024-11-26 21:07:14.652494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.000 [2024-11-26 21:07:14.652526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.000 qpair failed and we were unable to recover it. 00:26:24.000 [2024-11-26 21:07:14.652672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.000 [2024-11-26 21:07:14.652719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.000 qpair failed and we were unable to recover it. 
00:26:24.000 [2024-11-26 21:07:14.652875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.000 [2024-11-26 21:07:14.652905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.000 qpair failed and we were unable to recover it. 00:26:24.000 [2024-11-26 21:07:14.653052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.000 [2024-11-26 21:07:14.653079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.000 qpair failed and we were unable to recover it. 00:26:24.000 [2024-11-26 21:07:14.653206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.000 [2024-11-26 21:07:14.653235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.000 qpair failed and we were unable to recover it. 00:26:24.000 [2024-11-26 21:07:14.653389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.000 [2024-11-26 21:07:14.653418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.000 qpair failed and we were unable to recover it. 00:26:24.000 [2024-11-26 21:07:14.653549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.000 [2024-11-26 21:07:14.653576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.000 qpair failed and we were unable to recover it. 
00:26:24.001 [2024-11-26 21:07:14.662011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.001 [2024-11-26 21:07:14.662037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.001 qpair failed and we were unable to recover it.
00:26:24.001 [2024-11-26 21:07:14.662223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.001 [2024-11-26 21:07:14.662281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.001 qpair failed and we were unable to recover it.
00:26:24.001 [2024-11-26 21:07:14.662423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.001 [2024-11-26 21:07:14.662452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.001 qpair failed and we were unable to recover it.
00:26:24.001 [2024-11-26 21:07:14.662612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.001 [2024-11-26 21:07:14.662641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.001 qpair failed and we were unable to recover it.
00:26:24.001 [2024-11-26 21:07:14.662814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.001 [2024-11-26 21:07:14.662854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.001 qpair failed and we were unable to recover it.
00:26:24.002 [2024-11-26 21:07:14.666775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.002 [2024-11-26 21:07:14.666804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.002 qpair failed and we were unable to recover it.
00:26:24.002 [2024-11-26 21:07:14.666950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.002 [2024-11-26 21:07:14.666977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.002 qpair failed and we were unable to recover it.
00:26:24.002 [2024-11-26 21:07:14.667750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.002 [2024-11-26 21:07:14.667781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.002 qpair failed and we were unable to recover it.
00:26:24.002 [2024-11-26 21:07:14.667901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.002 [2024-11-26 21:07:14.667927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.002 qpair failed and we were unable to recover it.
00:26:24.002 [2024-11-26 21:07:14.668119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.002 [2024-11-26 21:07:14.668153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.002 qpair failed and we were unable to recover it.
00:26:24.002 [2024-11-26 21:07:14.668307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.002 [2024-11-26 21:07:14.668350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.002 qpair failed and we were unable to recover it. 00:26:24.002 [2024-11-26 21:07:14.668495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.002 [2024-11-26 21:07:14.668524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.002 qpair failed and we were unable to recover it. 00:26:24.002 [2024-11-26 21:07:14.668672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.002 [2024-11-26 21:07:14.668725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.002 qpair failed and we were unable to recover it. 00:26:24.002 [2024-11-26 21:07:14.668823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.002 [2024-11-26 21:07:14.668850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.002 qpair failed and we were unable to recover it. 00:26:24.002 [2024-11-26 21:07:14.668963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.002 [2024-11-26 21:07:14.669006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.002 qpair failed and we were unable to recover it. 
00:26:24.002 [2024-11-26 21:07:14.669184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.002 [2024-11-26 21:07:14.669218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.002 qpair failed and we were unable to recover it. 00:26:24.002 [2024-11-26 21:07:14.669399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.002 [2024-11-26 21:07:14.669438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.002 qpair failed and we were unable to recover it. 00:26:24.002 [2024-11-26 21:07:14.669610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.002 [2024-11-26 21:07:14.669645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.002 qpair failed and we were unable to recover it. 00:26:24.002 [2024-11-26 21:07:14.669807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.002 [2024-11-26 21:07:14.669835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.002 qpair failed and we were unable to recover it. 00:26:24.002 [2024-11-26 21:07:14.669956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.002 [2024-11-26 21:07:14.669993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.002 qpair failed and we were unable to recover it. 
00:26:24.002 [2024-11-26 21:07:14.670127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.002 [2024-11-26 21:07:14.670153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.002 qpair failed and we were unable to recover it. 00:26:24.002 [2024-11-26 21:07:14.670299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.002 [2024-11-26 21:07:14.670326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.002 qpair failed and we were unable to recover it. 00:26:24.002 [2024-11-26 21:07:14.670424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.002 [2024-11-26 21:07:14.670451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.002 qpair failed and we were unable to recover it. 00:26:24.002 [2024-11-26 21:07:14.670639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.002 [2024-11-26 21:07:14.670668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.002 qpair failed and we were unable to recover it. 00:26:24.002 [2024-11-26 21:07:14.670847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.002 [2024-11-26 21:07:14.670873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.002 qpair failed and we were unable to recover it. 
00:26:24.002 [2024-11-26 21:07:14.670986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.002 [2024-11-26 21:07:14.671012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.002 qpair failed and we were unable to recover it. 00:26:24.002 [2024-11-26 21:07:14.671198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.002 [2024-11-26 21:07:14.671225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.002 qpair failed and we were unable to recover it. 00:26:24.002 [2024-11-26 21:07:14.671364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.002 [2024-11-26 21:07:14.671391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.002 qpair failed and we were unable to recover it. 00:26:24.002 [2024-11-26 21:07:14.671556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.002 [2024-11-26 21:07:14.671588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.002 qpair failed and we were unable to recover it. 00:26:24.002 [2024-11-26 21:07:14.671716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.002 [2024-11-26 21:07:14.671763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.002 qpair failed and we were unable to recover it. 
00:26:24.004 [2024-11-26 21:07:14.684441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.004 [2024-11-26 21:07:14.684468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.004 qpair failed and we were unable to recover it.
00:26:24.004 [2024-11-26 21:07:14.684585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.004 [2024-11-26 21:07:14.684625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420
00:26:24.004 qpair failed and we were unable to recover it.
00:26:24.004 [2024-11-26 21:07:14.684798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.004 [2024-11-26 21:07:14.684838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.004 qpair failed and we were unable to recover it.
00:26:24.004 [2024-11-26 21:07:14.684951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.004 [2024-11-26 21:07:14.684980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.004 qpair failed and we were unable to recover it.
00:26:24.004 [2024-11-26 21:07:14.685095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.004 [2024-11-26 21:07:14.685122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.004 qpair failed and we were unable to recover it.
00:26:24.005 [2024-11-26 21:07:14.688737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x645f30 (9): Bad file descriptor
00:26:24.005 [2024-11-26 21:07:14.688877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.005 [2024-11-26 21:07:14.688918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420
00:26:24.005 qpair failed and we were unable to recover it.
00:26:24.005 [2024-11-26 21:07:14.689035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.005 [2024-11-26 21:07:14.689072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420
00:26:24.005 qpair failed and we were unable to recover it.
00:26:24.005 [2024-11-26 21:07:14.689207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.005 [2024-11-26 21:07:14.689235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420
00:26:24.005 qpair failed and we were unable to recover it.
00:26:24.005 [2024-11-26 21:07:14.689381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.005 [2024-11-26 21:07:14.689410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420
00:26:24.005 qpair failed and we were unable to recover it.
00:26:24.005 [2024-11-26 21:07:14.689551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.005 [2024-11-26 21:07:14.689579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420
00:26:24.005 qpair failed and we were unable to recover it.
00:26:24.005 [2024-11-26 21:07:14.691012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.005 [2024-11-26 21:07:14.691082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.005 qpair failed and we were unable to recover it.
00:26:24.007 [2024-11-26 21:07:14.698999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.007 [2024-11-26 21:07:14.699054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.007 qpair failed and we were unable to recover it.
00:26:24.007 [2024-11-26 21:07:14.699435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.007 [2024-11-26 21:07:14.699480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.007 qpair failed and we were unable to recover it. 00:26:24.007 [2024-11-26 21:07:14.699608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.007 [2024-11-26 21:07:14.699635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.007 qpair failed and we were unable to recover it. 00:26:24.007 [2024-11-26 21:07:14.699765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.007 [2024-11-26 21:07:14.699794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.007 qpair failed and we were unable to recover it. 00:26:24.007 [2024-11-26 21:07:14.699944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.007 [2024-11-26 21:07:14.699989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.007 qpair failed and we were unable to recover it. 00:26:24.007 [2024-11-26 21:07:14.700122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.007 [2024-11-26 21:07:14.700154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.007 qpair failed and we were unable to recover it. 
00:26:24.007 [2024-11-26 21:07:14.700322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.007 [2024-11-26 21:07:14.700367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.007 qpair failed and we were unable to recover it. 00:26:24.007 [2024-11-26 21:07:14.700508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.007 [2024-11-26 21:07:14.700535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.007 qpair failed and we were unable to recover it. 00:26:24.007 [2024-11-26 21:07:14.700642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.007 [2024-11-26 21:07:14.700669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.007 qpair failed and we were unable to recover it. 00:26:24.007 [2024-11-26 21:07:14.700804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.007 [2024-11-26 21:07:14.700849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.007 qpair failed and we were unable to recover it. 00:26:24.007 [2024-11-26 21:07:14.700981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.007 [2024-11-26 21:07:14.701029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.007 qpair failed and we were unable to recover it. 
00:26:24.007 [2024-11-26 21:07:14.701229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.007 [2024-11-26 21:07:14.701274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.007 qpair failed and we were unable to recover it. 00:26:24.007 [2024-11-26 21:07:14.701426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.007 [2024-11-26 21:07:14.701453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.007 qpair failed and we were unable to recover it. 00:26:24.007 [2024-11-26 21:07:14.701565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.007 [2024-11-26 21:07:14.701592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.007 qpair failed and we were unable to recover it. 00:26:24.007 [2024-11-26 21:07:14.701742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.007 [2024-11-26 21:07:14.701771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.007 qpair failed and we were unable to recover it. 00:26:24.007 [2024-11-26 21:07:14.701922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.007 [2024-11-26 21:07:14.701957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.007 qpair failed and we were unable to recover it. 
00:26:24.007 [2024-11-26 21:07:14.702075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.007 [2024-11-26 21:07:14.702104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.007 qpair failed and we were unable to recover it. 00:26:24.007 [2024-11-26 21:07:14.702243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.007 [2024-11-26 21:07:14.702270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.007 qpair failed and we were unable to recover it. 00:26:24.007 [2024-11-26 21:07:14.702458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.007 [2024-11-26 21:07:14.702521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.007 qpair failed and we were unable to recover it. 00:26:24.007 [2024-11-26 21:07:14.702675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.007 [2024-11-26 21:07:14.702736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.007 qpair failed and we were unable to recover it. 00:26:24.007 [2024-11-26 21:07:14.702892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.007 [2024-11-26 21:07:14.702922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.007 qpair failed and we were unable to recover it. 
00:26:24.007 [2024-11-26 21:07:14.703099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.007 [2024-11-26 21:07:14.703129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.007 qpair failed and we were unable to recover it. 00:26:24.007 [2024-11-26 21:07:14.703293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.007 [2024-11-26 21:07:14.703324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.007 qpair failed and we were unable to recover it. 00:26:24.007 [2024-11-26 21:07:14.703453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.007 [2024-11-26 21:07:14.703484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.007 qpair failed and we were unable to recover it. 00:26:24.007 [2024-11-26 21:07:14.703617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.007 [2024-11-26 21:07:14.703644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.007 qpair failed and we were unable to recover it. 00:26:24.007 [2024-11-26 21:07:14.703779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.007 [2024-11-26 21:07:14.703815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.007 qpair failed and we were unable to recover it. 
00:26:24.007 [2024-11-26 21:07:14.703957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.007 [2024-11-26 21:07:14.703984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.007 qpair failed and we were unable to recover it. 00:26:24.007 [2024-11-26 21:07:14.704139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.007 [2024-11-26 21:07:14.704169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.007 qpair failed and we were unable to recover it. 00:26:24.007 [2024-11-26 21:07:14.704314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.007 [2024-11-26 21:07:14.704350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.007 qpair failed and we were unable to recover it. 00:26:24.007 [2024-11-26 21:07:14.704500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.007 [2024-11-26 21:07:14.704532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.007 qpair failed and we were unable to recover it. 00:26:24.007 [2024-11-26 21:07:14.704715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.007 [2024-11-26 21:07:14.704743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.007 qpair failed and we were unable to recover it. 
00:26:24.007 [2024-11-26 21:07:14.704883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.007 [2024-11-26 21:07:14.704910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.007 qpair failed and we were unable to recover it. 00:26:24.007 [2024-11-26 21:07:14.705058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.007 [2024-11-26 21:07:14.705092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.007 qpair failed and we were unable to recover it. 00:26:24.007 [2024-11-26 21:07:14.705276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.007 [2024-11-26 21:07:14.705319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.007 qpair failed and we were unable to recover it. 00:26:24.007 [2024-11-26 21:07:14.705486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.007 [2024-11-26 21:07:14.705513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.007 qpair failed and we were unable to recover it. 00:26:24.007 [2024-11-26 21:07:14.705716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.007 [2024-11-26 21:07:14.705762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.007 qpair failed and we were unable to recover it. 
00:26:24.008 [2024-11-26 21:07:14.705881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.008 [2024-11-26 21:07:14.705908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.008 qpair failed and we were unable to recover it. 00:26:24.008 [2024-11-26 21:07:14.706057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.008 [2024-11-26 21:07:14.706084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.008 qpair failed and we were unable to recover it. 00:26:24.008 [2024-11-26 21:07:14.706225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.008 [2024-11-26 21:07:14.706258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.008 qpair failed and we were unable to recover it. 00:26:24.008 [2024-11-26 21:07:14.706429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.008 [2024-11-26 21:07:14.706476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.008 qpair failed and we were unable to recover it. 00:26:24.008 [2024-11-26 21:07:14.706612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.008 [2024-11-26 21:07:14.706639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.008 qpair failed and we were unable to recover it. 
00:26:24.008 [2024-11-26 21:07:14.706756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.008 [2024-11-26 21:07:14.706784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.008 qpair failed and we were unable to recover it.
00:26:24.008 [2024-11-26 21:07:14.706930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.008 [2024-11-26 21:07:14.706959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.008 qpair failed and we were unable to recover it.
00:26:24.008 [2024-11-26 21:07:14.707136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.008 [2024-11-26 21:07:14.707180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.008 qpair failed and we were unable to recover it.
00:26:24.008 [2024-11-26 21:07:14.707314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.008 [2024-11-26 21:07:14.707358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.008 qpair failed and we were unable to recover it.
00:26:24.008 [2024-11-26 21:07:14.707513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.008 [2024-11-26 21:07:14.707559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.008 qpair failed and we were unable to recover it.
00:26:24.008 [2024-11-26 21:07:14.707691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.008 [2024-11-26 21:07:14.707751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.008 qpair failed and we were unable to recover it.
00:26:24.008 [2024-11-26 21:07:14.707903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.008 [2024-11-26 21:07:14.707935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.008 qpair failed and we were unable to recover it.
00:26:24.008 [2024-11-26 21:07:14.708071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.008 [2024-11-26 21:07:14.708100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.008 qpair failed and we were unable to recover it.
00:26:24.008 [2024-11-26 21:07:14.708303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.008 [2024-11-26 21:07:14.708333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.008 qpair failed and we were unable to recover it.
00:26:24.008 [2024-11-26 21:07:14.708487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.008 [2024-11-26 21:07:14.708543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.008 qpair failed and we were unable to recover it.
00:26:24.008 [2024-11-26 21:07:14.708664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.008 [2024-11-26 21:07:14.708706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.008 qpair failed and we were unable to recover it.
00:26:24.008 [2024-11-26 21:07:14.708854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.008 [2024-11-26 21:07:14.708882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.008 qpair failed and we were unable to recover it.
00:26:24.008 [2024-11-26 21:07:14.709017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.008 [2024-11-26 21:07:14.709061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.008 qpair failed and we were unable to recover it.
00:26:24.008 [2024-11-26 21:07:14.709258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.008 [2024-11-26 21:07:14.709286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.008 qpair failed and we were unable to recover it.
00:26:24.008 [2024-11-26 21:07:14.709457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.008 [2024-11-26 21:07:14.709488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.008 qpair failed and we were unable to recover it.
00:26:24.008 [2024-11-26 21:07:14.709640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.008 [2024-11-26 21:07:14.709669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.008 qpair failed and we were unable to recover it.
00:26:24.008 [2024-11-26 21:07:14.709818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.008 [2024-11-26 21:07:14.709845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.008 qpair failed and we were unable to recover it.
00:26:24.008 [2024-11-26 21:07:14.709979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.008 [2024-11-26 21:07:14.710017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.008 qpair failed and we were unable to recover it.
00:26:24.008 [2024-11-26 21:07:14.710154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.008 [2024-11-26 21:07:14.710185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.008 qpair failed and we were unable to recover it.
00:26:24.008 [2024-11-26 21:07:14.710347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.008 [2024-11-26 21:07:14.710377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.008 qpair failed and we were unable to recover it.
00:26:24.008 [2024-11-26 21:07:14.710497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.008 [2024-11-26 21:07:14.710536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.008 qpair failed and we were unable to recover it.
00:26:24.008 [2024-11-26 21:07:14.710747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.008 [2024-11-26 21:07:14.710776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.008 qpair failed and we were unable to recover it.
00:26:24.008 [2024-11-26 21:07:14.710889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.008 [2024-11-26 21:07:14.710916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.008 qpair failed and we were unable to recover it.
00:26:24.008 [2024-11-26 21:07:14.711080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.008 [2024-11-26 21:07:14.711115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.008 qpair failed and we were unable to recover it.
00:26:24.008 [2024-11-26 21:07:14.711263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.008 [2024-11-26 21:07:14.711293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.008 qpair failed and we were unable to recover it.
00:26:24.008 [2024-11-26 21:07:14.711451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.008 [2024-11-26 21:07:14.711481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.008 qpair failed and we were unable to recover it.
00:26:24.008 [2024-11-26 21:07:14.711618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.008 [2024-11-26 21:07:14.711645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.008 qpair failed and we were unable to recover it.
00:26:24.008 [2024-11-26 21:07:14.711788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.008 [2024-11-26 21:07:14.711833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.008 qpair failed and we were unable to recover it.
00:26:24.008 [2024-11-26 21:07:14.711990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.008 [2024-11-26 21:07:14.712033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.008 qpair failed and we were unable to recover it.
00:26:24.008 [2024-11-26 21:07:14.712226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.008 [2024-11-26 21:07:14.712273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.008 qpair failed and we were unable to recover it.
00:26:24.008 [2024-11-26 21:07:14.712463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.008 [2024-11-26 21:07:14.712510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.008 qpair failed and we were unable to recover it.
00:26:24.008 [2024-11-26 21:07:14.712701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.008 [2024-11-26 21:07:14.712746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.008 qpair failed and we were unable to recover it.
00:26:24.009 [2024-11-26 21:07:14.712860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.009 [2024-11-26 21:07:14.712886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.009 qpair failed and we were unable to recover it.
00:26:24.009 [2024-11-26 21:07:14.713030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.009 [2024-11-26 21:07:14.713074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.009 qpair failed and we were unable to recover it.
00:26:24.009 [2024-11-26 21:07:14.713258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.009 [2024-11-26 21:07:14.713298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.009 qpair failed and we were unable to recover it.
00:26:24.009 [2024-11-26 21:07:14.713430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.009 [2024-11-26 21:07:14.713473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.009 qpair failed and we were unable to recover it.
00:26:24.009 [2024-11-26 21:07:14.713612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.009 [2024-11-26 21:07:14.713639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.009 qpair failed and we were unable to recover it.
00:26:24.009 [2024-11-26 21:07:14.713770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.009 [2024-11-26 21:07:14.713797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.009 qpair failed and we were unable to recover it.
00:26:24.009 [2024-11-26 21:07:14.713934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.009 [2024-11-26 21:07:14.713961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.009 qpair failed and we were unable to recover it.
00:26:24.009 [2024-11-26 21:07:14.714118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.009 [2024-11-26 21:07:14.714148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.009 qpair failed and we were unable to recover it.
00:26:24.009 [2024-11-26 21:07:14.714287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.009 [2024-11-26 21:07:14.714316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.009 qpair failed and we were unable to recover it.
00:26:24.009 [2024-11-26 21:07:14.714530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.009 [2024-11-26 21:07:14.714559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.009 qpair failed and we were unable to recover it.
00:26:24.009 [2024-11-26 21:07:14.714738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.009 [2024-11-26 21:07:14.714766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.009 qpair failed and we were unable to recover it.
00:26:24.009 [2024-11-26 21:07:14.714881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.009 [2024-11-26 21:07:14.714907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.009 qpair failed and we were unable to recover it.
00:26:24.009 [2024-11-26 21:07:14.715044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.009 [2024-11-26 21:07:14.715072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.009 qpair failed and we were unable to recover it.
00:26:24.009 [2024-11-26 21:07:14.715213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.009 [2024-11-26 21:07:14.715257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.009 qpair failed and we were unable to recover it.
00:26:24.009 [2024-11-26 21:07:14.715410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.009 [2024-11-26 21:07:14.715439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.009 qpair failed and we were unable to recover it.
00:26:24.009 [2024-11-26 21:07:14.715544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.009 [2024-11-26 21:07:14.715573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.009 qpair failed and we were unable to recover it.
00:26:24.009 [2024-11-26 21:07:14.715710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.009 [2024-11-26 21:07:14.715754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.009 qpair failed and we were unable to recover it.
00:26:24.009 [2024-11-26 21:07:14.715888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.009 [2024-11-26 21:07:14.715916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.009 qpair failed and we were unable to recover it.
00:26:24.009 [2024-11-26 21:07:14.716032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.009 [2024-11-26 21:07:14.716059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.009 qpair failed and we were unable to recover it.
00:26:24.009 [2024-11-26 21:07:14.716239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.009 [2024-11-26 21:07:14.716288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.009 qpair failed and we were unable to recover it.
00:26:24.009 [2024-11-26 21:07:14.716436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.009 [2024-11-26 21:07:14.716465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.009 qpair failed and we were unable to recover it.
00:26:24.009 [2024-11-26 21:07:14.716623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.009 [2024-11-26 21:07:14.716650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.009 qpair failed and we were unable to recover it.
00:26:24.009 [2024-11-26 21:07:14.716772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.009 [2024-11-26 21:07:14.716799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.009 qpair failed and we were unable to recover it.
00:26:24.009 [2024-11-26 21:07:14.716940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.009 [2024-11-26 21:07:14.716967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.009 qpair failed and we were unable to recover it.
00:26:24.009 [2024-11-26 21:07:14.717130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.009 [2024-11-26 21:07:14.717157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.009 qpair failed and we were unable to recover it.
00:26:24.009 [2024-11-26 21:07:14.717322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.009 [2024-11-26 21:07:14.717351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.009 qpair failed and we were unable to recover it.
00:26:24.009 [2024-11-26 21:07:14.717496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.009 [2024-11-26 21:07:14.717526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.009 qpair failed and we were unable to recover it.
00:26:24.009 [2024-11-26 21:07:14.717702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.009 [2024-11-26 21:07:14.717729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.009 qpair failed and we were unable to recover it.
00:26:24.009 [2024-11-26 21:07:14.717842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.009 [2024-11-26 21:07:14.717869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.009 qpair failed and we were unable to recover it.
00:26:24.009 [2024-11-26 21:07:14.718006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.009 [2024-11-26 21:07:14.718033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.009 qpair failed and we were unable to recover it.
00:26:24.009 [2024-11-26 21:07:14.718149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.009 [2024-11-26 21:07:14.718176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.009 qpair failed and we were unable to recover it.
00:26:24.009 [2024-11-26 21:07:14.718354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.009 [2024-11-26 21:07:14.718384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.009 qpair failed and we were unable to recover it.
00:26:24.009 [2024-11-26 21:07:14.718503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.009 [2024-11-26 21:07:14.718532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.010 qpair failed and we were unable to recover it.
00:26:24.010 [2024-11-26 21:07:14.718695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.010 [2024-11-26 21:07:14.718740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.010 qpair failed and we were unable to recover it.
00:26:24.010 [2024-11-26 21:07:14.718874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.010 [2024-11-26 21:07:14.718901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.010 qpair failed and we were unable to recover it.
00:26:24.010 [2024-11-26 21:07:14.719009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.010 [2024-11-26 21:07:14.719036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.010 qpair failed and we were unable to recover it.
00:26:24.010 [2024-11-26 21:07:14.719175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.010 [2024-11-26 21:07:14.719206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.010 qpair failed and we were unable to recover it.
00:26:24.010 [2024-11-26 21:07:14.719339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.010 [2024-11-26 21:07:14.719368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.010 qpair failed and we were unable to recover it.
00:26:24.010 [2024-11-26 21:07:14.719511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.010 [2024-11-26 21:07:14.719540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.010 qpair failed and we were unable to recover it.
00:26:24.010 [2024-11-26 21:07:14.719705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.010 [2024-11-26 21:07:14.719753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.010 qpair failed and we were unable to recover it.
00:26:24.010 [2024-11-26 21:07:14.719884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.010 [2024-11-26 21:07:14.719910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.010 qpair failed and we were unable to recover it.
00:26:24.010 [2024-11-26 21:07:14.720096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.010 [2024-11-26 21:07:14.720126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.010 qpair failed and we were unable to recover it.
00:26:24.010 [2024-11-26 21:07:14.720305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.010 [2024-11-26 21:07:14.720331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.010 qpair failed and we were unable to recover it.
00:26:24.010 [2024-11-26 21:07:14.720500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.010 [2024-11-26 21:07:14.720530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.010 qpair failed and we were unable to recover it.
00:26:24.010 [2024-11-26 21:07:14.720678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.010 [2024-11-26 21:07:14.720726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.010 qpair failed and we were unable to recover it.
00:26:24.010 [2024-11-26 21:07:14.720877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.010 [2024-11-26 21:07:14.720904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.010 qpair failed and we were unable to recover it.
00:26:24.010 [2024-11-26 21:07:14.721002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.010 [2024-11-26 21:07:14.721029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.010 qpair failed and we were unable to recover it.
00:26:24.010 [2024-11-26 21:07:14.721165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.010 [2024-11-26 21:07:14.721191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.010 qpair failed and we were unable to recover it.
00:26:24.010 [2024-11-26 21:07:14.721351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.010 [2024-11-26 21:07:14.721381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.010 qpair failed and we were unable to recover it.
00:26:24.010 [2024-11-26 21:07:14.721523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.010 [2024-11-26 21:07:14.721553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.010 qpair failed and we were unable to recover it.
00:26:24.010 [2024-11-26 21:07:14.721696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.010 [2024-11-26 21:07:14.721723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.010 qpair failed and we were unable to recover it.
00:26:24.010 [2024-11-26 21:07:14.721837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.010 [2024-11-26 21:07:14.721864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.010 qpair failed and we were unable to recover it.
00:26:24.010 [2024-11-26 21:07:14.721989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.010 [2024-11-26 21:07:14.722025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.010 qpair failed and we were unable to recover it.
00:26:24.010 [2024-11-26 21:07:14.722174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.010 [2024-11-26 21:07:14.722203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.010 qpair failed and we were unable to recover it.
00:26:24.010 [2024-11-26 21:07:14.722345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.010 [2024-11-26 21:07:14.722375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.010 qpair failed and we were unable to recover it.
00:26:24.010 [2024-11-26 21:07:14.722540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.010 [2024-11-26 21:07:14.722568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.010 qpair failed and we were unable to recover it.
00:26:24.010 [2024-11-26 21:07:14.722703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.010 [2024-11-26 21:07:14.722730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.010 qpair failed and we were unable to recover it.
00:26:24.010 [2024-11-26 21:07:14.722873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.010 [2024-11-26 21:07:14.722900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.010 qpair failed and we were unable to recover it.
00:26:24.010 [2024-11-26 21:07:14.723049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.010 [2024-11-26 21:07:14.723078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.010 qpair failed and we were unable to recover it.
00:26:24.010 [2024-11-26 21:07:14.723246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.010 [2024-11-26 21:07:14.723275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.010 qpair failed and we were unable to recover it.
00:26:24.010 [2024-11-26 21:07:14.723424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.010 [2024-11-26 21:07:14.723465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.010 qpair failed and we were unable to recover it.
00:26:24.010 [2024-11-26 21:07:14.723649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.010 [2024-11-26 21:07:14.723676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.010 qpair failed and we were unable to recover it.
00:26:24.010 [2024-11-26 21:07:14.723798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.010 [2024-11-26 21:07:14.723825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.010 qpair failed and we were unable to recover it.
00:26:24.010 [2024-11-26 21:07:14.723938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.010 [2024-11-26 21:07:14.723965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.010 qpair failed and we were unable to recover it. 00:26:24.010 [2024-11-26 21:07:14.724112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.010 [2024-11-26 21:07:14.724145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.010 qpair failed and we were unable to recover it. 00:26:24.010 [2024-11-26 21:07:14.724353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.010 [2024-11-26 21:07:14.724387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.010 qpair failed and we were unable to recover it. 00:26:24.010 [2024-11-26 21:07:14.724539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.010 [2024-11-26 21:07:14.724568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.010 qpair failed and we were unable to recover it. 00:26:24.010 [2024-11-26 21:07:14.724738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.010 [2024-11-26 21:07:14.724779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.010 qpair failed and we were unable to recover it. 
00:26:24.010 [2024-11-26 21:07:14.724928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.010 [2024-11-26 21:07:14.724957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.010 qpair failed and we were unable to recover it. 00:26:24.010 [2024-11-26 21:07:14.725102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.010 [2024-11-26 21:07:14.725153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.010 qpair failed and we were unable to recover it. 00:26:24.010 [2024-11-26 21:07:14.725322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.010 [2024-11-26 21:07:14.725367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.011 qpair failed and we were unable to recover it. 00:26:24.011 [2024-11-26 21:07:14.725518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.011 [2024-11-26 21:07:14.725545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.011 qpair failed and we were unable to recover it. 00:26:24.011 [2024-11-26 21:07:14.725694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.011 [2024-11-26 21:07:14.725723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.011 qpair failed and we were unable to recover it. 
00:26:24.011 [2024-11-26 21:07:14.725830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.011 [2024-11-26 21:07:14.725857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.011 qpair failed and we were unable to recover it. 00:26:24.011 [2024-11-26 21:07:14.726003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.011 [2024-11-26 21:07:14.726030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.011 qpair failed and we were unable to recover it. 00:26:24.011 [2024-11-26 21:07:14.726187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.011 [2024-11-26 21:07:14.726232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.011 qpair failed and we were unable to recover it. 00:26:24.011 [2024-11-26 21:07:14.726345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.011 [2024-11-26 21:07:14.726373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.011 qpair failed and we were unable to recover it. 00:26:24.011 [2024-11-26 21:07:14.726510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.011 [2024-11-26 21:07:14.726536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.011 qpair failed and we were unable to recover it. 
00:26:24.011 [2024-11-26 21:07:14.726672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.011 [2024-11-26 21:07:14.726727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.011 qpair failed and we were unable to recover it. 00:26:24.011 [2024-11-26 21:07:14.726848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.011 [2024-11-26 21:07:14.726877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.011 qpair failed and we were unable to recover it. 00:26:24.011 [2024-11-26 21:07:14.727042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.011 [2024-11-26 21:07:14.727080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.011 qpair failed and we were unable to recover it. 00:26:24.011 [2024-11-26 21:07:14.727213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.011 [2024-11-26 21:07:14.727242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.011 qpair failed and we were unable to recover it. 00:26:24.011 [2024-11-26 21:07:14.727392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.011 [2024-11-26 21:07:14.727440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.011 qpair failed and we were unable to recover it. 
00:26:24.011 [2024-11-26 21:07:14.727620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.011 [2024-11-26 21:07:14.727651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.011 qpair failed and we were unable to recover it. 00:26:24.011 [2024-11-26 21:07:14.727823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.011 [2024-11-26 21:07:14.727869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.011 qpair failed and we were unable to recover it. 00:26:24.011 [2024-11-26 21:07:14.728051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.011 [2024-11-26 21:07:14.728082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.011 qpair failed and we were unable to recover it. 00:26:24.011 [2024-11-26 21:07:14.728231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.011 [2024-11-26 21:07:14.728261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.011 qpair failed and we were unable to recover it. 00:26:24.011 [2024-11-26 21:07:14.728408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.011 [2024-11-26 21:07:14.728437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.011 qpair failed and we were unable to recover it. 
00:26:24.011 [2024-11-26 21:07:14.728593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.011 [2024-11-26 21:07:14.728620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.011 qpair failed and we were unable to recover it. 00:26:24.011 [2024-11-26 21:07:14.728731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.011 [2024-11-26 21:07:14.728758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.011 qpair failed and we were unable to recover it. 00:26:24.011 [2024-11-26 21:07:14.728890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.011 [2024-11-26 21:07:14.728922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.011 qpair failed and we were unable to recover it. 00:26:24.011 [2024-11-26 21:07:14.729116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.011 [2024-11-26 21:07:14.729163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.011 qpair failed and we were unable to recover it. 00:26:24.011 [2024-11-26 21:07:14.729334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.011 [2024-11-26 21:07:14.729380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.011 qpair failed and we were unable to recover it. 
00:26:24.011 [2024-11-26 21:07:14.729528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.011 [2024-11-26 21:07:14.729558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.011 qpair failed and we were unable to recover it. 00:26:24.011 [2024-11-26 21:07:14.729698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.011 [2024-11-26 21:07:14.729725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.011 qpair failed and we were unable to recover it. 00:26:24.011 [2024-11-26 21:07:14.729833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.011 [2024-11-26 21:07:14.729859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.011 qpair failed and we were unable to recover it. 00:26:24.011 [2024-11-26 21:07:14.729994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.011 [2024-11-26 21:07:14.730023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.011 qpair failed and we were unable to recover it. 00:26:24.011 [2024-11-26 21:07:14.730197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.011 [2024-11-26 21:07:14.730225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.011 qpair failed and we were unable to recover it. 
00:26:24.011 [2024-11-26 21:07:14.730369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.011 [2024-11-26 21:07:14.730398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.011 qpair failed and we were unable to recover it. 00:26:24.011 [2024-11-26 21:07:14.730553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.011 [2024-11-26 21:07:14.730583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.011 qpair failed and we were unable to recover it. 00:26:24.011 [2024-11-26 21:07:14.730747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.011 [2024-11-26 21:07:14.730774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.011 qpair failed and we were unable to recover it. 00:26:24.011 [2024-11-26 21:07:14.730889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.011 [2024-11-26 21:07:14.730916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.011 qpair failed and we were unable to recover it. 00:26:24.011 [2024-11-26 21:07:14.731082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.011 [2024-11-26 21:07:14.731112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.011 qpair failed and we were unable to recover it. 
00:26:24.011 [2024-11-26 21:07:14.731261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.011 [2024-11-26 21:07:14.731296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.011 qpair failed and we were unable to recover it. 00:26:24.011 [2024-11-26 21:07:14.731469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.011 [2024-11-26 21:07:14.731499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.011 qpair failed and we were unable to recover it. 00:26:24.011 [2024-11-26 21:07:14.731626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.011 [2024-11-26 21:07:14.731654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.011 qpair failed and we were unable to recover it. 00:26:24.011 [2024-11-26 21:07:14.731775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.011 [2024-11-26 21:07:14.731802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.011 qpair failed and we were unable to recover it. 00:26:24.011 [2024-11-26 21:07:14.731939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.011 [2024-11-26 21:07:14.731966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.011 qpair failed and we were unable to recover it. 
00:26:24.011 [2024-11-26 21:07:14.732102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.012 [2024-11-26 21:07:14.732131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.012 qpair failed and we were unable to recover it. 00:26:24.012 [2024-11-26 21:07:14.732272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.012 [2024-11-26 21:07:14.732301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.012 qpair failed and we were unable to recover it. 00:26:24.012 [2024-11-26 21:07:14.732434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.012 [2024-11-26 21:07:14.732481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.012 qpair failed and we were unable to recover it. 00:26:24.012 [2024-11-26 21:07:14.732631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.012 [2024-11-26 21:07:14.732657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.012 qpair failed and we were unable to recover it. 00:26:24.012 [2024-11-26 21:07:14.732776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.012 [2024-11-26 21:07:14.732803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.012 qpair failed and we were unable to recover it. 
00:26:24.012 [2024-11-26 21:07:14.732944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.012 [2024-11-26 21:07:14.732988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.012 qpair failed and we were unable to recover it. 00:26:24.012 [2024-11-26 21:07:14.733137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.012 [2024-11-26 21:07:14.733166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.012 qpair failed and we were unable to recover it. 00:26:24.012 [2024-11-26 21:07:14.733288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.012 [2024-11-26 21:07:14.733332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.012 qpair failed and we were unable to recover it. 00:26:24.012 [2024-11-26 21:07:14.733481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.012 [2024-11-26 21:07:14.733510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.012 qpair failed and we were unable to recover it. 00:26:24.012 [2024-11-26 21:07:14.733656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.012 [2024-11-26 21:07:14.733697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.012 qpair failed and we were unable to recover it. 
00:26:24.012 [2024-11-26 21:07:14.733859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.012 [2024-11-26 21:07:14.733886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.012 qpair failed and we were unable to recover it. 00:26:24.012 [2024-11-26 21:07:14.734056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.012 [2024-11-26 21:07:14.734086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.012 qpair failed and we were unable to recover it. 00:26:24.012 [2024-11-26 21:07:14.734217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.012 [2024-11-26 21:07:14.734246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.012 qpair failed and we were unable to recover it. 00:26:24.012 [2024-11-26 21:07:14.734441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.012 [2024-11-26 21:07:14.734488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.012 qpair failed and we were unable to recover it. 00:26:24.012 [2024-11-26 21:07:14.734611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.012 [2024-11-26 21:07:14.734640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.012 qpair failed and we were unable to recover it. 
00:26:24.012 [2024-11-26 21:07:14.734798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.012 [2024-11-26 21:07:14.734825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.012 qpair failed and we were unable to recover it. 00:26:24.012 [2024-11-26 21:07:14.734932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.012 [2024-11-26 21:07:14.734958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.012 qpair failed and we were unable to recover it. 00:26:24.012 [2024-11-26 21:07:14.735115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.012 [2024-11-26 21:07:14.735160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.012 qpair failed and we were unable to recover it. 00:26:24.012 [2024-11-26 21:07:14.735271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.012 [2024-11-26 21:07:14.735300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.012 qpair failed and we were unable to recover it. 00:26:24.012 [2024-11-26 21:07:14.735417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.012 [2024-11-26 21:07:14.735446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.012 qpair failed and we were unable to recover it. 
00:26:24.012 [2024-11-26 21:07:14.735593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.012 [2024-11-26 21:07:14.735621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.012 qpair failed and we were unable to recover it. 00:26:24.012 [2024-11-26 21:07:14.735761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.012 [2024-11-26 21:07:14.735788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.012 qpair failed and we were unable to recover it. 00:26:24.012 [2024-11-26 21:07:14.735928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.012 [2024-11-26 21:07:14.735955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.012 qpair failed and we were unable to recover it. 00:26:24.012 [2024-11-26 21:07:14.736100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.012 [2024-11-26 21:07:14.736134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.012 qpair failed and we were unable to recover it. 00:26:24.012 [2024-11-26 21:07:14.736282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.012 [2024-11-26 21:07:14.736311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.012 qpair failed and we were unable to recover it. 
00:26:24.012 [2024-11-26 21:07:14.736431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.012 [2024-11-26 21:07:14.736475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.012 qpair failed and we were unable to recover it. 00:26:24.012 [2024-11-26 21:07:14.736620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.012 [2024-11-26 21:07:14.736649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.012 qpair failed and we were unable to recover it. 00:26:24.012 [2024-11-26 21:07:14.736779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.012 [2024-11-26 21:07:14.736806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.012 qpair failed and we were unable to recover it. 00:26:24.012 [2024-11-26 21:07:14.736920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.012 [2024-11-26 21:07:14.736946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.012 qpair failed and we were unable to recover it. 00:26:24.012 [2024-11-26 21:07:14.737119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.012 [2024-11-26 21:07:14.737148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.012 qpair failed and we were unable to recover it. 
00:26:24.012 [2024-11-26 21:07:14.737269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.012 [2024-11-26 21:07:14.737299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.012 qpair failed and we were unable to recover it. 00:26:24.012 [2024-11-26 21:07:14.737441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.012 [2024-11-26 21:07:14.737467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.012 qpair failed and we were unable to recover it. 00:26:24.012 [2024-11-26 21:07:14.737635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.012 [2024-11-26 21:07:14.737664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.012 qpair failed and we were unable to recover it. 00:26:24.012 [2024-11-26 21:07:14.737801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.012 [2024-11-26 21:07:14.737827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.012 qpair failed and we were unable to recover it. 00:26:24.012 [2024-11-26 21:07:14.737939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.012 [2024-11-26 21:07:14.737965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.012 qpair failed and we were unable to recover it. 
00:26:24.012 [2024-11-26 21:07:14.738078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.012 [2024-11-26 21:07:14.738104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.012 qpair failed and we were unable to recover it. 00:26:24.012 [2024-11-26 21:07:14.738263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.012 [2024-11-26 21:07:14.738292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.012 qpair failed and we were unable to recover it. 00:26:24.012 [2024-11-26 21:07:14.738426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.012 [2024-11-26 21:07:14.738471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.012 qpair failed and we were unable to recover it. 00:26:24.012 [2024-11-26 21:07:14.738597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.013 [2024-11-26 21:07:14.738627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.013 qpair failed and we were unable to recover it. 00:26:24.013 [2024-11-26 21:07:14.738782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.013 [2024-11-26 21:07:14.738810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.013 qpair failed and we were unable to recover it. 
00:26:24.013 [2024-11-26 21:07:14.738958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.013 [2024-11-26 21:07:14.738987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.013 qpair failed and we were unable to recover it. 00:26:24.013 [2024-11-26 21:07:14.739135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.013 [2024-11-26 21:07:14.739164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.013 qpair failed and we were unable to recover it. 00:26:24.013 [2024-11-26 21:07:14.739279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.013 [2024-11-26 21:07:14.739309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.013 qpair failed and we were unable to recover it. 00:26:24.013 [2024-11-26 21:07:14.739429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.013 [2024-11-26 21:07:14.739459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.013 qpair failed and we were unable to recover it. 00:26:24.013 [2024-11-26 21:07:14.739579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.013 [2024-11-26 21:07:14.739608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.013 qpair failed and we were unable to recover it. 
00:26:24.013 [2024-11-26 21:07:14.739766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.013 [2024-11-26 21:07:14.739793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.013 qpair failed and we were unable to recover it. 00:26:24.013 [2024-11-26 21:07:14.739933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.013 [2024-11-26 21:07:14.739960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.013 qpair failed and we were unable to recover it. 00:26:24.013 [2024-11-26 21:07:14.740068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.013 [2024-11-26 21:07:14.740111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.013 qpair failed and we were unable to recover it. 00:26:24.013 [2024-11-26 21:07:14.740233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.013 [2024-11-26 21:07:14.740264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.013 qpair failed and we were unable to recover it. 00:26:24.013 [2024-11-26 21:07:14.740405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.013 [2024-11-26 21:07:14.740433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.013 qpair failed and we were unable to recover it. 
00:26:24.013 [2024-11-26 21:07:14.740580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.013 [2024-11-26 21:07:14.740613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.013 qpair failed and we were unable to recover it. 00:26:24.013 [2024-11-26 21:07:14.740753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.013 [2024-11-26 21:07:14.740780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.013 qpair failed and we were unable to recover it. 00:26:24.013 [2024-11-26 21:07:14.740933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.013 [2024-11-26 21:07:14.740963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.013 qpair failed and we were unable to recover it. 00:26:24.013 [2024-11-26 21:07:14.741084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.013 [2024-11-26 21:07:14.741113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.013 qpair failed and we were unable to recover it. 00:26:24.013 [2024-11-26 21:07:14.741217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.013 [2024-11-26 21:07:14.741247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.013 qpair failed and we were unable to recover it. 
00:26:24.013 [2024-11-26 21:07:14.741401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.013 [2024-11-26 21:07:14.741430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.013 qpair failed and we were unable to recover it. 00:26:24.013 [2024-11-26 21:07:14.741576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.013 [2024-11-26 21:07:14.741606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.013 qpair failed and we were unable to recover it. 00:26:24.013 [2024-11-26 21:07:14.741770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.013 [2024-11-26 21:07:14.741799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.013 qpair failed and we were unable to recover it. 00:26:24.013 [2024-11-26 21:07:14.741931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.013 [2024-11-26 21:07:14.741972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.013 qpair failed and we were unable to recover it. 00:26:24.013 [2024-11-26 21:07:14.742119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.013 [2024-11-26 21:07:14.742167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.013 qpair failed and we were unable to recover it. 
00:26:24.013 [2024-11-26 21:07:14.742293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.013 [2024-11-26 21:07:14.742324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.013 qpair failed and we were unable to recover it. 00:26:24.013 [2024-11-26 21:07:14.742479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.013 [2024-11-26 21:07:14.742506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.013 qpair failed and we were unable to recover it. 00:26:24.013 [2024-11-26 21:07:14.742658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.013 [2024-11-26 21:07:14.742703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.013 qpair failed and we were unable to recover it. 00:26:24.013 [2024-11-26 21:07:14.742854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.013 [2024-11-26 21:07:14.742898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.013 qpair failed and we were unable to recover it. 00:26:24.013 [2024-11-26 21:07:14.743033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.013 [2024-11-26 21:07:14.743079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.013 qpair failed and we were unable to recover it. 
00:26:24.013 [2024-11-26 21:07:14.743270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.013 [2024-11-26 21:07:14.743300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.013 qpair failed and we were unable to recover it. 00:26:24.013 [2024-11-26 21:07:14.743479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.013 [2024-11-26 21:07:14.743524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.013 qpair failed and we were unable to recover it. 00:26:24.013 [2024-11-26 21:07:14.743632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.013 [2024-11-26 21:07:14.743660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.013 qpair failed and we were unable to recover it. 00:26:24.013 [2024-11-26 21:07:14.743808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.013 [2024-11-26 21:07:14.743838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.013 qpair failed and we were unable to recover it. 00:26:24.013 [2024-11-26 21:07:14.743987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.013 [2024-11-26 21:07:14.744019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.013 qpair failed and we were unable to recover it. 
00:26:24.013 [2024-11-26 21:07:14.744153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.013 [2024-11-26 21:07:14.744182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.013 qpair failed and we were unable to recover it. 00:26:24.013 [2024-11-26 21:07:14.744303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.013 [2024-11-26 21:07:14.744333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.013 qpair failed and we were unable to recover it. 00:26:24.013 [2024-11-26 21:07:14.744504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.013 [2024-11-26 21:07:14.744533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.013 qpair failed and we were unable to recover it. 00:26:24.013 [2024-11-26 21:07:14.744670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.013 [2024-11-26 21:07:14.744713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.013 qpair failed and we were unable to recover it. 00:26:24.013 [2024-11-26 21:07:14.744865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.013 [2024-11-26 21:07:14.744895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.013 qpair failed and we were unable to recover it. 
00:26:24.013 [2024-11-26 21:07:14.745070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.013 [2024-11-26 21:07:14.745114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.013 qpair failed and we were unable to recover it. 00:26:24.014 [2024-11-26 21:07:14.745269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.014 [2024-11-26 21:07:14.745313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.014 qpair failed and we were unable to recover it. 00:26:24.014 [2024-11-26 21:07:14.745427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.014 [2024-11-26 21:07:14.745459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.014 qpair failed and we were unable to recover it. 00:26:24.014 [2024-11-26 21:07:14.745591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.014 [2024-11-26 21:07:14.745617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.014 qpair failed and we were unable to recover it. 00:26:24.014 [2024-11-26 21:07:14.745752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.014 [2024-11-26 21:07:14.745783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.014 qpair failed and we were unable to recover it. 
00:26:24.014 [2024-11-26 21:07:14.745937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.014 [2024-11-26 21:07:14.745982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.014 qpair failed and we were unable to recover it. 00:26:24.014 [2024-11-26 21:07:14.746141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.014 [2024-11-26 21:07:14.746186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.014 qpair failed and we were unable to recover it. 00:26:24.014 [2024-11-26 21:07:14.746328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.014 [2024-11-26 21:07:14.746372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.014 qpair failed and we were unable to recover it. 00:26:24.014 [2024-11-26 21:07:14.746485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.014 [2024-11-26 21:07:14.746512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.014 qpair failed and we were unable to recover it. 00:26:24.014 [2024-11-26 21:07:14.746672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.014 [2024-11-26 21:07:14.746708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.014 qpair failed and we were unable to recover it. 
00:26:24.014 [2024-11-26 21:07:14.746832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.014 [2024-11-26 21:07:14.746880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.014 qpair failed and we were unable to recover it. 00:26:24.014 [2024-11-26 21:07:14.747031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.014 [2024-11-26 21:07:14.747069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.014 qpair failed and we were unable to recover it. 00:26:24.014 [2024-11-26 21:07:14.747226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.014 [2024-11-26 21:07:14.747271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.014 qpair failed and we were unable to recover it. 00:26:24.014 [2024-11-26 21:07:14.747427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.014 [2024-11-26 21:07:14.747454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.014 qpair failed and we were unable to recover it. 00:26:24.014 [2024-11-26 21:07:14.747570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.014 [2024-11-26 21:07:14.747597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.014 qpair failed and we were unable to recover it. 
00:26:24.014 [2024-11-26 21:07:14.747731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.014 [2024-11-26 21:07:14.747762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.014 qpair failed and we were unable to recover it. 00:26:24.014 [2024-11-26 21:07:14.747970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.014 [2024-11-26 21:07:14.748015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.014 qpair failed and we were unable to recover it. 00:26:24.014 [2024-11-26 21:07:14.748161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.014 [2024-11-26 21:07:14.748206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.014 qpair failed and we were unable to recover it. 00:26:24.014 [2024-11-26 21:07:14.748326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.014 [2024-11-26 21:07:14.748353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.014 qpair failed and we were unable to recover it. 00:26:24.014 [2024-11-26 21:07:14.748462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.014 [2024-11-26 21:07:14.748489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.014 qpair failed and we were unable to recover it. 
00:26:24.014 [2024-11-26 21:07:14.748624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.014 [2024-11-26 21:07:14.748651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.014 qpair failed and we were unable to recover it. 00:26:24.014 [2024-11-26 21:07:14.748819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.014 [2024-11-26 21:07:14.748863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.014 qpair failed and we were unable to recover it. 00:26:24.014 [2024-11-26 21:07:14.749017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.014 [2024-11-26 21:07:14.749071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.014 qpair failed and we were unable to recover it. 00:26:24.014 [2024-11-26 21:07:14.749230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.014 [2024-11-26 21:07:14.749278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.014 qpair failed and we were unable to recover it. 00:26:24.014 [2024-11-26 21:07:14.749427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.014 [2024-11-26 21:07:14.749454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.014 qpair failed and we were unable to recover it. 
00:26:24.014 [2024-11-26 21:07:14.749564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.014 [2024-11-26 21:07:14.749590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.014 qpair failed and we were unable to recover it. 00:26:24.014 [2024-11-26 21:07:14.749740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.014 [2024-11-26 21:07:14.749771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.014 qpair failed and we were unable to recover it. 00:26:24.014 [2024-11-26 21:07:14.749956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.014 [2024-11-26 21:07:14.750008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.014 qpair failed and we were unable to recover it. 00:26:24.014 [2024-11-26 21:07:14.750175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.014 [2024-11-26 21:07:14.750220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.014 qpair failed and we were unable to recover it. 00:26:24.014 [2024-11-26 21:07:14.750407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.014 [2024-11-26 21:07:14.750446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.014 qpair failed and we were unable to recover it. 
00:26:24.014 [2024-11-26 21:07:14.750590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.014 [2024-11-26 21:07:14.750618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.014 qpair failed and we were unable to recover it. 00:26:24.014 [2024-11-26 21:07:14.750768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.014 [2024-11-26 21:07:14.750800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.014 qpair failed and we were unable to recover it. 00:26:24.014 [2024-11-26 21:07:14.750999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.014 [2024-11-26 21:07:14.751030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.014 qpair failed and we were unable to recover it. 00:26:24.014 [2024-11-26 21:07:14.751173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.014 [2024-11-26 21:07:14.751203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.014 qpair failed and we were unable to recover it. 00:26:24.014 [2024-11-26 21:07:14.751354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.015 [2024-11-26 21:07:14.751383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.015 qpair failed and we were unable to recover it. 
00:26:24.015 [2024-11-26 21:07:14.751546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.015 [2024-11-26 21:07:14.751577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.015 qpair failed and we were unable to recover it. 00:26:24.015 [2024-11-26 21:07:14.751750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.015 [2024-11-26 21:07:14.751778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.015 qpair failed and we were unable to recover it. 00:26:24.015 [2024-11-26 21:07:14.751905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.015 [2024-11-26 21:07:14.751932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.015 qpair failed and we were unable to recover it. 00:26:24.015 [2024-11-26 21:07:14.752133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.015 [2024-11-26 21:07:14.752163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.015 qpair failed and we were unable to recover it. 00:26:24.015 [2024-11-26 21:07:14.752335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.015 [2024-11-26 21:07:14.752364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.015 qpair failed and we were unable to recover it. 
00:26:24.015 [2024-11-26 21:07:14.752487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.015 [2024-11-26 21:07:14.752517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.015 qpair failed and we were unable to recover it. 00:26:24.015 [2024-11-26 21:07:14.752690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.015 [2024-11-26 21:07:14.752717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.015 qpair failed and we were unable to recover it. 00:26:24.015 [2024-11-26 21:07:14.752828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.015 [2024-11-26 21:07:14.752860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.015 qpair failed and we were unable to recover it. 00:26:24.015 [2024-11-26 21:07:14.752989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.015 [2024-11-26 21:07:14.753019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.015 qpair failed and we were unable to recover it. 00:26:24.015 [2024-11-26 21:07:14.753141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.015 [2024-11-26 21:07:14.753185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.015 qpair failed and we were unable to recover it. 
00:26:24.015 [2024-11-26 21:07:14.753339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.015 [2024-11-26 21:07:14.753369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.015 qpair failed and we were unable to recover it. 00:26:24.015 [2024-11-26 21:07:14.753511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.015 [2024-11-26 21:07:14.753541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.015 qpair failed and we were unable to recover it. 00:26:24.015 [2024-11-26 21:07:14.753683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.015 [2024-11-26 21:07:14.753716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.015 qpair failed and we were unable to recover it. 00:26:24.015 [2024-11-26 21:07:14.753827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.015 [2024-11-26 21:07:14.753854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.015 qpair failed and we were unable to recover it. 00:26:24.015 [2024-11-26 21:07:14.754057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.015 [2024-11-26 21:07:14.754101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.015 qpair failed and we were unable to recover it. 
00:26:24.015 [2024-11-26 21:07:14.754258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.015 [2024-11-26 21:07:14.754290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.015 qpair failed and we were unable to recover it. 00:26:24.015 [2024-11-26 21:07:14.754412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.015 [2024-11-26 21:07:14.754443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.015 qpair failed and we were unable to recover it. 00:26:24.015 [2024-11-26 21:07:14.754594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.015 [2024-11-26 21:07:14.754623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.015 qpair failed and we were unable to recover it. 00:26:24.015 [2024-11-26 21:07:14.754778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.015 [2024-11-26 21:07:14.754806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.015 qpair failed and we were unable to recover it. 00:26:24.015 [2024-11-26 21:07:14.754908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.015 [2024-11-26 21:07:14.754935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.015 qpair failed and we were unable to recover it. 
00:26:24.015 [2024-11-26 21:07:14.755098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.015 [2024-11-26 21:07:14.755127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.015 qpair failed and we were unable to recover it. 00:26:24.015 [2024-11-26 21:07:14.755337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.015 [2024-11-26 21:07:14.755367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.015 qpair failed and we were unable to recover it. 00:26:24.015 [2024-11-26 21:07:14.755471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.015 [2024-11-26 21:07:14.755500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.015 qpair failed and we were unable to recover it. 00:26:24.015 [2024-11-26 21:07:14.755674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.015 [2024-11-26 21:07:14.755713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.015 qpair failed and we were unable to recover it. 00:26:24.015 [2024-11-26 21:07:14.755855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.015 [2024-11-26 21:07:14.755882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.015 qpair failed and we were unable to recover it. 
00:26:24.015 [2024-11-26 21:07:14.756015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.015 [2024-11-26 21:07:14.756059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.015 qpair failed and we were unable to recover it.
[the same posix.c:1054 connect() failed (errno = 111) / nvme_tcp.c:2288 sock connection error pair repeats continuously from 21:07:14.756 through 21:07:14.776, alternating between tqpair=0x7feef8000b90 and tqpair=0x637fa0, always against addr=10.0.0.2, port=4420; every attempt ends with "qpair failed and we were unable to recover it."]
00:26:24.018 [2024-11-26 21:07:14.776293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.018 [2024-11-26 21:07:14.776333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.018 qpair failed and we were unable to recover it. 00:26:24.018 [2024-11-26 21:07:14.776505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.018 [2024-11-26 21:07:14.776535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.018 qpair failed and we were unable to recover it. 00:26:24.018 [2024-11-26 21:07:14.776692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.018 [2024-11-26 21:07:14.776722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.018 qpair failed and we were unable to recover it. 00:26:24.018 [2024-11-26 21:07:14.776858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.018 [2024-11-26 21:07:14.776885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.018 qpair failed and we were unable to recover it. 00:26:24.018 [2024-11-26 21:07:14.777008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.018 [2024-11-26 21:07:14.777034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.018 qpair failed and we were unable to recover it. 
00:26:24.019 [2024-11-26 21:07:14.777166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.019 [2024-11-26 21:07:14.777192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.019 qpair failed and we were unable to recover it. 00:26:24.019 [2024-11-26 21:07:14.777341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.019 [2024-11-26 21:07:14.777370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.019 qpair failed and we were unable to recover it. 00:26:24.019 [2024-11-26 21:07:14.777499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.019 [2024-11-26 21:07:14.777527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.019 qpair failed and we were unable to recover it. 00:26:24.019 [2024-11-26 21:07:14.777683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.019 [2024-11-26 21:07:14.777722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.019 qpair failed and we were unable to recover it. 00:26:24.019 [2024-11-26 21:07:14.777854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.019 [2024-11-26 21:07:14.777884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.019 qpair failed and we were unable to recover it. 
00:26:24.019 [2024-11-26 21:07:14.778035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.019 [2024-11-26 21:07:14.778062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.019 qpair failed and we were unable to recover it. 00:26:24.019 [2024-11-26 21:07:14.778195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.019 [2024-11-26 21:07:14.778238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.019 qpair failed and we were unable to recover it. 00:26:24.019 [2024-11-26 21:07:14.778410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.019 [2024-11-26 21:07:14.778439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.019 qpair failed and we were unable to recover it. 00:26:24.019 [2024-11-26 21:07:14.778565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.019 [2024-11-26 21:07:14.778592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.019 qpair failed and we were unable to recover it. 00:26:24.019 [2024-11-26 21:07:14.778727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.019 [2024-11-26 21:07:14.778754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.019 qpair failed and we were unable to recover it. 
00:26:24.019 [2024-11-26 21:07:14.778928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.019 [2024-11-26 21:07:14.778955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.019 qpair failed and we were unable to recover it. 00:26:24.019 [2024-11-26 21:07:14.779090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.019 [2024-11-26 21:07:14.779121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.019 qpair failed and we were unable to recover it. 00:26:24.019 [2024-11-26 21:07:14.779276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.019 [2024-11-26 21:07:14.779305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.019 qpair failed and we were unable to recover it. 00:26:24.019 [2024-11-26 21:07:14.779453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.019 [2024-11-26 21:07:14.779482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.019 qpair failed and we were unable to recover it. 00:26:24.019 [2024-11-26 21:07:14.779621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.019 [2024-11-26 21:07:14.779647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.019 qpair failed and we were unable to recover it. 
00:26:24.019 [2024-11-26 21:07:14.779790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.019 [2024-11-26 21:07:14.779817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.019 qpair failed and we were unable to recover it. 00:26:24.019 [2024-11-26 21:07:14.779954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.019 [2024-11-26 21:07:14.779983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.019 qpair failed and we were unable to recover it. 00:26:24.019 [2024-11-26 21:07:14.780135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.019 [2024-11-26 21:07:14.780161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.019 qpair failed and we were unable to recover it. 00:26:24.019 [2024-11-26 21:07:14.780291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.019 [2024-11-26 21:07:14.780334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.019 qpair failed and we were unable to recover it. 00:26:24.019 [2024-11-26 21:07:14.780448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.019 [2024-11-26 21:07:14.780477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.019 qpair failed and we were unable to recover it. 
00:26:24.019 [2024-11-26 21:07:14.780637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.019 [2024-11-26 21:07:14.780664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.019 qpair failed and we were unable to recover it. 00:26:24.019 [2024-11-26 21:07:14.780779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.019 [2024-11-26 21:07:14.780805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.019 qpair failed and we were unable to recover it. 00:26:24.019 [2024-11-26 21:07:14.780943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.019 [2024-11-26 21:07:14.780969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.019 qpair failed and we were unable to recover it. 00:26:24.019 [2024-11-26 21:07:14.781118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.019 [2024-11-26 21:07:14.781144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.019 qpair failed and we were unable to recover it. 00:26:24.019 [2024-11-26 21:07:14.781252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.019 [2024-11-26 21:07:14.781295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.019 qpair failed and we were unable to recover it. 
00:26:24.019 [2024-11-26 21:07:14.781442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.019 [2024-11-26 21:07:14.781472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.019 qpair failed and we were unable to recover it. 00:26:24.019 [2024-11-26 21:07:14.781624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.019 [2024-11-26 21:07:14.781650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.019 qpair failed and we were unable to recover it. 00:26:24.019 [2024-11-26 21:07:14.781812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.019 [2024-11-26 21:07:14.781842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.019 qpair failed and we were unable to recover it. 00:26:24.019 [2024-11-26 21:07:14.781991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.019 [2024-11-26 21:07:14.782020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.019 qpair failed and we were unable to recover it. 00:26:24.019 [2024-11-26 21:07:14.782154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.019 [2024-11-26 21:07:14.782180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.019 qpair failed and we were unable to recover it. 
00:26:24.019 [2024-11-26 21:07:14.782317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.019 [2024-11-26 21:07:14.782346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.019 qpair failed and we were unable to recover it. 00:26:24.019 [2024-11-26 21:07:14.782493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.019 [2024-11-26 21:07:14.782534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.019 qpair failed and we were unable to recover it. 00:26:24.019 [2024-11-26 21:07:14.782668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.019 [2024-11-26 21:07:14.782709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.019 qpair failed and we were unable to recover it. 00:26:24.019 [2024-11-26 21:07:14.782812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.019 [2024-11-26 21:07:14.782850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.019 qpair failed and we were unable to recover it. 00:26:24.019 [2024-11-26 21:07:14.782984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.019 [2024-11-26 21:07:14.783014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.019 qpair failed and we were unable to recover it. 
00:26:24.019 [2024-11-26 21:07:14.783164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.019 [2024-11-26 21:07:14.783190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.019 qpair failed and we were unable to recover it. 00:26:24.019 [2024-11-26 21:07:14.783329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.019 [2024-11-26 21:07:14.783355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.019 qpair failed and we were unable to recover it. 00:26:24.019 [2024-11-26 21:07:14.783508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.020 [2024-11-26 21:07:14.783537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.020 qpair failed and we were unable to recover it. 00:26:24.020 [2024-11-26 21:07:14.783654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.020 [2024-11-26 21:07:14.783719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.020 qpair failed and we were unable to recover it. 00:26:24.020 [2024-11-26 21:07:14.783853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.020 [2024-11-26 21:07:14.783879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.020 qpair failed and we were unable to recover it. 
00:26:24.020 [2024-11-26 21:07:14.784014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.020 [2024-11-26 21:07:14.784041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.020 qpair failed and we were unable to recover it. 00:26:24.020 [2024-11-26 21:07:14.784174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.020 [2024-11-26 21:07:14.784200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.020 qpair failed and we were unable to recover it. 00:26:24.020 [2024-11-26 21:07:14.784332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.020 [2024-11-26 21:07:14.784376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.020 qpair failed and we were unable to recover it. 00:26:24.020 [2024-11-26 21:07:14.784518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.020 [2024-11-26 21:07:14.784548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.020 qpair failed and we were unable to recover it. 00:26:24.020 [2024-11-26 21:07:14.784712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.020 [2024-11-26 21:07:14.784739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.020 qpair failed and we were unable to recover it. 
00:26:24.020 [2024-11-26 21:07:14.784882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.020 [2024-11-26 21:07:14.784909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.020 qpair failed and we were unable to recover it. 00:26:24.020 [2024-11-26 21:07:14.785035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.020 [2024-11-26 21:07:14.785065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.020 qpair failed and we were unable to recover it. 00:26:24.020 [2024-11-26 21:07:14.785225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.020 [2024-11-26 21:07:14.785252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.020 qpair failed and we were unable to recover it. 00:26:24.020 [2024-11-26 21:07:14.785362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.020 [2024-11-26 21:07:14.785389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.020 qpair failed and we were unable to recover it. 00:26:24.020 [2024-11-26 21:07:14.785541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.020 [2024-11-26 21:07:14.785571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.020 qpair failed and we were unable to recover it. 
00:26:24.020 [2024-11-26 21:07:14.785754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.020 [2024-11-26 21:07:14.785781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.020 qpair failed and we were unable to recover it. 00:26:24.020 [2024-11-26 21:07:14.785940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.020 [2024-11-26 21:07:14.785970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.020 qpair failed and we were unable to recover it. 00:26:24.020 [2024-11-26 21:07:14.786090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.020 [2024-11-26 21:07:14.786123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.020 qpair failed and we were unable to recover it. 00:26:24.020 [2024-11-26 21:07:14.786251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.020 [2024-11-26 21:07:14.786277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.020 qpair failed and we were unable to recover it. 00:26:24.020 [2024-11-26 21:07:14.786410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.020 [2024-11-26 21:07:14.786437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.020 qpair failed and we were unable to recover it. 
00:26:24.020 [2024-11-26 21:07:14.786565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.020 [2024-11-26 21:07:14.786595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.020 qpair failed and we were unable to recover it. 00:26:24.020 [2024-11-26 21:07:14.786733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.020 [2024-11-26 21:07:14.786761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.020 qpair failed and we were unable to recover it. 00:26:24.020 [2024-11-26 21:07:14.786897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.020 [2024-11-26 21:07:14.786941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.020 qpair failed and we were unable to recover it. 00:26:24.020 [2024-11-26 21:07:14.787085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.020 [2024-11-26 21:07:14.787113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.020 qpair failed and we were unable to recover it. 00:26:24.020 [2024-11-26 21:07:14.787241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.020 [2024-11-26 21:07:14.787268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.020 qpair failed and we were unable to recover it. 
00:26:24.020 [2024-11-26 21:07:14.787365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.020 [2024-11-26 21:07:14.787391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.020 qpair failed and we were unable to recover it. 00:26:24.020 [2024-11-26 21:07:14.787547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.020 [2024-11-26 21:07:14.787575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.020 qpair failed and we were unable to recover it. 00:26:24.020 [2024-11-26 21:07:14.787733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.020 [2024-11-26 21:07:14.787759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.020 qpair failed and we were unable to recover it. 00:26:24.020 [2024-11-26 21:07:14.787873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.020 [2024-11-26 21:07:14.787899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.020 qpair failed and we were unable to recover it. 00:26:24.020 [2024-11-26 21:07:14.788061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.020 [2024-11-26 21:07:14.788090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.020 qpair failed and we were unable to recover it. 
00:26:24.020 [2024-11-26 21:07:14.788235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.020 [2024-11-26 21:07:14.788261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.020 qpair failed and we were unable to recover it. 00:26:24.020 [2024-11-26 21:07:14.788379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.020 [2024-11-26 21:07:14.788405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.020 qpair failed and we were unable to recover it. 00:26:24.020 [2024-11-26 21:07:14.788550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.020 [2024-11-26 21:07:14.788579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.020 qpair failed and we were unable to recover it. 00:26:24.020 [2024-11-26 21:07:14.788698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.020 [2024-11-26 21:07:14.788725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.020 qpair failed and we were unable to recover it. 00:26:24.020 [2024-11-26 21:07:14.788844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.020 [2024-11-26 21:07:14.788871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.020 qpair failed and we were unable to recover it. 
00:26:24.020 [2024-11-26 21:07:14.789019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.020 [2024-11-26 21:07:14.789048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.020 qpair failed and we were unable to recover it. 00:26:24.020 [2024-11-26 21:07:14.789207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.020 [2024-11-26 21:07:14.789233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.020 qpair failed and we were unable to recover it. 00:26:24.020 [2024-11-26 21:07:14.789331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.020 [2024-11-26 21:07:14.789358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.020 qpair failed and we were unable to recover it. 00:26:24.020 [2024-11-26 21:07:14.789493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.020 [2024-11-26 21:07:14.789523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.020 qpair failed and we were unable to recover it. 00:26:24.020 [2024-11-26 21:07:14.789661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.020 [2024-11-26 21:07:14.789696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.020 qpair failed and we were unable to recover it. 
00:26:24.023 [2024-11-26 21:07:14.804714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.023 [2024-11-26 21:07:14.804753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.023 qpair failed and we were unable to recover it. 
00:26:24.023 [2024-11-26 21:07:14.808678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.023 [2024-11-26 21:07:14.808724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.023 qpair failed and we were unable to recover it. 00:26:24.023 [2024-11-26 21:07:14.808838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.024 [2024-11-26 21:07:14.808868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.024 qpair failed and we were unable to recover it. 00:26:24.024 [2024-11-26 21:07:14.809006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.024 [2024-11-26 21:07:14.809036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.024 qpair failed and we were unable to recover it. 00:26:24.024 [2024-11-26 21:07:14.809181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.024 [2024-11-26 21:07:14.809210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.024 qpair failed and we were unable to recover it. 00:26:24.024 [2024-11-26 21:07:14.809362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.024 [2024-11-26 21:07:14.809392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.024 qpair failed and we were unable to recover it. 
00:26:24.024 [2024-11-26 21:07:14.809532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.024 [2024-11-26 21:07:14.809561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.024 qpair failed and we were unable to recover it. 00:26:24.024 [2024-11-26 21:07:14.809693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.024 [2024-11-26 21:07:14.809739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.024 qpair failed and we were unable to recover it. 00:26:24.024 [2024-11-26 21:07:14.809861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.024 [2024-11-26 21:07:14.809891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.024 qpair failed and we were unable to recover it. 00:26:24.024 [2024-11-26 21:07:14.810011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.024 [2024-11-26 21:07:14.810053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.024 qpair failed and we were unable to recover it. 00:26:24.024 [2024-11-26 21:07:14.810253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.024 [2024-11-26 21:07:14.810301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.024 qpair failed and we were unable to recover it. 
00:26:24.024 [2024-11-26 21:07:14.810453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.024 [2024-11-26 21:07:14.810483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.024 qpair failed and we were unable to recover it. 00:26:24.024 [2024-11-26 21:07:14.810607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.024 [2024-11-26 21:07:14.810636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.024 qpair failed and we were unable to recover it. 00:26:24.024 [2024-11-26 21:07:14.810832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.024 [2024-11-26 21:07:14.810881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.024 qpair failed and we were unable to recover it. 00:26:24.024 [2024-11-26 21:07:14.811039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.024 [2024-11-26 21:07:14.811083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.024 qpair failed and we were unable to recover it. 00:26:24.024 [2024-11-26 21:07:14.811238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.024 [2024-11-26 21:07:14.811281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.024 qpair failed and we were unable to recover it. 
00:26:24.024 [2024-11-26 21:07:14.811411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.024 [2024-11-26 21:07:14.811455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.024 qpair failed and we were unable to recover it. 00:26:24.024 [2024-11-26 21:07:14.811590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.024 [2024-11-26 21:07:14.811617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.024 qpair failed and we were unable to recover it. 00:26:24.024 [2024-11-26 21:07:14.811776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.024 [2024-11-26 21:07:14.811824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.024 qpair failed and we were unable to recover it. 00:26:24.024 [2024-11-26 21:07:14.811947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.024 [2024-11-26 21:07:14.811992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.024 qpair failed and we were unable to recover it. 00:26:24.024 [2024-11-26 21:07:14.812123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.024 [2024-11-26 21:07:14.812150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.024 qpair failed and we were unable to recover it. 
00:26:24.024 [2024-11-26 21:07:14.812259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.024 [2024-11-26 21:07:14.812286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.024 qpair failed and we were unable to recover it. 00:26:24.024 [2024-11-26 21:07:14.812499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.024 [2024-11-26 21:07:14.812528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.024 qpair failed and we were unable to recover it. 00:26:24.024 [2024-11-26 21:07:14.812650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.024 [2024-11-26 21:07:14.812677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.024 qpair failed and we were unable to recover it. 00:26:24.024 [2024-11-26 21:07:14.812787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.024 [2024-11-26 21:07:14.812814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.024 qpair failed and we were unable to recover it. 00:26:24.024 [2024-11-26 21:07:14.812920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.024 [2024-11-26 21:07:14.812947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.024 qpair failed and we were unable to recover it. 
00:26:24.024 [2024-11-26 21:07:14.813078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.024 [2024-11-26 21:07:14.813105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.024 qpair failed and we were unable to recover it. 00:26:24.024 [2024-11-26 21:07:14.813224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.024 [2024-11-26 21:07:14.813250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.024 qpair failed and we were unable to recover it. 00:26:24.024 [2024-11-26 21:07:14.813378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.024 [2024-11-26 21:07:14.813406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.024 qpair failed and we were unable to recover it. 00:26:24.024 [2024-11-26 21:07:14.813564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.024 [2024-11-26 21:07:14.813590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.024 qpair failed and we were unable to recover it. 00:26:24.024 [2024-11-26 21:07:14.813743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.024 [2024-11-26 21:07:14.813774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.024 qpair failed and we were unable to recover it. 
00:26:24.024 [2024-11-26 21:07:14.813887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.024 [2024-11-26 21:07:14.813921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.024 qpair failed and we were unable to recover it. 00:26:24.024 [2024-11-26 21:07:14.814094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.024 [2024-11-26 21:07:14.814123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.024 qpair failed and we were unable to recover it. 00:26:24.024 [2024-11-26 21:07:14.814304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.024 [2024-11-26 21:07:14.814333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.024 qpair failed and we were unable to recover it. 00:26:24.024 [2024-11-26 21:07:14.814488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.024 [2024-11-26 21:07:14.814515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.024 qpair failed and we were unable to recover it. 00:26:24.024 [2024-11-26 21:07:14.814626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.024 [2024-11-26 21:07:14.814652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.024 qpair failed and we were unable to recover it. 
00:26:24.024 [2024-11-26 21:07:14.814881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.024 [2024-11-26 21:07:14.814908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.024 qpair failed and we were unable to recover it. 00:26:24.024 [2024-11-26 21:07:14.815121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.024 [2024-11-26 21:07:14.815168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.024 qpair failed and we were unable to recover it. 00:26:24.024 [2024-11-26 21:07:14.815320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.024 [2024-11-26 21:07:14.815349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.024 qpair failed and we were unable to recover it. 00:26:24.024 [2024-11-26 21:07:14.815500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.024 [2024-11-26 21:07:14.815529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.025 qpair failed and we were unable to recover it. 00:26:24.025 [2024-11-26 21:07:14.815652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.025 [2024-11-26 21:07:14.815679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.025 qpair failed and we were unable to recover it. 
00:26:24.025 [2024-11-26 21:07:14.815812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.025 [2024-11-26 21:07:14.815839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.025 qpair failed and we were unable to recover it. 00:26:24.025 [2024-11-26 21:07:14.815952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.025 [2024-11-26 21:07:14.815979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.025 qpair failed and we were unable to recover it. 00:26:24.025 [2024-11-26 21:07:14.816141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.025 [2024-11-26 21:07:14.816170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.025 qpair failed and we were unable to recover it. 00:26:24.025 [2024-11-26 21:07:14.816275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.025 [2024-11-26 21:07:14.816304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.025 qpair failed and we were unable to recover it. 00:26:24.025 [2024-11-26 21:07:14.816428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.025 [2024-11-26 21:07:14.816462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.025 qpair failed and we were unable to recover it. 
00:26:24.025 [2024-11-26 21:07:14.816589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.025 [2024-11-26 21:07:14.816616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.025 qpair failed and we were unable to recover it. 00:26:24.025 [2024-11-26 21:07:14.816723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.025 [2024-11-26 21:07:14.816750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.025 qpair failed and we were unable to recover it. 00:26:24.025 [2024-11-26 21:07:14.816860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.025 [2024-11-26 21:07:14.816886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.025 qpair failed and we were unable to recover it. 00:26:24.025 [2024-11-26 21:07:14.817046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.025 [2024-11-26 21:07:14.817072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.025 qpair failed and we were unable to recover it. 00:26:24.025 [2024-11-26 21:07:14.817171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.025 [2024-11-26 21:07:14.817215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.025 qpair failed and we were unable to recover it. 
00:26:24.025 [2024-11-26 21:07:14.817330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.025 [2024-11-26 21:07:14.817359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.025 qpair failed and we were unable to recover it. 00:26:24.025 [2024-11-26 21:07:14.817510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.025 [2024-11-26 21:07:14.817540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.025 qpair failed and we were unable to recover it. 00:26:24.025 [2024-11-26 21:07:14.817692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.025 [2024-11-26 21:07:14.817722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.025 qpair failed and we were unable to recover it. 00:26:24.025 [2024-11-26 21:07:14.817856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.025 [2024-11-26 21:07:14.817882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.025 qpair failed and we were unable to recover it. 00:26:24.025 [2024-11-26 21:07:14.818008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.025 [2024-11-26 21:07:14.818037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.025 qpair failed and we were unable to recover it. 
00:26:24.025 [2024-11-26 21:07:14.818160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.025 [2024-11-26 21:07:14.818189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.025 qpair failed and we were unable to recover it. 00:26:24.025 [2024-11-26 21:07:14.818311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.025 [2024-11-26 21:07:14.818340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.025 qpair failed and we were unable to recover it. 00:26:24.025 [2024-11-26 21:07:14.818479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.025 [2024-11-26 21:07:14.818514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.025 qpair failed and we were unable to recover it. 00:26:24.025 [2024-11-26 21:07:14.818660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.025 [2024-11-26 21:07:14.818709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.025 qpair failed and we were unable to recover it. 00:26:24.025 [2024-11-26 21:07:14.818837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.025 [2024-11-26 21:07:14.818864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.025 qpair failed and we were unable to recover it. 
00:26:24.025 [2024-11-26 21:07:14.819007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.025 [2024-11-26 21:07:14.819037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.025 qpair failed and we were unable to recover it. 00:26:24.025 [2024-11-26 21:07:14.819189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.025 [2024-11-26 21:07:14.819218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.025 qpair failed and we were unable to recover it. 00:26:24.025 [2024-11-26 21:07:14.819439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.025 [2024-11-26 21:07:14.819468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.025 qpair failed and we were unable to recover it. 00:26:24.025 [2024-11-26 21:07:14.819600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.025 [2024-11-26 21:07:14.819627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.025 qpair failed and we were unable to recover it. 00:26:24.025 [2024-11-26 21:07:14.819786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.025 [2024-11-26 21:07:14.819813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.025 qpair failed and we were unable to recover it. 
00:26:24.025 [2024-11-26 21:07:14.819911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.025 [2024-11-26 21:07:14.819937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.025 qpair failed and we were unable to recover it. 00:26:24.025 [2024-11-26 21:07:14.820119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.025 [2024-11-26 21:07:14.820148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.025 qpair failed and we were unable to recover it. 00:26:24.025 [2024-11-26 21:07:14.820265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.025 [2024-11-26 21:07:14.820295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.025 qpair failed and we were unable to recover it. 00:26:24.025 [2024-11-26 21:07:14.820441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.025 [2024-11-26 21:07:14.820470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.025 qpair failed and we were unable to recover it. 00:26:24.025 [2024-11-26 21:07:14.820615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.025 [2024-11-26 21:07:14.820644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.025 qpair failed and we were unable to recover it. 
00:26:24.025 [2024-11-26 21:07:14.820791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.025 [2024-11-26 21:07:14.820818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.025 qpair failed and we were unable to recover it. 00:26:24.025 [2024-11-26 21:07:14.820986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.025 [2024-11-26 21:07:14.821013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.025 qpair failed and we were unable to recover it. 00:26:24.025 [2024-11-26 21:07:14.821177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.025 [2024-11-26 21:07:14.821206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.025 qpair failed and we were unable to recover it. 00:26:24.025 [2024-11-26 21:07:14.821360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.025 [2024-11-26 21:07:14.821390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.025 qpair failed and we were unable to recover it. 00:26:24.025 [2024-11-26 21:07:14.821505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.025 [2024-11-26 21:07:14.821534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.025 qpair failed and we were unable to recover it. 
00:26:24.025 [2024-11-26 21:07:14.821680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.025 [2024-11-26 21:07:14.821711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.025 qpair failed and we were unable to recover it. 00:26:24.025 [2024-11-26 21:07:14.821818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.026 [2024-11-26 21:07:14.821844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.026 qpair failed and we were unable to recover it. 00:26:24.026 [2024-11-26 21:07:14.821992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.026 [2024-11-26 21:07:14.822021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.026 qpair failed and we were unable to recover it. 00:26:24.026 [2024-11-26 21:07:14.822169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.026 [2024-11-26 21:07:14.822199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.026 qpair failed and we were unable to recover it. 00:26:24.026 [2024-11-26 21:07:14.822353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.026 [2024-11-26 21:07:14.822382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.026 qpair failed and we were unable to recover it. 
00:26:24.026 - 00:26:24.029 [2024-11-26 21:07:14.822502 - 21:07:14.841191] the same three-line sequence repeats for every reconnect attempt in this interval:
00:26:24.026 posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.026 nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.026 qpair failed and we were unable to recover it.
00:26:24.029 [2024-11-26 21:07:14.841337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.029 [2024-11-26 21:07:14.841380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.029 qpair failed and we were unable to recover it. 00:26:24.029 [2024-11-26 21:07:14.841505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.029 [2024-11-26 21:07:14.841535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.029 qpair failed and we were unable to recover it. 00:26:24.029 [2024-11-26 21:07:14.841676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.029 [2024-11-26 21:07:14.841710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.029 qpair failed and we were unable to recover it. 00:26:24.029 [2024-11-26 21:07:14.841886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.029 [2024-11-26 21:07:14.841917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.029 qpair failed and we were unable to recover it. 00:26:24.029 [2024-11-26 21:07:14.842049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.029 [2024-11-26 21:07:14.842079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.029 qpair failed and we were unable to recover it. 
00:26:24.029 [2024-11-26 21:07:14.842225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.029 [2024-11-26 21:07:14.842252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.029 qpair failed and we were unable to recover it. 00:26:24.029 [2024-11-26 21:07:14.842358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.029 [2024-11-26 21:07:14.842385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.029 qpair failed and we were unable to recover it. 00:26:24.029 [2024-11-26 21:07:14.842514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.029 [2024-11-26 21:07:14.842543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.029 qpair failed and we were unable to recover it. 00:26:24.029 [2024-11-26 21:07:14.842701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.029 [2024-11-26 21:07:14.842728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.029 qpair failed and we were unable to recover it. 00:26:24.029 [2024-11-26 21:07:14.842891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.029 [2024-11-26 21:07:14.842917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.029 qpair failed and we were unable to recover it. 
00:26:24.029 [2024-11-26 21:07:14.843040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.029 [2024-11-26 21:07:14.843069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.029 qpair failed and we were unable to recover it. 00:26:24.029 [2024-11-26 21:07:14.843208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.029 [2024-11-26 21:07:14.843234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.029 qpair failed and we were unable to recover it. 00:26:24.029 [2024-11-26 21:07:14.843337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.029 [2024-11-26 21:07:14.843364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.029 qpair failed and we were unable to recover it. 00:26:24.029 [2024-11-26 21:07:14.843512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.029 [2024-11-26 21:07:14.843542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.029 qpair failed and we were unable to recover it. 00:26:24.029 [2024-11-26 21:07:14.843720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.029 [2024-11-26 21:07:14.843765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.029 qpair failed and we were unable to recover it. 
00:26:24.029 [2024-11-26 21:07:14.843879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.029 [2024-11-26 21:07:14.843905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.029 qpair failed and we were unable to recover it. 00:26:24.029 [2024-11-26 21:07:14.844042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.029 [2024-11-26 21:07:14.844069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.029 qpair failed and we were unable to recover it. 00:26:24.029 [2024-11-26 21:07:14.844227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.029 [2024-11-26 21:07:14.844253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.029 qpair failed and we were unable to recover it. 00:26:24.029 [2024-11-26 21:07:14.844351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.029 [2024-11-26 21:07:14.844378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.029 qpair failed and we were unable to recover it. 00:26:24.029 [2024-11-26 21:07:14.844499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.029 [2024-11-26 21:07:14.844539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.029 qpair failed and we were unable to recover it. 
00:26:24.029 [2024-11-26 21:07:14.844663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.029 [2024-11-26 21:07:14.844702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.029 qpair failed and we were unable to recover it. 00:26:24.029 [2024-11-26 21:07:14.844821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.029 [2024-11-26 21:07:14.844848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.029 qpair failed and we were unable to recover it. 00:26:24.029 [2024-11-26 21:07:14.844998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.029 [2024-11-26 21:07:14.845028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.029 qpair failed and we were unable to recover it. 00:26:24.029 [2024-11-26 21:07:14.845190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.029 [2024-11-26 21:07:14.845216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.029 qpair failed and we were unable to recover it. 00:26:24.029 [2024-11-26 21:07:14.845330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.029 [2024-11-26 21:07:14.845356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.029 qpair failed and we were unable to recover it. 
00:26:24.029 [2024-11-26 21:07:14.845516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.029 [2024-11-26 21:07:14.845546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.029 qpair failed and we were unable to recover it. 00:26:24.029 [2024-11-26 21:07:14.845727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.029 [2024-11-26 21:07:14.845755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.029 qpair failed and we were unable to recover it. 00:26:24.029 [2024-11-26 21:07:14.845890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.029 [2024-11-26 21:07:14.845917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.029 qpair failed and we were unable to recover it. 00:26:24.029 [2024-11-26 21:07:14.846085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.029 [2024-11-26 21:07:14.846131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.029 qpair failed and we were unable to recover it. 00:26:24.029 [2024-11-26 21:07:14.846269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.029 [2024-11-26 21:07:14.846296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.029 qpair failed and we were unable to recover it. 
00:26:24.029 [2024-11-26 21:07:14.846460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.029 [2024-11-26 21:07:14.846492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.029 qpair failed and we were unable to recover it. 00:26:24.029 [2024-11-26 21:07:14.846632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.029 [2024-11-26 21:07:14.846663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.029 qpair failed and we were unable to recover it. 00:26:24.029 [2024-11-26 21:07:14.846810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.029 [2024-11-26 21:07:14.846837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.029 qpair failed and we were unable to recover it. 00:26:24.029 [2024-11-26 21:07:14.846973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.029 [2024-11-26 21:07:14.847000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.029 qpair failed and we were unable to recover it. 00:26:24.029 [2024-11-26 21:07:14.847161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.029 [2024-11-26 21:07:14.847190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.029 qpair failed and we were unable to recover it. 
00:26:24.029 [2024-11-26 21:07:14.847324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.029 [2024-11-26 21:07:14.847350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.029 qpair failed and we were unable to recover it. 00:26:24.029 [2024-11-26 21:07:14.847512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.029 [2024-11-26 21:07:14.847556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.029 qpair failed and we were unable to recover it. 00:26:24.029 [2024-11-26 21:07:14.847691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.029 [2024-11-26 21:07:14.847723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.029 qpair failed and we were unable to recover it. 00:26:24.029 [2024-11-26 21:07:14.847876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.029 [2024-11-26 21:07:14.847903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.029 qpair failed and we were unable to recover it. 00:26:24.029 [2024-11-26 21:07:14.848016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.029 [2024-11-26 21:07:14.848060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.030 qpair failed and we were unable to recover it. 
00:26:24.030 [2024-11-26 21:07:14.848248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.030 [2024-11-26 21:07:14.848276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.030 qpair failed and we were unable to recover it. 00:26:24.030 [2024-11-26 21:07:14.848408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.030 [2024-11-26 21:07:14.848435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.030 qpair failed and we were unable to recover it. 00:26:24.030 [2024-11-26 21:07:14.848572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.030 [2024-11-26 21:07:14.848617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.030 qpair failed and we were unable to recover it. 00:26:24.030 [2024-11-26 21:07:14.848781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.030 [2024-11-26 21:07:14.848809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.030 qpair failed and we were unable to recover it. 00:26:24.030 [2024-11-26 21:07:14.848927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.030 [2024-11-26 21:07:14.848954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.030 qpair failed and we were unable to recover it. 
00:26:24.030 [2024-11-26 21:07:14.849071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.030 [2024-11-26 21:07:14.849097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.030 qpair failed and we were unable to recover it. 00:26:24.030 [2024-11-26 21:07:14.849236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.030 [2024-11-26 21:07:14.849265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.030 qpair failed and we were unable to recover it. 00:26:24.030 [2024-11-26 21:07:14.849421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.030 [2024-11-26 21:07:14.849448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.030 qpair failed and we were unable to recover it. 00:26:24.030 [2024-11-26 21:07:14.849557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.030 [2024-11-26 21:07:14.849600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.030 qpair failed and we were unable to recover it. 00:26:24.030 [2024-11-26 21:07:14.849771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.030 [2024-11-26 21:07:14.849799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.030 qpair failed and we were unable to recover it. 
00:26:24.030 [2024-11-26 21:07:14.849906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.030 [2024-11-26 21:07:14.849933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.030 qpair failed and we were unable to recover it. 00:26:24.030 [2024-11-26 21:07:14.850038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.030 [2024-11-26 21:07:14.850066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.030 qpair failed and we were unable to recover it. 00:26:24.030 [2024-11-26 21:07:14.850251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.030 [2024-11-26 21:07:14.850280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.030 qpair failed and we were unable to recover it. 00:26:24.030 [2024-11-26 21:07:14.850439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.030 [2024-11-26 21:07:14.850466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.030 qpair failed and we were unable to recover it. 00:26:24.030 [2024-11-26 21:07:14.850598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.030 [2024-11-26 21:07:14.850625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.030 qpair failed and we were unable to recover it. 
00:26:24.030 [2024-11-26 21:07:14.850793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.030 [2024-11-26 21:07:14.850822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.030 qpair failed and we were unable to recover it. 00:26:24.030 [2024-11-26 21:07:14.850965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.030 [2024-11-26 21:07:14.850992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.030 qpair failed and we were unable to recover it. 00:26:24.030 [2024-11-26 21:07:14.851100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.030 [2024-11-26 21:07:14.851149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.030 qpair failed and we were unable to recover it. 00:26:24.030 [2024-11-26 21:07:14.851341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.030 [2024-11-26 21:07:14.851368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.030 qpair failed and we were unable to recover it. 00:26:24.030 [2024-11-26 21:07:14.851501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.030 [2024-11-26 21:07:14.851528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.030 qpair failed and we were unable to recover it. 
00:26:24.030 [2024-11-26 21:07:14.851650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.030 [2024-11-26 21:07:14.851703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.030 qpair failed and we were unable to recover it. 00:26:24.030 [2024-11-26 21:07:14.851831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.030 [2024-11-26 21:07:14.851857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.030 qpair failed and we were unable to recover it. 00:26:24.030 [2024-11-26 21:07:14.851995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.030 [2024-11-26 21:07:14.852022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.030 qpair failed and we were unable to recover it. 00:26:24.030 [2024-11-26 21:07:14.852135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.030 [2024-11-26 21:07:14.852162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.030 qpair failed and we were unable to recover it. 00:26:24.030 [2024-11-26 21:07:14.852289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.030 [2024-11-26 21:07:14.852315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.030 qpair failed and we were unable to recover it. 
00:26:24.030 [2024-11-26 21:07:14.852478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.030 [2024-11-26 21:07:14.852504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.030 qpair failed and we were unable to recover it. 00:26:24.030 [2024-11-26 21:07:14.852667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.030 [2024-11-26 21:07:14.852705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.030 qpair failed and we were unable to recover it. 00:26:24.030 [2024-11-26 21:07:14.852832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.030 [2024-11-26 21:07:14.852859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.030 qpair failed and we were unable to recover it. 00:26:24.030 [2024-11-26 21:07:14.852996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.030 [2024-11-26 21:07:14.853022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.030 qpair failed and we were unable to recover it. 00:26:24.030 [2024-11-26 21:07:14.853137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.030 [2024-11-26 21:07:14.853180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.030 qpair failed and we were unable to recover it. 
00:26:24.030 [2024-11-26 21:07:14.853290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.030 [2024-11-26 21:07:14.853319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.030 qpair failed and we were unable to recover it. 00:26:24.030 [2024-11-26 21:07:14.853480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.030 [2024-11-26 21:07:14.853507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.030 qpair failed and we were unable to recover it. 00:26:24.030 [2024-11-26 21:07:14.853618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.030 [2024-11-26 21:07:14.853644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.030 qpair failed and we were unable to recover it. 00:26:24.030 [2024-11-26 21:07:14.853768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.030 [2024-11-26 21:07:14.853796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.030 qpair failed and we were unable to recover it. 00:26:24.030 [2024-11-26 21:07:14.853911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.030 [2024-11-26 21:07:14.853937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.030 qpair failed and we were unable to recover it. 
00:26:24.030 [2024-11-26 21:07:14.854071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.030 [2024-11-26 21:07:14.854114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.030 qpair failed and we were unable to recover it. 00:26:24.030 [2024-11-26 21:07:14.854243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.030 [2024-11-26 21:07:14.854273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.030 qpair failed and we were unable to recover it. 00:26:24.030 [2024-11-26 21:07:14.854430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.030 [2024-11-26 21:07:14.854457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.030 qpair failed and we were unable to recover it. 00:26:24.030 [2024-11-26 21:07:14.854562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.030 [2024-11-26 21:07:14.854589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.030 qpair failed and we were unable to recover it. 00:26:24.030 [2024-11-26 21:07:14.854777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.030 [2024-11-26 21:07:14.854817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.030 qpair failed and we were unable to recover it. 
00:26:24.030 [2024-11-26 21:07:14.854961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.030 [2024-11-26 21:07:14.854990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.030 qpair failed and we were unable to recover it. 00:26:24.031 [2024-11-26 21:07:14.855119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.031 [2024-11-26 21:07:14.855165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.031 qpair failed and we were unable to recover it. 00:26:24.031 [2024-11-26 21:07:14.855304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.031 [2024-11-26 21:07:14.855334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.031 qpair failed and we were unable to recover it. 00:26:24.031 [2024-11-26 21:07:14.855495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.031 [2024-11-26 21:07:14.855524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.031 qpair failed and we were unable to recover it. 00:26:24.031 [2024-11-26 21:07:14.855661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.031 [2024-11-26 21:07:14.855714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.031 qpair failed and we were unable to recover it. 
00:26:24.031 [2024-11-26 21:07:14.855842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.031 [2024-11-26 21:07:14.855872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.031 qpair failed and we were unable to recover it. 00:26:24.031 [2024-11-26 21:07:14.856032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.031 [2024-11-26 21:07:14.856059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.031 qpair failed and we were unable to recover it. 00:26:24.031 [2024-11-26 21:07:14.856170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.031 [2024-11-26 21:07:14.856196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.031 qpair failed and we were unable to recover it. 00:26:24.031 [2024-11-26 21:07:14.856325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.031 [2024-11-26 21:07:14.856354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.031 qpair failed and we were unable to recover it. 00:26:24.031 [2024-11-26 21:07:14.856479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.031 [2024-11-26 21:07:14.856507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.031 qpair failed and we were unable to recover it. 
00:26:24.031 [2024-11-26 21:07:14.856671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.031 [2024-11-26 21:07:14.856722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.031 qpair failed and we were unable to recover it. 00:26:24.031 [2024-11-26 21:07:14.856878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.031 [2024-11-26 21:07:14.856906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.031 qpair failed and we were unable to recover it. 00:26:24.031 [2024-11-26 21:07:14.857010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.031 [2024-11-26 21:07:14.857037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.031 qpair failed and we were unable to recover it. 00:26:24.031 [2024-11-26 21:07:14.857174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.031 [2024-11-26 21:07:14.857201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.031 qpair failed and we were unable to recover it. 00:26:24.031 [2024-11-26 21:07:14.857352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.031 [2024-11-26 21:07:14.857381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.031 qpair failed and we were unable to recover it. 
00:26:24.031 [2024-11-26 21:07:14.857600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.031 [2024-11-26 21:07:14.857627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.031 qpair failed and we were unable to recover it. 00:26:24.031 [2024-11-26 21:07:14.857785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.031 [2024-11-26 21:07:14.857813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.031 qpair failed and we were unable to recover it. 00:26:24.031 [2024-11-26 21:07:14.857951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.031 [2024-11-26 21:07:14.857997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.031 qpair failed and we were unable to recover it. 00:26:24.031 [2024-11-26 21:07:14.858141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.031 [2024-11-26 21:07:14.858168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.031 qpair failed and we were unable to recover it. 00:26:24.031 [2024-11-26 21:07:14.858315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.031 [2024-11-26 21:07:14.858359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.031 qpair failed and we were unable to recover it. 
00:26:24.031 [2024-11-26 21:07:14.858499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.031 [2024-11-26 21:07:14.858529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.031 qpair failed and we were unable to recover it. 00:26:24.031 [2024-11-26 21:07:14.858673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.031 [2024-11-26 21:07:14.858711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.031 qpair failed and we were unable to recover it. 00:26:24.031 [2024-11-26 21:07:14.858823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.031 [2024-11-26 21:07:14.858850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.031 qpair failed and we were unable to recover it. 00:26:24.031 [2024-11-26 21:07:14.859035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.031 [2024-11-26 21:07:14.859065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.031 qpair failed and we were unable to recover it. 00:26:24.031 [2024-11-26 21:07:14.859198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.031 [2024-11-26 21:07:14.859224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.031 qpair failed and we were unable to recover it. 
00:26:24.031 [2024-11-26 21:07:14.859403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.031 [2024-11-26 21:07:14.859433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.031 qpair failed and we were unable to recover it. 00:26:24.031 [2024-11-26 21:07:14.859546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.031 [2024-11-26 21:07:14.859577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.031 qpair failed and we were unable to recover it. 00:26:24.031 [2024-11-26 21:07:14.859714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.031 [2024-11-26 21:07:14.859741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.031 qpair failed and we were unable to recover it. 00:26:24.031 [2024-11-26 21:07:14.859844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.031 [2024-11-26 21:07:14.859871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.031 qpair failed and we were unable to recover it. 00:26:24.031 [2024-11-26 21:07:14.859997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.031 [2024-11-26 21:07:14.860027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.031 qpair failed and we were unable to recover it. 
00:26:24.031 [2024-11-26 21:07:14.860190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.031 [2024-11-26 21:07:14.860216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.031 qpair failed and we were unable to recover it. 00:26:24.031 [2024-11-26 21:07:14.860322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.031 [2024-11-26 21:07:14.860353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.031 qpair failed and we were unable to recover it. 00:26:24.031 [2024-11-26 21:07:14.860448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.031 [2024-11-26 21:07:14.860475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.031 qpair failed and we were unable to recover it. 00:26:24.031 [2024-11-26 21:07:14.860605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.031 [2024-11-26 21:07:14.860634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.031 qpair failed and we were unable to recover it. 00:26:24.031 [2024-11-26 21:07:14.860769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.031 [2024-11-26 21:07:14.860797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.031 qpair failed and we were unable to recover it. 
00:26:24.031 [2024-11-26 21:07:14.860900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.031 [2024-11-26 21:07:14.860927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.031 qpair failed and we were unable to recover it. 00:26:24.031 [2024-11-26 21:07:14.861055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.031 [2024-11-26 21:07:14.861082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.031 qpair failed and we were unable to recover it. 00:26:24.031 [2024-11-26 21:07:14.861176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.031 [2024-11-26 21:07:14.861202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.031 qpair failed and we were unable to recover it. 00:26:24.031 [2024-11-26 21:07:14.861342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.031 [2024-11-26 21:07:14.861374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.031 qpair failed and we were unable to recover it. 00:26:24.031 [2024-11-26 21:07:14.861523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.031 [2024-11-26 21:07:14.861550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.031 qpair failed and we were unable to recover it. 
00:26:24.031 [2024-11-26 21:07:14.861692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.031 [2024-11-26 21:07:14.861719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.031 qpair failed and we were unable to recover it. 00:26:24.031 [2024-11-26 21:07:14.861855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.031 [2024-11-26 21:07:14.861882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.032 qpair failed and we were unable to recover it. 00:26:24.032 [2024-11-26 21:07:14.861993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.032 [2024-11-26 21:07:14.862020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.032 qpair failed and we were unable to recover it. 00:26:24.032 [2024-11-26 21:07:14.862178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.032 [2024-11-26 21:07:14.862222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.032 qpair failed and we were unable to recover it. 00:26:24.032 [2024-11-26 21:07:14.862371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.032 [2024-11-26 21:07:14.862401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.032 qpair failed and we were unable to recover it. 
00:26:24.032 [2024-11-26 21:07:14.862557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.032 [2024-11-26 21:07:14.862584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.032 qpair failed and we were unable to recover it. 00:26:24.032 [2024-11-26 21:07:14.862699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.032 [2024-11-26 21:07:14.862727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.032 qpair failed and we were unable to recover it. 00:26:24.032 [2024-11-26 21:07:14.862859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.032 [2024-11-26 21:07:14.862886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.032 qpair failed and we were unable to recover it. 00:26:24.032 [2024-11-26 21:07:14.862991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.032 [2024-11-26 21:07:14.863018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.032 qpair failed and we were unable to recover it. 00:26:24.032 [2024-11-26 21:07:14.863153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.032 [2024-11-26 21:07:14.863197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.032 qpair failed and we were unable to recover it. 
00:26:24.032 [2024-11-26 21:07:14.863322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.032 [2024-11-26 21:07:14.863353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.032 qpair failed and we were unable to recover it. 00:26:24.032 [2024-11-26 21:07:14.863515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.032 [2024-11-26 21:07:14.863542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.032 qpair failed and we were unable to recover it. 00:26:24.032 [2024-11-26 21:07:14.863719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.032 [2024-11-26 21:07:14.863749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.032 qpair failed and we were unable to recover it. 00:26:24.032 [2024-11-26 21:07:14.863876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.032 [2024-11-26 21:07:14.863902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.032 qpair failed and we were unable to recover it. 00:26:24.032 [2024-11-26 21:07:14.864061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.032 [2024-11-26 21:07:14.864087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.032 qpair failed and we were unable to recover it. 
00:26:24.032 [2024-11-26 21:07:14.864190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.032 [2024-11-26 21:07:14.864233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.032 qpair failed and we were unable to recover it. 00:26:24.032 [2024-11-26 21:07:14.864385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.032 [2024-11-26 21:07:14.864414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.032 qpair failed and we were unable to recover it. 00:26:24.032 [2024-11-26 21:07:14.864566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.032 [2024-11-26 21:07:14.864593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.032 qpair failed and we were unable to recover it. 00:26:24.032 [2024-11-26 21:07:14.864727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.032 [2024-11-26 21:07:14.864759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.032 qpair failed and we were unable to recover it. 00:26:24.032 [2024-11-26 21:07:14.864884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.032 [2024-11-26 21:07:14.864911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.032 qpair failed and we were unable to recover it. 
00:26:24.032 [2024-11-26 21:07:14.865024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.032 [2024-11-26 21:07:14.865052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.032 qpair failed and we were unable to recover it. 00:26:24.032 [2024-11-26 21:07:14.865213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.032 [2024-11-26 21:07:14.865239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.032 qpair failed and we were unable to recover it. 00:26:24.032 [2024-11-26 21:07:14.865407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.032 [2024-11-26 21:07:14.865439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.032 qpair failed and we were unable to recover it. 00:26:24.032 [2024-11-26 21:07:14.865593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.032 [2024-11-26 21:07:14.865623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.032 qpair failed and we were unable to recover it. 00:26:24.032 [2024-11-26 21:07:14.865764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.032 [2024-11-26 21:07:14.865791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.032 qpair failed and we were unable to recover it. 
00:26:24.032 [2024-11-26 21:07:14.865900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.032 [2024-11-26 21:07:14.865927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.032 qpair failed and we were unable to recover it. 00:26:24.032 [2024-11-26 21:07:14.866040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.032 [2024-11-26 21:07:14.866066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.032 qpair failed and we were unable to recover it. 00:26:24.032 [2024-11-26 21:07:14.866194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.032 [2024-11-26 21:07:14.866236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.032 qpair failed and we were unable to recover it. 00:26:24.032 [2024-11-26 21:07:14.866410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.032 [2024-11-26 21:07:14.866443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.032 qpair failed and we were unable to recover it. 00:26:24.032 [2024-11-26 21:07:14.866599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.032 [2024-11-26 21:07:14.866626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.032 qpair failed and we were unable to recover it. 
00:26:24.032 [2024-11-26 21:07:14.866742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.032 [2024-11-26 21:07:14.866770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.032 qpair failed and we were unable to recover it. 00:26:24.032 [2024-11-26 21:07:14.866907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.032 [2024-11-26 21:07:14.866937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.032 qpair failed and we were unable to recover it. 00:26:24.032 [2024-11-26 21:07:14.867107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.032 [2024-11-26 21:07:14.867135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.032 qpair failed and we were unable to recover it. 00:26:24.032 [2024-11-26 21:07:14.867292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.032 [2024-11-26 21:07:14.867322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.032 qpair failed and we were unable to recover it. 00:26:24.032 [2024-11-26 21:07:14.867452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.032 [2024-11-26 21:07:14.867483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.032 qpair failed and we were unable to recover it. 
00:26:24.032 [2024-11-26 21:07:14.867644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.032 [2024-11-26 21:07:14.867670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.032 qpair failed and we were unable to recover it. 00:26:24.032 [2024-11-26 21:07:14.867805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.032 [2024-11-26 21:07:14.867831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.032 qpair failed and we were unable to recover it. 00:26:24.032 [2024-11-26 21:07:14.867988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.032 [2024-11-26 21:07:14.868018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.032 qpair failed and we were unable to recover it. 00:26:24.033 [2024-11-26 21:07:14.868177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.033 [2024-11-26 21:07:14.868204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.033 qpair failed and we were unable to recover it. 00:26:24.033 [2024-11-26 21:07:14.868306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.033 [2024-11-26 21:07:14.868333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.033 qpair failed and we were unable to recover it. 
00:26:24.033 [2024-11-26 21:07:14.868491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.033 [2024-11-26 21:07:14.868520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.033 qpair failed and we were unable to recover it. 00:26:24.033 [2024-11-26 21:07:14.868680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.033 [2024-11-26 21:07:14.868715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.033 qpair failed and we were unable to recover it. 00:26:24.033 [2024-11-26 21:07:14.868814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.033 [2024-11-26 21:07:14.868841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.033 qpair failed and we were unable to recover it. 00:26:24.033 [2024-11-26 21:07:14.868954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.033 [2024-11-26 21:07:14.868982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.033 qpair failed and we were unable to recover it. 00:26:24.033 [2024-11-26 21:07:14.869114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.033 [2024-11-26 21:07:14.869141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.033 qpair failed and we were unable to recover it. 
00:26:24.033 [2024-11-26 21:07:14.869248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.033 [2024-11-26 21:07:14.869279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.033 qpair failed and we were unable to recover it. 00:26:24.033 [2024-11-26 21:07:14.869415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.033 [2024-11-26 21:07:14.869444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.033 qpair failed and we were unable to recover it. 00:26:24.033 [2024-11-26 21:07:14.869603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.033 [2024-11-26 21:07:14.869631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.033 qpair failed and we were unable to recover it. 00:26:24.033 [2024-11-26 21:07:14.869753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.033 [2024-11-26 21:07:14.869780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.033 qpair failed and we were unable to recover it. 00:26:24.033 [2024-11-26 21:07:14.869886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.033 [2024-11-26 21:07:14.869912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.033 qpair failed and we were unable to recover it. 
00:26:24.033 [2024-11-26 21:07:14.870044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.033 [2024-11-26 21:07:14.870070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.033 qpair failed and we were unable to recover it. 00:26:24.033 [2024-11-26 21:07:14.870218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.033 [2024-11-26 21:07:14.870245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.033 qpair failed and we were unable to recover it. 00:26:24.033 [2024-11-26 21:07:14.870422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.033 [2024-11-26 21:07:14.870452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.033 qpair failed and we were unable to recover it. 00:26:24.033 [2024-11-26 21:07:14.870609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.033 [2024-11-26 21:07:14.870635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.033 qpair failed and we were unable to recover it. 00:26:24.033 [2024-11-26 21:07:14.870741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.033 [2024-11-26 21:07:14.870770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.033 qpair failed and we were unable to recover it. 
00:26:24.035 [... previous record repeated for timestamps 21:07:14.870877 through 21:07:14.889547: posix.c:1054:posix_sock_create connect() failed, errno = 111, followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7feef8000b90 or tqpair=0x637fa0 with addr=10.0.0.2, port=4420; each attempt ends with "qpair failed and we were unable to recover it." ...]
00:26:24.035 [2024-11-26 21:07:14.889658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.035 [2024-11-26 21:07:14.889690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.035 qpair failed and we were unable to recover it. 00:26:24.035 [2024-11-26 21:07:14.889867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.035 [2024-11-26 21:07:14.889895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.035 qpair failed and we were unable to recover it. 00:26:24.035 [2024-11-26 21:07:14.890031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.035 [2024-11-26 21:07:14.890059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.035 qpair failed and we were unable to recover it. 00:26:24.035 [2024-11-26 21:07:14.890192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.035 [2024-11-26 21:07:14.890236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.035 qpair failed and we were unable to recover it. 00:26:24.035 [2024-11-26 21:07:14.890388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.035 [2024-11-26 21:07:14.890415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.035 qpair failed and we were unable to recover it. 
00:26:24.036 [2024-11-26 21:07:14.890532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.036 [2024-11-26 21:07:14.890558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.036 qpair failed and we were unable to recover it. 00:26:24.036 [2024-11-26 21:07:14.890661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.036 [2024-11-26 21:07:14.890696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.036 qpair failed and we were unable to recover it. 00:26:24.036 [2024-11-26 21:07:14.890808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.036 [2024-11-26 21:07:14.890834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.036 qpair failed and we were unable to recover it. 00:26:24.036 [2024-11-26 21:07:14.890962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.036 [2024-11-26 21:07:14.890989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.036 qpair failed and we were unable to recover it. 00:26:24.036 [2024-11-26 21:07:14.891125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.036 [2024-11-26 21:07:14.891152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.036 qpair failed and we were unable to recover it. 
00:26:24.036 [2024-11-26 21:07:14.891308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.036 [2024-11-26 21:07:14.891339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.036 qpair failed and we were unable to recover it. 00:26:24.036 [2024-11-26 21:07:14.891497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.036 [2024-11-26 21:07:14.891524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.036 qpair failed and we were unable to recover it. 00:26:24.036 [2024-11-26 21:07:14.891636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.036 [2024-11-26 21:07:14.891662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.036 qpair failed and we were unable to recover it. 00:26:24.036 [2024-11-26 21:07:14.891777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.036 [2024-11-26 21:07:14.891805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.036 qpair failed and we were unable to recover it. 00:26:24.036 [2024-11-26 21:07:14.891970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.036 [2024-11-26 21:07:14.891997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.036 qpair failed and we were unable to recover it. 
00:26:24.036 [2024-11-26 21:07:14.892156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.036 [2024-11-26 21:07:14.892186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.036 qpair failed and we were unable to recover it. 00:26:24.036 [2024-11-26 21:07:14.892306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.036 [2024-11-26 21:07:14.892337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.036 qpair failed and we were unable to recover it. 00:26:24.036 [2024-11-26 21:07:14.892544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.036 [2024-11-26 21:07:14.892574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.036 qpair failed and we were unable to recover it. 00:26:24.036 [2024-11-26 21:07:14.892722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.036 [2024-11-26 21:07:14.892750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.036 qpair failed and we were unable to recover it. 00:26:24.036 [2024-11-26 21:07:14.892863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.036 [2024-11-26 21:07:14.892891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.036 qpair failed and we were unable to recover it. 
00:26:24.036 [2024-11-26 21:07:14.893025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.036 [2024-11-26 21:07:14.893052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.036 qpair failed and we were unable to recover it. 00:26:24.036 [2024-11-26 21:07:14.893149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.036 [2024-11-26 21:07:14.893176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.036 qpair failed and we were unable to recover it. 00:26:24.036 [2024-11-26 21:07:14.893321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.036 [2024-11-26 21:07:14.893351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.036 qpair failed and we were unable to recover it. 00:26:24.036 [2024-11-26 21:07:14.893504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.036 [2024-11-26 21:07:14.893531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.036 qpair failed and we were unable to recover it. 00:26:24.036 [2024-11-26 21:07:14.893664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.036 [2024-11-26 21:07:14.893719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.036 qpair failed and we were unable to recover it. 
00:26:24.036 [2024-11-26 21:07:14.893852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.036 [2024-11-26 21:07:14.893879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.036 qpair failed and we were unable to recover it. 00:26:24.036 [2024-11-26 21:07:14.894013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.036 [2024-11-26 21:07:14.894040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.036 qpair failed and we were unable to recover it. 00:26:24.036 [2024-11-26 21:07:14.894173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.036 [2024-11-26 21:07:14.894217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.036 qpair failed and we were unable to recover it. 00:26:24.036 [2024-11-26 21:07:14.894366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.036 [2024-11-26 21:07:14.894401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.036 qpair failed and we were unable to recover it. 00:26:24.036 [2024-11-26 21:07:14.894565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.036 [2024-11-26 21:07:14.894591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.036 qpair failed and we were unable to recover it. 
00:26:24.036 [2024-11-26 21:07:14.894731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.036 [2024-11-26 21:07:14.894758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.036 qpair failed and we were unable to recover it. 00:26:24.036 [2024-11-26 21:07:14.894901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.036 [2024-11-26 21:07:14.894928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.036 qpair failed and we were unable to recover it. 00:26:24.036 [2024-11-26 21:07:14.895039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.036 [2024-11-26 21:07:14.895066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.036 qpair failed and we were unable to recover it. 00:26:24.036 [2024-11-26 21:07:14.895177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.036 [2024-11-26 21:07:14.895204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.036 qpair failed and we were unable to recover it. 00:26:24.036 [2024-11-26 21:07:14.895328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.036 [2024-11-26 21:07:14.895355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.036 qpair failed and we were unable to recover it. 
00:26:24.036 [2024-11-26 21:07:14.895486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.036 [2024-11-26 21:07:14.895512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.036 qpair failed and we were unable to recover it. 00:26:24.036 [2024-11-26 21:07:14.895614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.036 [2024-11-26 21:07:14.895640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.036 qpair failed and we were unable to recover it. 00:26:24.036 [2024-11-26 21:07:14.895754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.036 [2024-11-26 21:07:14.895781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.036 qpair failed and we were unable to recover it. 00:26:24.036 [2024-11-26 21:07:14.895894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.036 [2024-11-26 21:07:14.895921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.036 qpair failed and we were unable to recover it. 00:26:24.036 [2024-11-26 21:07:14.896031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.036 [2024-11-26 21:07:14.896057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.036 qpair failed and we were unable to recover it. 
00:26:24.036 [2024-11-26 21:07:14.896212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.036 [2024-11-26 21:07:14.896242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.036 qpair failed and we were unable to recover it. 00:26:24.036 [2024-11-26 21:07:14.896377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.036 [2024-11-26 21:07:14.896403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.036 qpair failed and we were unable to recover it. 00:26:24.036 [2024-11-26 21:07:14.896518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.036 [2024-11-26 21:07:14.896545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.036 qpair failed and we were unable to recover it. 00:26:24.036 [2024-11-26 21:07:14.896675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.036 [2024-11-26 21:07:14.896712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.036 qpair failed and we were unable to recover it. 00:26:24.036 [2024-11-26 21:07:14.896839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.036 [2024-11-26 21:07:14.896865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.037 qpair failed and we were unable to recover it. 
00:26:24.037 [2024-11-26 21:07:14.896976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.037 [2024-11-26 21:07:14.897003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.037 qpair failed and we were unable to recover it. 00:26:24.037 [2024-11-26 21:07:14.897145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.037 [2024-11-26 21:07:14.897174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.037 qpair failed and we were unable to recover it. 00:26:24.037 [2024-11-26 21:07:14.897326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.037 [2024-11-26 21:07:14.897352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.037 qpair failed and we were unable to recover it. 00:26:24.037 [2024-11-26 21:07:14.897470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.037 [2024-11-26 21:07:14.897497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.037 qpair failed and we were unable to recover it. 00:26:24.037 [2024-11-26 21:07:14.897662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.037 [2024-11-26 21:07:14.897711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.037 qpair failed and we were unable to recover it. 
00:26:24.037 [2024-11-26 21:07:14.897848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.037 [2024-11-26 21:07:14.897875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.037 qpair failed and we were unable to recover it. 00:26:24.037 [2024-11-26 21:07:14.898015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.037 [2024-11-26 21:07:14.898042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.037 qpair failed and we were unable to recover it. 00:26:24.037 [2024-11-26 21:07:14.898153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.037 [2024-11-26 21:07:14.898180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.037 qpair failed and we were unable to recover it. 00:26:24.037 [2024-11-26 21:07:14.898310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.037 [2024-11-26 21:07:14.898337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.037 qpair failed and we were unable to recover it. 00:26:24.037 [2024-11-26 21:07:14.898442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.037 [2024-11-26 21:07:14.898469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.037 qpair failed and we were unable to recover it. 
00:26:24.037 [2024-11-26 21:07:14.898599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.037 [2024-11-26 21:07:14.898633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.037 qpair failed and we were unable to recover it. 00:26:24.037 [2024-11-26 21:07:14.898762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.037 [2024-11-26 21:07:14.898790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.037 qpair failed and we were unable to recover it. 00:26:24.037 [2024-11-26 21:07:14.898920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.037 [2024-11-26 21:07:14.898946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.037 qpair failed and we were unable to recover it. 00:26:24.037 [2024-11-26 21:07:14.899119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.037 [2024-11-26 21:07:14.899145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.037 qpair failed and we were unable to recover it. 00:26:24.037 [2024-11-26 21:07:14.899280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.037 [2024-11-26 21:07:14.899307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.037 qpair failed and we were unable to recover it. 
00:26:24.037 [2024-11-26 21:07:14.899415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.037 [2024-11-26 21:07:14.899457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.037 qpair failed and we were unable to recover it. 00:26:24.037 [2024-11-26 21:07:14.899603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.037 [2024-11-26 21:07:14.899632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.037 qpair failed and we were unable to recover it. 00:26:24.037 [2024-11-26 21:07:14.899765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.037 [2024-11-26 21:07:14.899792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.037 qpair failed and we were unable to recover it. 00:26:24.037 [2024-11-26 21:07:14.899921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.322 [2024-11-26 21:07:14.899948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.322 qpair failed and we were unable to recover it. 00:26:24.322 [2024-11-26 21:07:14.900142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.322 [2024-11-26 21:07:14.900171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.322 qpair failed and we were unable to recover it. 
00:26:24.322 [2024-11-26 21:07:14.900293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.322 [2024-11-26 21:07:14.900320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.322 qpair failed and we were unable to recover it. 00:26:24.322 [2024-11-26 21:07:14.900456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.322 [2024-11-26 21:07:14.900482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.322 qpair failed and we were unable to recover it. 00:26:24.322 [2024-11-26 21:07:14.900579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.322 [2024-11-26 21:07:14.900606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.322 qpair failed and we were unable to recover it. 00:26:24.322 [2024-11-26 21:07:14.900714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.322 [2024-11-26 21:07:14.900742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.322 qpair failed and we were unable to recover it. 00:26:24.322 [2024-11-26 21:07:14.900850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.322 [2024-11-26 21:07:14.900877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.322 qpair failed and we were unable to recover it. 
00:26:24.322 [2024-11-26 21:07:14.901036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.322 [2024-11-26 21:07:14.901066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.322 qpair failed and we were unable to recover it. 00:26:24.322 [2024-11-26 21:07:14.901220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.322 [2024-11-26 21:07:14.901247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.322 qpair failed and we were unable to recover it. 00:26:24.322 [2024-11-26 21:07:14.901358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.323 [2024-11-26 21:07:14.901384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.323 qpair failed and we were unable to recover it. 00:26:24.323 [2024-11-26 21:07:14.901527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.323 [2024-11-26 21:07:14.901557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.323 qpair failed and we were unable to recover it. 00:26:24.323 [2024-11-26 21:07:14.901683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.323 [2024-11-26 21:07:14.901717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.323 qpair failed and we were unable to recover it. 
00:26:24.323 [2024-11-26 21:07:14.901844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.323 [2024-11-26 21:07:14.901871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.323 qpair failed and we were unable to recover it.
00:26:24.323 [2024-11-26 21:07:14.901996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.323 [2024-11-26 21:07:14.902026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.323 qpair failed and we were unable to recover it.
00:26:24.323 [2024-11-26 21:07:14.902182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.323 [2024-11-26 21:07:14.902210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.323 qpair failed and we were unable to recover it.
00:26:24.323 [2024-11-26 21:07:14.902318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.323 [2024-11-26 21:07:14.902345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.323 qpair failed and we were unable to recover it.
00:26:24.323 [2024-11-26 21:07:14.902471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.323 [2024-11-26 21:07:14.902500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.323 qpair failed and we were unable to recover it.
00:26:24.323 [2024-11-26 21:07:14.902624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.323 [2024-11-26 21:07:14.902650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.323 qpair failed and we were unable to recover it.
00:26:24.323 [2024-11-26 21:07:14.902774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.323 [2024-11-26 21:07:14.902801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.323 qpair failed and we were unable to recover it.
00:26:24.323 [2024-11-26 21:07:14.902905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.323 [2024-11-26 21:07:14.902931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.323 qpair failed and we were unable to recover it.
00:26:24.323 [2024-11-26 21:07:14.903081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.323 [2024-11-26 21:07:14.903108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.323 qpair failed and we were unable to recover it.
00:26:24.323 [2024-11-26 21:07:14.903231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.323 [2024-11-26 21:07:14.903257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.323 qpair failed and we were unable to recover it.
00:26:24.323 [2024-11-26 21:07:14.903389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.323 [2024-11-26 21:07:14.903418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.323 qpair failed and we were unable to recover it.
00:26:24.323 [2024-11-26 21:07:14.903541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.323 [2024-11-26 21:07:14.903568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.323 qpair failed and we were unable to recover it.
00:26:24.323 [2024-11-26 21:07:14.903695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.323 [2024-11-26 21:07:14.903722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.323 qpair failed and we were unable to recover it.
00:26:24.323 [2024-11-26 21:07:14.903843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.323 [2024-11-26 21:07:14.903870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.323 qpair failed and we were unable to recover it.
00:26:24.323 [2024-11-26 21:07:14.904001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.323 [2024-11-26 21:07:14.904028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.323 qpair failed and we were unable to recover it.
00:26:24.323 [2024-11-26 21:07:14.904159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.323 [2024-11-26 21:07:14.904186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.323 qpair failed and we were unable to recover it.
00:26:24.323 [2024-11-26 21:07:14.904352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.323 [2024-11-26 21:07:14.904381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.323 qpair failed and we were unable to recover it.
00:26:24.323 [2024-11-26 21:07:14.904534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.323 [2024-11-26 21:07:14.904561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.323 qpair failed and we were unable to recover it.
00:26:24.323 [2024-11-26 21:07:14.904703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.323 [2024-11-26 21:07:14.904730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.323 qpair failed and we were unable to recover it.
00:26:24.323 [2024-11-26 21:07:14.904836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.323 [2024-11-26 21:07:14.904862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.323 qpair failed and we were unable to recover it.
00:26:24.323 [2024-11-26 21:07:14.904975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.323 [2024-11-26 21:07:14.905001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.323 qpair failed and we were unable to recover it.
00:26:24.323 [2024-11-26 21:07:14.905115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.323 [2024-11-26 21:07:14.905146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.323 qpair failed and we were unable to recover it.
00:26:24.323 [2024-11-26 21:07:14.905302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.323 [2024-11-26 21:07:14.905332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.323 qpair failed and we were unable to recover it.
00:26:24.323 [2024-11-26 21:07:14.905484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.323 [2024-11-26 21:07:14.905510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.323 qpair failed and we were unable to recover it.
00:26:24.323 [2024-11-26 21:07:14.905620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.323 [2024-11-26 21:07:14.905646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.323 qpair failed and we were unable to recover it.
00:26:24.323 [2024-11-26 21:07:14.905837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.323 [2024-11-26 21:07:14.905877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.323 qpair failed and we were unable to recover it.
00:26:24.323 [2024-11-26 21:07:14.905999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.323 [2024-11-26 21:07:14.906027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.323 qpair failed and we were unable to recover it.
00:26:24.323 [2024-11-26 21:07:14.906169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.323 [2024-11-26 21:07:14.906196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.323 qpair failed and we were unable to recover it.
00:26:24.323 [2024-11-26 21:07:14.906384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.323 [2024-11-26 21:07:14.906414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.323 qpair failed and we were unable to recover it.
00:26:24.323 [2024-11-26 21:07:14.906537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.323 [2024-11-26 21:07:14.906582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.323 qpair failed and we were unable to recover it.
00:26:24.323 [2024-11-26 21:07:14.906749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.323 [2024-11-26 21:07:14.906777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.323 qpair failed and we were unable to recover it.
00:26:24.323 [2024-11-26 21:07:14.906886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.323 [2024-11-26 21:07:14.906913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.323 qpair failed and we were unable to recover it.
00:26:24.323 [2024-11-26 21:07:14.907046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.323 [2024-11-26 21:07:14.907073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.323 qpair failed and we were unable to recover it.
00:26:24.323 [2024-11-26 21:07:14.907221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.323 [2024-11-26 21:07:14.907250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.323 qpair failed and we were unable to recover it.
00:26:24.323 [2024-11-26 21:07:14.907411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.324 [2024-11-26 21:07:14.907441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.324 qpair failed and we were unable to recover it.
00:26:24.324 [2024-11-26 21:07:14.907601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.324 [2024-11-26 21:07:14.907628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.324 qpair failed and we were unable to recover it.
00:26:24.324 [2024-11-26 21:07:14.907786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.324 [2024-11-26 21:07:14.907813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.324 qpair failed and we were unable to recover it.
00:26:24.324 [2024-11-26 21:07:14.907921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.324 [2024-11-26 21:07:14.907948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.324 qpair failed and we were unable to recover it.
00:26:24.324 [2024-11-26 21:07:14.908095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.324 [2024-11-26 21:07:14.908122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.324 qpair failed and we were unable to recover it.
00:26:24.324 [2024-11-26 21:07:14.908252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.324 [2024-11-26 21:07:14.908279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.324 qpair failed and we were unable to recover it.
00:26:24.324 [2024-11-26 21:07:14.908421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.324 [2024-11-26 21:07:14.908450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.324 qpair failed and we were unable to recover it.
00:26:24.324 [2024-11-26 21:07:14.908603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.324 [2024-11-26 21:07:14.908629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.324 qpair failed and we were unable to recover it.
00:26:24.324 [2024-11-26 21:07:14.908772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.324 [2024-11-26 21:07:14.908799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.324 qpair failed and we were unable to recover it.
00:26:24.324 [2024-11-26 21:07:14.908905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.324 [2024-11-26 21:07:14.908934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.324 qpair failed and we were unable to recover it.
00:26:24.324 [2024-11-26 21:07:14.909067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.324 [2024-11-26 21:07:14.909093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.324 qpair failed and we were unable to recover it.
00:26:24.324 [2024-11-26 21:07:14.909234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.324 [2024-11-26 21:07:14.909261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.324 qpair failed and we were unable to recover it.
00:26:24.324 [2024-11-26 21:07:14.909403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.324 [2024-11-26 21:07:14.909434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.324 qpair failed and we were unable to recover it.
00:26:24.324 [2024-11-26 21:07:14.909598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.324 [2024-11-26 21:07:14.909625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.324 qpair failed and we were unable to recover it.
00:26:24.324 [2024-11-26 21:07:14.909763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.324 [2024-11-26 21:07:14.909791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.324 qpair failed and we were unable to recover it.
00:26:24.324 [2024-11-26 21:07:14.909908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.324 [2024-11-26 21:07:14.909934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.324 qpair failed and we were unable to recover it.
00:26:24.324 [2024-11-26 21:07:14.910095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.324 [2024-11-26 21:07:14.910122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.324 qpair failed and we were unable to recover it.
00:26:24.324 [2024-11-26 21:07:14.910229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.324 [2024-11-26 21:07:14.910273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.324 qpair failed and we were unable to recover it.
00:26:24.324 [2024-11-26 21:07:14.910432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.324 [2024-11-26 21:07:14.910459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.324 qpair failed and we were unable to recover it.
00:26:24.324 [2024-11-26 21:07:14.910582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.324 [2024-11-26 21:07:14.910612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.324 qpair failed and we were unable to recover it.
00:26:24.324 [2024-11-26 21:07:14.910778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.324 [2024-11-26 21:07:14.910806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.324 qpair failed and we were unable to recover it.
00:26:24.324 [2024-11-26 21:07:14.910918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.324 [2024-11-26 21:07:14.910944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.324 qpair failed and we were unable to recover it.
00:26:24.324 [2024-11-26 21:07:14.911103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.324 [2024-11-26 21:07:14.911129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.324 qpair failed and we were unable to recover it.
00:26:24.324 [2024-11-26 21:07:14.911258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.324 [2024-11-26 21:07:14.911288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.324 qpair failed and we were unable to recover it.
00:26:24.324 [2024-11-26 21:07:14.911480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.324 [2024-11-26 21:07:14.911510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.324 qpair failed and we were unable to recover it.
00:26:24.324 [2024-11-26 21:07:14.911651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.324 [2024-11-26 21:07:14.911678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.324 qpair failed and we were unable to recover it.
00:26:24.324 [2024-11-26 21:07:14.911790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.324 [2024-11-26 21:07:14.911817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.324 qpair failed and we were unable to recover it.
00:26:24.324 [2024-11-26 21:07:14.911953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.324 [2024-11-26 21:07:14.911980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.324 qpair failed and we were unable to recover it.
00:26:24.324 [2024-11-26 21:07:14.912150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.324 [2024-11-26 21:07:14.912177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.324 qpair failed and we were unable to recover it.
00:26:24.324 [2024-11-26 21:07:14.912317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.324 [2024-11-26 21:07:14.912362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.324 qpair failed and we were unable to recover it.
00:26:24.324 [2024-11-26 21:07:14.912500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.324 [2024-11-26 21:07:14.912531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.324 qpair failed and we were unable to recover it.
00:26:24.324 [2024-11-26 21:07:14.912709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.324 [2024-11-26 21:07:14.912736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.324 qpair failed and we were unable to recover it.
00:26:24.324 [2024-11-26 21:07:14.912858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.324 [2024-11-26 21:07:14.912885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.324 qpair failed and we were unable to recover it.
00:26:24.324 [2024-11-26 21:07:14.912997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.324 [2024-11-26 21:07:14.913024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.324 qpair failed and we were unable to recover it.
00:26:24.324 [2024-11-26 21:07:14.913133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.324 [2024-11-26 21:07:14.913160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.324 qpair failed and we were unable to recover it.
00:26:24.324 [2024-11-26 21:07:14.913319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.324 [2024-11-26 21:07:14.913346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.324 qpair failed and we were unable to recover it.
00:26:24.324 [2024-11-26 21:07:14.913497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.324 [2024-11-26 21:07:14.913528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.324 qpair failed and we were unable to recover it.
00:26:24.324 [2024-11-26 21:07:14.913662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.324 [2024-11-26 21:07:14.913696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.324 qpair failed and we were unable to recover it.
00:26:24.325 [2024-11-26 21:07:14.913836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.325 [2024-11-26 21:07:14.913863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.325 qpair failed and we were unable to recover it.
00:26:24.325 [2024-11-26 21:07:14.913981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.325 [2024-11-26 21:07:14.914008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.325 qpair failed and we were unable to recover it.
00:26:24.325 [2024-11-26 21:07:14.914137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.325 [2024-11-26 21:07:14.914164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.325 qpair failed and we were unable to recover it.
00:26:24.325 [2024-11-26 21:07:14.914282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.325 [2024-11-26 21:07:14.914325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.325 qpair failed and we were unable to recover it.
00:26:24.325 [2024-11-26 21:07:14.914474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.325 [2024-11-26 21:07:14.914503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.325 qpair failed and we were unable to recover it.
00:26:24.325 [2024-11-26 21:07:14.914733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.325 [2024-11-26 21:07:14.914760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.325 qpair failed and we were unable to recover it.
00:26:24.325 [2024-11-26 21:07:14.914877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.325 [2024-11-26 21:07:14.914904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.325 qpair failed and we were unable to recover it.
00:26:24.325 [2024-11-26 21:07:14.915012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.325 [2024-11-26 21:07:14.915056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.325 qpair failed and we were unable to recover it.
00:26:24.325 [2024-11-26 21:07:14.915213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.325 [2024-11-26 21:07:14.915240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.325 qpair failed and we were unable to recover it.
00:26:24.325 [2024-11-26 21:07:14.915378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.325 [2024-11-26 21:07:14.915404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.325 qpair failed and we were unable to recover it.
00:26:24.325 [2024-11-26 21:07:14.915538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.325 [2024-11-26 21:07:14.915565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.325 qpair failed and we were unable to recover it.
00:26:24.325 [2024-11-26 21:07:14.915769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.325 [2024-11-26 21:07:14.915796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.325 qpair failed and we were unable to recover it.
00:26:24.325 [2024-11-26 21:07:14.915915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.325 [2024-11-26 21:07:14.915941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.325 qpair failed and we were unable to recover it.
00:26:24.325 [2024-11-26 21:07:14.916070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.325 [2024-11-26 21:07:14.916099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.325 qpair failed and we were unable to recover it.
00:26:24.325 [2024-11-26 21:07:14.916247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.325 [2024-11-26 21:07:14.916273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.325 qpair failed and we were unable to recover it.
00:26:24.325 [2024-11-26 21:07:14.916415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.325 [2024-11-26 21:07:14.916458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.325 qpair failed and we were unable to recover it.
00:26:24.325 [2024-11-26 21:07:14.916609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.325 [2024-11-26 21:07:14.916643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.325 qpair failed and we were unable to recover it.
00:26:24.325 [2024-11-26 21:07:14.916784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.325 [2024-11-26 21:07:14.916811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.325 qpair failed and we were unable to recover it.
00:26:24.325 [2024-11-26 21:07:14.916920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.325 [2024-11-26 21:07:14.916947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.325 qpair failed and we were unable to recover it.
00:26:24.325 [2024-11-26 21:07:14.917090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.325 [2024-11-26 21:07:14.917119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.325 qpair failed and we were unable to recover it.
00:26:24.325 [2024-11-26 21:07:14.917249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.325 [2024-11-26 21:07:14.917276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.325 qpair failed and we were unable to recover it.
00:26:24.325 [2024-11-26 21:07:14.917382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.325 [2024-11-26 21:07:14.917409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.325 qpair failed and we were unable to recover it.
00:26:24.325 [2024-11-26 21:07:14.917554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.325 [2024-11-26 21:07:14.917594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.325 qpair failed and we were unable to recover it.
00:26:24.325 [2024-11-26 21:07:14.917741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.325 [2024-11-26 21:07:14.917770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.325 qpair failed and we were unable to recover it.
00:26:24.325 [2024-11-26 21:07:14.917878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.325 [2024-11-26 21:07:14.917906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.325 qpair failed and we were unable to recover it.
00:26:24.325 [2024-11-26 21:07:14.918069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.325 [2024-11-26 21:07:14.918099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.325 qpair failed and we were unable to recover it.
00:26:24.325 [2024-11-26 21:07:14.918226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.325 [2024-11-26 21:07:14.918252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.325 qpair failed and we were unable to recover it.
00:26:24.325 [2024-11-26 21:07:14.918363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.325 [2024-11-26 21:07:14.918389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.325 qpair failed and we were unable to recover it.
00:26:24.325 [2024-11-26 21:07:14.918521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.325 [2024-11-26 21:07:14.918548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.325 qpair failed and we were unable to recover it.
00:26:24.325 [2024-11-26 21:07:14.918662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.325 [2024-11-26 21:07:14.918696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.325 qpair failed and we were unable to recover it.
00:26:24.325 [2024-11-26 21:07:14.918844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.325 [2024-11-26 21:07:14.918871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.325 qpair failed and we were unable to recover it.
00:26:24.325 [2024-11-26 21:07:14.918992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.325 [2024-11-26 21:07:14.919021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.325 qpair failed and we were unable to recover it.
00:26:24.325 [2024-11-26 21:07:14.919179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.325 [2024-11-26 21:07:14.919206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.325 qpair failed and we were unable to recover it.
00:26:24.325 [2024-11-26 21:07:14.919321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.325 [2024-11-26 21:07:14.919348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.325 qpair failed and we were unable to recover it.
00:26:24.325 [2024-11-26 21:07:14.919548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.325 [2024-11-26 21:07:14.919577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.325 qpair failed and we were unable to recover it.
00:26:24.325 [2024-11-26 21:07:14.919714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.325 [2024-11-26 21:07:14.919742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.325 qpair failed and we were unable to recover it.
00:26:24.325 [2024-11-26 21:07:14.919879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.325 [2024-11-26 21:07:14.919905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.325 qpair failed and we were unable to recover it.
00:26:24.325 [2024-11-26 21:07:14.920038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.325 [2024-11-26 21:07:14.920068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.325 qpair failed and we were unable to recover it.
00:26:24.326 [2024-11-26 21:07:14.920225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.326 [2024-11-26 21:07:14.920252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.326 qpair failed and we were unable to recover it.
00:26:24.326 [2024-11-26 21:07:14.920361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.326 [2024-11-26 21:07:14.920388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.326 qpair failed and we were unable to recover it.
00:26:24.326 [2024-11-26 21:07:14.920551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.326 [2024-11-26 21:07:14.920581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.326 qpair failed and we were unable to recover it.
00:26:24.326 [2024-11-26 21:07:14.920738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.326 [2024-11-26 21:07:14.920765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.326 qpair failed and we were unable to recover it.
00:26:24.326 [2024-11-26 21:07:14.920902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.326 [2024-11-26 21:07:14.920928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.326 qpair failed and we were unable to recover it.
00:26:24.326 [2024-11-26 21:07:14.921053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.326 [2024-11-26 21:07:14.921082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.326 qpair failed and we were unable to recover it. 00:26:24.326 [2024-11-26 21:07:14.921220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.326 [2024-11-26 21:07:14.921246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.326 qpair failed and we were unable to recover it. 00:26:24.326 [2024-11-26 21:07:14.921381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.326 [2024-11-26 21:07:14.921408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.326 qpair failed and we were unable to recover it. 00:26:24.326 [2024-11-26 21:07:14.921533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.326 [2024-11-26 21:07:14.921563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.326 qpair failed and we were unable to recover it. 00:26:24.326 [2024-11-26 21:07:14.921718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.326 [2024-11-26 21:07:14.921746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.326 qpair failed and we were unable to recover it. 
00:26:24.326 [2024-11-26 21:07:14.921880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.326 [2024-11-26 21:07:14.921924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.326 qpair failed and we were unable to recover it. 00:26:24.326 [2024-11-26 21:07:14.922085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.326 [2024-11-26 21:07:14.922112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.326 qpair failed and we were unable to recover it. 00:26:24.326 [2024-11-26 21:07:14.922250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.326 [2024-11-26 21:07:14.922277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.326 qpair failed and we were unable to recover it. 00:26:24.326 [2024-11-26 21:07:14.922387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.326 [2024-11-26 21:07:14.922413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.326 qpair failed and we were unable to recover it. 00:26:24.326 [2024-11-26 21:07:14.922583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.326 [2024-11-26 21:07:14.922612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.326 qpair failed and we were unable to recover it. 
00:26:24.326 [2024-11-26 21:07:14.922743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.326 [2024-11-26 21:07:14.922770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.326 qpair failed and we were unable to recover it. 00:26:24.326 [2024-11-26 21:07:14.922901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.326 [2024-11-26 21:07:14.922928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.326 qpair failed and we were unable to recover it. 00:26:24.326 [2024-11-26 21:07:14.923072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.326 [2024-11-26 21:07:14.923102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.326 qpair failed and we were unable to recover it. 00:26:24.326 [2024-11-26 21:07:14.923235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.326 [2024-11-26 21:07:14.923263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.326 qpair failed and we were unable to recover it. 00:26:24.326 [2024-11-26 21:07:14.923365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.326 [2024-11-26 21:07:14.923396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.326 qpair failed and we were unable to recover it. 
00:26:24.326 [2024-11-26 21:07:14.923555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.326 [2024-11-26 21:07:14.923585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.326 qpair failed and we were unable to recover it. 00:26:24.326 [2024-11-26 21:07:14.923707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.326 [2024-11-26 21:07:14.923734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.326 qpair failed and we were unable to recover it. 00:26:24.326 [2024-11-26 21:07:14.923874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.326 [2024-11-26 21:07:14.923901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.326 qpair failed and we were unable to recover it. 00:26:24.326 [2024-11-26 21:07:14.924044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.326 [2024-11-26 21:07:14.924073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.326 qpair failed and we were unable to recover it. 00:26:24.326 [2024-11-26 21:07:14.924227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.326 [2024-11-26 21:07:14.924253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.326 qpair failed and we were unable to recover it. 
00:26:24.326 [2024-11-26 21:07:14.924363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.326 [2024-11-26 21:07:14.924389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.326 qpair failed and we were unable to recover it. 00:26:24.326 [2024-11-26 21:07:14.924511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.326 [2024-11-26 21:07:14.924541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.326 qpair failed and we were unable to recover it. 00:26:24.326 [2024-11-26 21:07:14.924726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.326 [2024-11-26 21:07:14.924754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.326 qpair failed and we were unable to recover it. 00:26:24.326 [2024-11-26 21:07:14.924864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.326 [2024-11-26 21:07:14.924890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.326 qpair failed and we were unable to recover it. 00:26:24.326 [2024-11-26 21:07:14.925046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.326 [2024-11-26 21:07:14.925076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.326 qpair failed and we were unable to recover it. 
00:26:24.326 [2024-11-26 21:07:14.925229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.326 [2024-11-26 21:07:14.925255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.326 qpair failed and we were unable to recover it. 00:26:24.326 [2024-11-26 21:07:14.925362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.326 [2024-11-26 21:07:14.925389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.326 qpair failed and we were unable to recover it. 00:26:24.326 [2024-11-26 21:07:14.925592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.326 [2024-11-26 21:07:14.925618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.326 qpair failed and we were unable to recover it. 00:26:24.326 [2024-11-26 21:07:14.925759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.326 [2024-11-26 21:07:14.925786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.326 qpair failed and we were unable to recover it. 00:26:24.326 [2024-11-26 21:07:14.925929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.326 [2024-11-26 21:07:14.925955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.326 qpair failed and we were unable to recover it. 
00:26:24.326 [2024-11-26 21:07:14.926117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.326 [2024-11-26 21:07:14.926146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.326 qpair failed and we were unable to recover it. 00:26:24.326 [2024-11-26 21:07:14.926301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.326 [2024-11-26 21:07:14.926327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.326 qpair failed and we were unable to recover it. 00:26:24.326 [2024-11-26 21:07:14.926470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.326 [2024-11-26 21:07:14.926514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.326 qpair failed and we were unable to recover it. 00:26:24.326 [2024-11-26 21:07:14.926692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.327 [2024-11-26 21:07:14.926722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.327 qpair failed and we were unable to recover it. 00:26:24.327 [2024-11-26 21:07:14.926852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.327 [2024-11-26 21:07:14.926878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.327 qpair failed and we were unable to recover it. 
00:26:24.327 [2024-11-26 21:07:14.927024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.327 [2024-11-26 21:07:14.927066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.327 qpair failed and we were unable to recover it. 00:26:24.327 [2024-11-26 21:07:14.927190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.327 [2024-11-26 21:07:14.927220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.327 qpair failed and we were unable to recover it. 00:26:24.327 [2024-11-26 21:07:14.927349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.327 [2024-11-26 21:07:14.927375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.327 qpair failed and we were unable to recover it. 00:26:24.327 [2024-11-26 21:07:14.927486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.327 [2024-11-26 21:07:14.927513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.327 qpair failed and we were unable to recover it. 00:26:24.327 [2024-11-26 21:07:14.927707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.327 [2024-11-26 21:07:14.927737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.327 qpair failed and we were unable to recover it. 
00:26:24.327 [2024-11-26 21:07:14.927889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.327 [2024-11-26 21:07:14.927915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.327 qpair failed and we were unable to recover it. 00:26:24.327 [2024-11-26 21:07:14.928026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.327 [2024-11-26 21:07:14.928057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.327 qpair failed and we were unable to recover it. 00:26:24.327 [2024-11-26 21:07:14.928163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.327 [2024-11-26 21:07:14.928190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.327 qpair failed and we were unable to recover it. 00:26:24.327 [2024-11-26 21:07:14.928317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.327 [2024-11-26 21:07:14.928343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.327 qpair failed and we were unable to recover it. 00:26:24.327 [2024-11-26 21:07:14.928484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.327 [2024-11-26 21:07:14.928531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.327 qpair failed and we were unable to recover it. 
00:26:24.327 [2024-11-26 21:07:14.928712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.327 [2024-11-26 21:07:14.928739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.327 qpair failed and we were unable to recover it. 00:26:24.327 [2024-11-26 21:07:14.928847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.327 [2024-11-26 21:07:14.928874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.327 qpair failed and we were unable to recover it. 00:26:24.327 [2024-11-26 21:07:14.929034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.327 [2024-11-26 21:07:14.929079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.327 qpair failed and we were unable to recover it. 00:26:24.327 [2024-11-26 21:07:14.929220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.327 [2024-11-26 21:07:14.929249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.327 qpair failed and we were unable to recover it. 00:26:24.327 [2024-11-26 21:07:14.929404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.327 [2024-11-26 21:07:14.929431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.327 qpair failed and we were unable to recover it. 
00:26:24.327 [2024-11-26 21:07:14.929538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.327 [2024-11-26 21:07:14.929582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.327 qpair failed and we were unable to recover it. 00:26:24.327 [2024-11-26 21:07:14.929767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.327 [2024-11-26 21:07:14.929794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.327 qpair failed and we were unable to recover it. 00:26:24.327 [2024-11-26 21:07:14.929928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.327 [2024-11-26 21:07:14.929955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.327 qpair failed and we were unable to recover it. 00:26:24.327 [2024-11-26 21:07:14.930058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.327 [2024-11-26 21:07:14.930085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.327 qpair failed and we were unable to recover it. 00:26:24.327 [2024-11-26 21:07:14.930230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.327 [2024-11-26 21:07:14.930259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.327 qpair failed and we were unable to recover it. 
00:26:24.327 [2024-11-26 21:07:14.930394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.327 [2024-11-26 21:07:14.930421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.327 qpair failed and we were unable to recover it. 00:26:24.327 [2024-11-26 21:07:14.930552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.327 [2024-11-26 21:07:14.930579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.327 qpair failed and we were unable to recover it. 00:26:24.327 [2024-11-26 21:07:14.930773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.327 [2024-11-26 21:07:14.930800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.327 qpair failed and we were unable to recover it. 00:26:24.327 [2024-11-26 21:07:14.930929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.327 [2024-11-26 21:07:14.930955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.327 qpair failed and we were unable to recover it. 00:26:24.327 [2024-11-26 21:07:14.931081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.327 [2024-11-26 21:07:14.931126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.327 qpair failed and we were unable to recover it. 
00:26:24.327 [2024-11-26 21:07:14.931255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.327 [2024-11-26 21:07:14.931282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.327 qpair failed and we were unable to recover it. 00:26:24.327 [2024-11-26 21:07:14.931426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.327 [2024-11-26 21:07:14.931453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.327 qpair failed and we were unable to recover it. 00:26:24.327 [2024-11-26 21:07:14.931586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.327 [2024-11-26 21:07:14.931631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.327 qpair failed and we were unable to recover it. 00:26:24.327 [2024-11-26 21:07:14.931812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.327 [2024-11-26 21:07:14.931852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.327 qpair failed and we were unable to recover it. 00:26:24.327 [2024-11-26 21:07:14.931976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.327 [2024-11-26 21:07:14.932006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.327 qpair failed and we were unable to recover it. 
00:26:24.327 [2024-11-26 21:07:14.932115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.328 [2024-11-26 21:07:14.932143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.328 qpair failed and we were unable to recover it. 00:26:24.328 [2024-11-26 21:07:14.932286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.328 [2024-11-26 21:07:14.932315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.328 qpair failed and we were unable to recover it. 00:26:24.328 [2024-11-26 21:07:14.932467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.328 [2024-11-26 21:07:14.932494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.328 qpair failed and we were unable to recover it. 00:26:24.328 [2024-11-26 21:07:14.932675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.328 [2024-11-26 21:07:14.932721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.328 qpair failed and we were unable to recover it. 00:26:24.328 [2024-11-26 21:07:14.932891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.328 [2024-11-26 21:07:14.932919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.328 qpair failed and we were unable to recover it. 
00:26:24.328 [2024-11-26 21:07:14.933030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.328 [2024-11-26 21:07:14.933057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.328 qpair failed and we were unable to recover it. 00:26:24.328 [2024-11-26 21:07:14.933180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.328 [2024-11-26 21:07:14.933207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.328 qpair failed and we were unable to recover it. 00:26:24.328 [2024-11-26 21:07:14.933373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.328 [2024-11-26 21:07:14.933400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.328 qpair failed and we were unable to recover it. 00:26:24.328 [2024-11-26 21:07:14.933561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.328 [2024-11-26 21:07:14.933588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.328 qpair failed and we were unable to recover it. 00:26:24.328 [2024-11-26 21:07:14.933699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.328 [2024-11-26 21:07:14.933726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.328 qpair failed and we were unable to recover it. 
00:26:24.328 [2024-11-26 21:07:14.933832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.328 [2024-11-26 21:07:14.933859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.328 qpair failed and we were unable to recover it. 
[... the three-line pattern above (posix_sock_create: connect() failed, errno = 111 (ECONNREFUSED); nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it.") repeats continuously from 21:07:14.933832 through 21:07:14.952674, alternating between tqpair=0x7feef8000b90 and tqpair=0x637fa0, all against addr=10.0.0.2, port=4420 ...]
00:26:24.331 [2024-11-26 21:07:14.952842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.331 [2024-11-26 21:07:14.952873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.331 qpair failed and we were unable to recover it. 00:26:24.331 [2024-11-26 21:07:14.952981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.331 [2024-11-26 21:07:14.953007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.331 qpair failed and we were unable to recover it. 00:26:24.331 [2024-11-26 21:07:14.953166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.331 [2024-11-26 21:07:14.953196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.331 qpair failed and we were unable to recover it. 00:26:24.331 [2024-11-26 21:07:14.953351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.331 [2024-11-26 21:07:14.953378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.331 qpair failed and we were unable to recover it. 00:26:24.331 [2024-11-26 21:07:14.953513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.331 [2024-11-26 21:07:14.953539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.331 qpair failed and we were unable to recover it. 
00:26:24.331 [2024-11-26 21:07:14.953700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.331 [2024-11-26 21:07:14.953745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.331 qpair failed and we were unable to recover it. 00:26:24.331 [2024-11-26 21:07:14.953852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.331 [2024-11-26 21:07:14.953879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.331 qpair failed and we were unable to recover it. 00:26:24.331 [2024-11-26 21:07:14.953992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.331 [2024-11-26 21:07:14.954018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.331 qpair failed and we were unable to recover it. 00:26:24.331 [2024-11-26 21:07:14.954168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.331 [2024-11-26 21:07:14.954197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.331 qpair failed and we were unable to recover it. 00:26:24.331 [2024-11-26 21:07:14.954348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.331 [2024-11-26 21:07:14.954374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.331 qpair failed and we were unable to recover it. 
00:26:24.331 [2024-11-26 21:07:14.954488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.331 [2024-11-26 21:07:14.954515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.331 qpair failed and we were unable to recover it. 00:26:24.331 [2024-11-26 21:07:14.954680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.331 [2024-11-26 21:07:14.954718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.331 qpair failed and we were unable to recover it. 00:26:24.331 [2024-11-26 21:07:14.954849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.331 [2024-11-26 21:07:14.954876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.331 qpair failed and we were unable to recover it. 00:26:24.331 [2024-11-26 21:07:14.954985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.331 [2024-11-26 21:07:14.955011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.331 qpair failed and we were unable to recover it. 00:26:24.331 [2024-11-26 21:07:14.955176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.331 [2024-11-26 21:07:14.955205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.331 qpair failed and we were unable to recover it. 
00:26:24.331 [2024-11-26 21:07:14.955343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.331 [2024-11-26 21:07:14.955370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.331 qpair failed and we were unable to recover it. 00:26:24.331 [2024-11-26 21:07:14.955511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.331 [2024-11-26 21:07:14.955538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.331 qpair failed and we were unable to recover it. 00:26:24.331 [2024-11-26 21:07:14.955714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.331 [2024-11-26 21:07:14.955759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.331 qpair failed and we were unable to recover it. 00:26:24.331 [2024-11-26 21:07:14.955890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.331 [2024-11-26 21:07:14.955916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.331 qpair failed and we were unable to recover it. 00:26:24.331 [2024-11-26 21:07:14.956027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.331 [2024-11-26 21:07:14.956054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.331 qpair failed and we were unable to recover it. 
00:26:24.331 [2024-11-26 21:07:14.956191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.331 [2024-11-26 21:07:14.956221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.331 qpair failed and we were unable to recover it. 00:26:24.331 [2024-11-26 21:07:14.956372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.331 [2024-11-26 21:07:14.956398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.331 qpair failed and we were unable to recover it. 00:26:24.331 [2024-11-26 21:07:14.956526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.331 [2024-11-26 21:07:14.956552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.331 qpair failed and we were unable to recover it. 00:26:24.331 [2024-11-26 21:07:14.956717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.331 [2024-11-26 21:07:14.956762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.331 qpair failed and we were unable to recover it. 00:26:24.331 [2024-11-26 21:07:14.956921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.331 [2024-11-26 21:07:14.956948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.331 qpair failed and we were unable to recover it. 
00:26:24.332 [2024-11-26 21:07:14.957054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.332 [2024-11-26 21:07:14.957098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.332 qpair failed and we were unable to recover it. 00:26:24.332 [2024-11-26 21:07:14.957253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.332 [2024-11-26 21:07:14.957283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.332 qpair failed and we were unable to recover it. 00:26:24.332 [2024-11-26 21:07:14.957465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.332 [2024-11-26 21:07:14.957495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.332 qpair failed and we were unable to recover it. 00:26:24.332 [2024-11-26 21:07:14.957605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.332 [2024-11-26 21:07:14.957632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.332 qpair failed and we were unable to recover it. 00:26:24.332 [2024-11-26 21:07:14.957769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.332 [2024-11-26 21:07:14.957796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.332 qpair failed and we were unable to recover it. 
00:26:24.332 [2024-11-26 21:07:14.957935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.332 [2024-11-26 21:07:14.957962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.332 qpair failed and we were unable to recover it. 00:26:24.332 [2024-11-26 21:07:14.958074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.332 [2024-11-26 21:07:14.958100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.332 qpair failed and we were unable to recover it. 00:26:24.332 [2024-11-26 21:07:14.958241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.332 [2024-11-26 21:07:14.958270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.332 qpair failed and we were unable to recover it. 00:26:24.332 [2024-11-26 21:07:14.958392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.332 [2024-11-26 21:07:14.958419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.332 qpair failed and we were unable to recover it. 00:26:24.332 [2024-11-26 21:07:14.958531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.332 [2024-11-26 21:07:14.958558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.332 qpair failed and we were unable to recover it. 
00:26:24.332 [2024-11-26 21:07:14.958726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.332 [2024-11-26 21:07:14.958753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.332 qpair failed and we were unable to recover it. 00:26:24.332 [2024-11-26 21:07:14.958862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.332 [2024-11-26 21:07:14.958888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.332 qpair failed and we were unable to recover it. 00:26:24.332 [2024-11-26 21:07:14.959020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.332 [2024-11-26 21:07:14.959047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.332 qpair failed and we were unable to recover it. 00:26:24.332 [2024-11-26 21:07:14.959160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.332 [2024-11-26 21:07:14.959187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.332 qpair failed and we were unable to recover it. 00:26:24.332 [2024-11-26 21:07:14.959294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.332 [2024-11-26 21:07:14.959321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.332 qpair failed and we were unable to recover it. 
00:26:24.332 [2024-11-26 21:07:14.959456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.332 [2024-11-26 21:07:14.959482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.332 qpair failed and we were unable to recover it. 00:26:24.332 [2024-11-26 21:07:14.959662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.332 [2024-11-26 21:07:14.959713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.332 qpair failed and we were unable to recover it. 00:26:24.332 [2024-11-26 21:07:14.959874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.332 [2024-11-26 21:07:14.959903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.332 qpair failed and we were unable to recover it. 00:26:24.332 [2024-11-26 21:07:14.960065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.332 [2024-11-26 21:07:14.960093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.332 qpair failed and we were unable to recover it. 00:26:24.332 [2024-11-26 21:07:14.960203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.332 [2024-11-26 21:07:14.960230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.332 qpair failed and we were unable to recover it. 
00:26:24.332 [2024-11-26 21:07:14.960351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.332 [2024-11-26 21:07:14.960390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.332 qpair failed and we were unable to recover it. 00:26:24.332 [2024-11-26 21:07:14.960555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.332 [2024-11-26 21:07:14.960602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.332 qpair failed and we were unable to recover it. 00:26:24.332 [2024-11-26 21:07:14.960720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.332 [2024-11-26 21:07:14.960748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.332 qpair failed and we were unable to recover it. 00:26:24.332 [2024-11-26 21:07:14.960883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.332 [2024-11-26 21:07:14.960909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.332 qpair failed and we were unable to recover it. 00:26:24.332 [2024-11-26 21:07:14.961016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.332 [2024-11-26 21:07:14.961042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.332 qpair failed and we were unable to recover it. 
00:26:24.332 [2024-11-26 21:07:14.961154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.332 [2024-11-26 21:07:14.961180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.332 qpair failed and we were unable to recover it. 00:26:24.332 [2024-11-26 21:07:14.961318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.332 [2024-11-26 21:07:14.961344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.332 qpair failed and we were unable to recover it. 00:26:24.332 [2024-11-26 21:07:14.961459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.332 [2024-11-26 21:07:14.961485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.332 qpair failed and we were unable to recover it. 00:26:24.332 [2024-11-26 21:07:14.961624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.332 [2024-11-26 21:07:14.961650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.332 qpair failed and we were unable to recover it. 00:26:24.332 [2024-11-26 21:07:14.961798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.332 [2024-11-26 21:07:14.961849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.332 qpair failed and we were unable to recover it. 
00:26:24.332 [2024-11-26 21:07:14.961983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.332 [2024-11-26 21:07:14.962027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.332 qpair failed and we were unable to recover it. 00:26:24.332 [2024-11-26 21:07:14.962174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.332 [2024-11-26 21:07:14.962218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.332 qpair failed and we were unable to recover it. 00:26:24.332 [2024-11-26 21:07:14.962381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.332 [2024-11-26 21:07:14.962429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.332 qpair failed and we were unable to recover it. 00:26:24.332 [2024-11-26 21:07:14.962570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.332 [2024-11-26 21:07:14.962597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.332 qpair failed and we were unable to recover it. 00:26:24.332 [2024-11-26 21:07:14.962732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.332 [2024-11-26 21:07:14.962759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.332 qpair failed and we were unable to recover it. 
00:26:24.332 [2024-11-26 21:07:14.962896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.332 [2024-11-26 21:07:14.962922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.332 qpair failed and we were unable to recover it. 00:26:24.332 [2024-11-26 21:07:14.963053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.332 [2024-11-26 21:07:14.963080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.332 qpair failed and we were unable to recover it. 00:26:24.332 [2024-11-26 21:07:14.963183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.332 [2024-11-26 21:07:14.963209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.333 qpair failed and we were unable to recover it. 00:26:24.333 [2024-11-26 21:07:14.963346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.333 [2024-11-26 21:07:14.963372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.333 qpair failed and we were unable to recover it. 00:26:24.333 [2024-11-26 21:07:14.963485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.333 [2024-11-26 21:07:14.963511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.333 qpair failed and we were unable to recover it. 
00:26:24.333 [2024-11-26 21:07:14.963648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.333 [2024-11-26 21:07:14.963674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.333 qpair failed and we were unable to recover it. 00:26:24.333 [2024-11-26 21:07:14.963788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.333 [2024-11-26 21:07:14.963815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.333 qpair failed and we were unable to recover it. 00:26:24.333 [2024-11-26 21:07:14.963940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.333 [2024-11-26 21:07:14.963966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.333 qpair failed and we were unable to recover it. 00:26:24.333 [2024-11-26 21:07:14.964104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.333 [2024-11-26 21:07:14.964131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.333 qpair failed and we were unable to recover it. 00:26:24.333 [2024-11-26 21:07:14.964245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.333 [2024-11-26 21:07:14.964273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.333 qpair failed and we were unable to recover it. 
00:26:24.333 [2024-11-26 21:07:14.964432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.333 [2024-11-26 21:07:14.964458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.333 qpair failed and we were unable to recover it. 00:26:24.333 [2024-11-26 21:07:14.964566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.333 [2024-11-26 21:07:14.964593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.333 qpair failed and we were unable to recover it. 00:26:24.333 [2024-11-26 21:07:14.964694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.333 [2024-11-26 21:07:14.964722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.333 qpair failed and we were unable to recover it. 00:26:24.333 [2024-11-26 21:07:14.964861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.333 [2024-11-26 21:07:14.964889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.333 qpair failed and we were unable to recover it. 00:26:24.333 [2024-11-26 21:07:14.965023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.333 [2024-11-26 21:07:14.965068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.333 qpair failed and we were unable to recover it. 
00:26:24.333 [2024-11-26 21:07:14.965281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.333 [2024-11-26 21:07:14.965330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.333 qpair failed and we were unable to recover it. 00:26:24.333 [2024-11-26 21:07:14.965471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.333 [2024-11-26 21:07:14.965497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.333 qpair failed and we were unable to recover it. 00:26:24.333 [2024-11-26 21:07:14.965632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.333 [2024-11-26 21:07:14.965658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.333 qpair failed and we were unable to recover it. 00:26:24.333 [2024-11-26 21:07:14.965778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.333 [2024-11-26 21:07:14.965818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.333 qpair failed and we were unable to recover it. 00:26:24.333 [2024-11-26 21:07:14.965935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.333 [2024-11-26 21:07:14.965964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.333 qpair failed and we were unable to recover it. 
00:26:24.333 [... the same three-line failure (posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error / "qpair failed and we were unable to recover it.") repeats continuously from [2024-11-26 21:07:14.966107] through [2024-11-26 21:07:14.985200], alternating between tqpair=0x7feef0000b90, tqpair=0x7feef8000b90, and tqpair=0x637fa0, all with addr=10.0.0.2, port=4420 ...]
00:26:24.336 [2024-11-26 21:07:14.985344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.336 [2024-11-26 21:07:14.985374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.336 qpair failed and we were unable to recover it. 00:26:24.336 [2024-11-26 21:07:14.985494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.336 [2024-11-26 21:07:14.985521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.336 qpair failed and we were unable to recover it. 00:26:24.336 [2024-11-26 21:07:14.985625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.336 [2024-11-26 21:07:14.985651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.336 qpair failed and we were unable to recover it. 00:26:24.336 [2024-11-26 21:07:14.985790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.336 [2024-11-26 21:07:14.985817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.336 qpair failed and we were unable to recover it. 00:26:24.336 [2024-11-26 21:07:14.985952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.336 [2024-11-26 21:07:14.985980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.336 qpair failed and we were unable to recover it. 
00:26:24.336 [2024-11-26 21:07:14.986113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.336 [2024-11-26 21:07:14.986142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.336 qpair failed and we were unable to recover it. 00:26:24.336 [2024-11-26 21:07:14.986356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.336 [2024-11-26 21:07:14.986385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.336 qpair failed and we were unable to recover it. 00:26:24.336 [2024-11-26 21:07:14.986530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.336 [2024-11-26 21:07:14.986560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.336 qpair failed and we were unable to recover it. 00:26:24.336 [2024-11-26 21:07:14.986678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.336 [2024-11-26 21:07:14.986732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.336 qpair failed and we were unable to recover it. 00:26:24.336 [2024-11-26 21:07:14.986838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.336 [2024-11-26 21:07:14.986882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.336 qpair failed and we were unable to recover it. 
00:26:24.336 [2024-11-26 21:07:14.987003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.336 [2024-11-26 21:07:14.987033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.336 qpair failed and we were unable to recover it. 00:26:24.336 [2024-11-26 21:07:14.987184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.336 [2024-11-26 21:07:14.987213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.336 qpair failed and we were unable to recover it. 00:26:24.336 [2024-11-26 21:07:14.987324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.336 [2024-11-26 21:07:14.987353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.336 qpair failed and we were unable to recover it. 00:26:24.336 [2024-11-26 21:07:14.987474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.336 [2024-11-26 21:07:14.987504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.336 qpair failed and we were unable to recover it. 00:26:24.336 [2024-11-26 21:07:14.987650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.336 [2024-11-26 21:07:14.987676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.336 qpair failed and we were unable to recover it. 
00:26:24.336 [2024-11-26 21:07:14.987810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.336 [2024-11-26 21:07:14.987836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.336 qpair failed and we were unable to recover it. 00:26:24.336 [2024-11-26 21:07:14.987987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.336 [2024-11-26 21:07:14.988016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.336 qpair failed and we were unable to recover it. 00:26:24.336 [2024-11-26 21:07:14.988143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.336 [2024-11-26 21:07:14.988186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.336 qpair failed and we were unable to recover it. 00:26:24.336 [2024-11-26 21:07:14.988318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.336 [2024-11-26 21:07:14.988362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.336 qpair failed and we were unable to recover it. 00:26:24.336 [2024-11-26 21:07:14.988517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.336 [2024-11-26 21:07:14.988546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.336 qpair failed and we were unable to recover it. 
00:26:24.336 [2024-11-26 21:07:14.988711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.336 [2024-11-26 21:07:14.988738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.336 qpair failed and we were unable to recover it. 00:26:24.336 [2024-11-26 21:07:14.988849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.336 [2024-11-26 21:07:14.988876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.337 qpair failed and we were unable to recover it. 00:26:24.337 [2024-11-26 21:07:14.988992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.337 [2024-11-26 21:07:14.989022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.337 qpair failed and we were unable to recover it. 00:26:24.337 [2024-11-26 21:07:14.989222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.337 [2024-11-26 21:07:14.989272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.337 qpair failed and we were unable to recover it. 00:26:24.337 [2024-11-26 21:07:14.989393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.337 [2024-11-26 21:07:14.989422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.337 qpair failed and we were unable to recover it. 
00:26:24.337 [2024-11-26 21:07:14.989577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.337 [2024-11-26 21:07:14.989606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.337 qpair failed and we were unable to recover it. 00:26:24.337 [2024-11-26 21:07:14.989753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.337 [2024-11-26 21:07:14.989784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.337 qpair failed and we were unable to recover it. 00:26:24.337 [2024-11-26 21:07:14.989919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.337 [2024-11-26 21:07:14.989946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.337 qpair failed and we were unable to recover it. 00:26:24.337 [2024-11-26 21:07:14.990080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.337 [2024-11-26 21:07:14.990110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.337 qpair failed and we were unable to recover it. 00:26:24.337 [2024-11-26 21:07:14.990251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.337 [2024-11-26 21:07:14.990280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.337 qpair failed and we were unable to recover it. 
00:26:24.337 [2024-11-26 21:07:14.990442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.337 [2024-11-26 21:07:14.990484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.337 qpair failed and we were unable to recover it. 00:26:24.337 [2024-11-26 21:07:14.990608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.337 [2024-11-26 21:07:14.990637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.337 qpair failed and we were unable to recover it. 00:26:24.337 [2024-11-26 21:07:14.990778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.337 [2024-11-26 21:07:14.990805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.337 qpair failed and we were unable to recover it. 00:26:24.337 [2024-11-26 21:07:14.990905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.337 [2024-11-26 21:07:14.990931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.337 qpair failed and we were unable to recover it. 00:26:24.337 [2024-11-26 21:07:14.991053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.337 [2024-11-26 21:07:14.991081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.337 qpair failed and we were unable to recover it. 
00:26:24.337 [2024-11-26 21:07:14.991254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.337 [2024-11-26 21:07:14.991283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.337 qpair failed and we were unable to recover it. 00:26:24.337 [2024-11-26 21:07:14.991416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.337 [2024-11-26 21:07:14.991445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.337 qpair failed and we were unable to recover it. 00:26:24.337 [2024-11-26 21:07:14.991595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.337 [2024-11-26 21:07:14.991624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.337 qpair failed and we were unable to recover it. 00:26:24.337 [2024-11-26 21:07:14.991748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.337 [2024-11-26 21:07:14.991776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.337 qpair failed and we were unable to recover it. 00:26:24.337 [2024-11-26 21:07:14.991915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.337 [2024-11-26 21:07:14.991942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.337 qpair failed and we were unable to recover it. 
00:26:24.337 [2024-11-26 21:07:14.992091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.337 [2024-11-26 21:07:14.992121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.337 qpair failed and we were unable to recover it. 00:26:24.337 [2024-11-26 21:07:14.992256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.337 [2024-11-26 21:07:14.992300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.337 qpair failed and we were unable to recover it. 00:26:24.337 [2024-11-26 21:07:14.992440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.337 [2024-11-26 21:07:14.992470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.337 qpair failed and we were unable to recover it. 00:26:24.337 [2024-11-26 21:07:14.992579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.337 [2024-11-26 21:07:14.992608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.337 qpair failed and we were unable to recover it. 00:26:24.337 [2024-11-26 21:07:14.992739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.337 [2024-11-26 21:07:14.992766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.337 qpair failed and we were unable to recover it. 
00:26:24.337 [2024-11-26 21:07:14.992895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.337 [2024-11-26 21:07:14.992921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.337 qpair failed and we were unable to recover it. 00:26:24.337 [2024-11-26 21:07:14.993059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.337 [2024-11-26 21:07:14.993088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.337 qpair failed and we were unable to recover it. 00:26:24.337 [2024-11-26 21:07:14.993236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.337 [2024-11-26 21:07:14.993267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.337 qpair failed and we were unable to recover it. 00:26:24.337 [2024-11-26 21:07:14.993411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.337 [2024-11-26 21:07:14.993440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.337 qpair failed and we were unable to recover it. 00:26:24.337 [2024-11-26 21:07:14.993550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.337 [2024-11-26 21:07:14.993580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.337 qpair failed and we were unable to recover it. 
00:26:24.337 [2024-11-26 21:07:14.993738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.337 [2024-11-26 21:07:14.993765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.337 qpair failed and we were unable to recover it. 00:26:24.337 [2024-11-26 21:07:14.993881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.337 [2024-11-26 21:07:14.993907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.337 qpair failed and we were unable to recover it. 00:26:24.337 [2024-11-26 21:07:14.994045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.337 [2024-11-26 21:07:14.994072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.337 qpair failed and we were unable to recover it. 00:26:24.337 [2024-11-26 21:07:14.994217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.337 [2024-11-26 21:07:14.994251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.337 qpair failed and we were unable to recover it. 00:26:24.337 [2024-11-26 21:07:14.994367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.337 [2024-11-26 21:07:14.994397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.337 qpair failed and we were unable to recover it. 
00:26:24.337 [2024-11-26 21:07:14.994570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.337 [2024-11-26 21:07:14.994600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.337 qpair failed and we were unable to recover it. 00:26:24.337 [2024-11-26 21:07:14.994736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.337 [2024-11-26 21:07:14.994777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.337 qpair failed and we were unable to recover it. 00:26:24.337 [2024-11-26 21:07:14.994920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.337 [2024-11-26 21:07:14.994947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.337 qpair failed and we were unable to recover it. 00:26:24.337 [2024-11-26 21:07:14.995106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.337 [2024-11-26 21:07:14.995150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.337 qpair failed and we were unable to recover it. 00:26:24.337 [2024-11-26 21:07:14.995304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.337 [2024-11-26 21:07:14.995349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.337 qpair failed and we were unable to recover it. 
00:26:24.337 [2024-11-26 21:07:14.995520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.338 [2024-11-26 21:07:14.995565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.338 qpair failed and we were unable to recover it. 00:26:24.338 [2024-11-26 21:07:14.995677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.338 [2024-11-26 21:07:14.995713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.338 qpair failed and we were unable to recover it. 00:26:24.338 [2024-11-26 21:07:14.995849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.338 [2024-11-26 21:07:14.995896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.338 qpair failed and we were unable to recover it. 00:26:24.338 [2024-11-26 21:07:14.996066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.338 [2024-11-26 21:07:14.996093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.338 qpair failed and we were unable to recover it. 00:26:24.338 [2024-11-26 21:07:14.996213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.338 [2024-11-26 21:07:14.996258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.338 qpair failed and we were unable to recover it. 
00:26:24.338 [2024-11-26 21:07:14.996372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.338 [2024-11-26 21:07:14.996399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.338 qpair failed and we were unable to recover it. 00:26:24.338 [2024-11-26 21:07:14.996519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.338 [2024-11-26 21:07:14.996547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.338 qpair failed and we were unable to recover it. 00:26:24.338 [2024-11-26 21:07:14.996733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.338 [2024-11-26 21:07:14.996764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.338 qpair failed and we were unable to recover it. 00:26:24.338 [2024-11-26 21:07:14.996967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.338 [2024-11-26 21:07:14.997014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.338 qpair failed and we were unable to recover it. 00:26:24.338 [2024-11-26 21:07:14.997165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.338 [2024-11-26 21:07:14.997195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.338 qpair failed and we were unable to recover it. 
00:26:24.338 [2024-11-26 21:07:14.997312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.338 [2024-11-26 21:07:14.997339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.338 qpair failed and we were unable to recover it. 00:26:24.338 [2024-11-26 21:07:14.997476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.338 [2024-11-26 21:07:14.997505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.338 qpair failed and we were unable to recover it. 00:26:24.338 [2024-11-26 21:07:14.997640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.338 [2024-11-26 21:07:14.997666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.338 qpair failed and we were unable to recover it. 00:26:24.338 [2024-11-26 21:07:14.997808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.338 [2024-11-26 21:07:14.997838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.338 qpair failed and we were unable to recover it. 00:26:24.338 [2024-11-26 21:07:14.997987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.338 [2024-11-26 21:07:14.998016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.338 qpair failed and we were unable to recover it. 
00:26:24.338 [2024-11-26 21:07:14.998134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.338 [2024-11-26 21:07:14.998165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.338 qpair failed and we were unable to recover it.
[log condensed: the three messages above (connect() failed with errno 111, i.e. ECONNREFUSED; sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeat roughly 114 more times, with source timestamps advancing from 21:07:14.998 through 21:07:15.017]
00:26:24.341 [2024-11-26 21:07:15.017697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.341 [2024-11-26 21:07:15.017726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.341 qpair failed and we were unable to recover it. 00:26:24.341 [2024-11-26 21:07:15.017903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.341 [2024-11-26 21:07:15.017929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.341 qpair failed and we were unable to recover it. 00:26:24.341 [2024-11-26 21:07:15.018077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.341 [2024-11-26 21:07:15.018106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.341 qpair failed and we were unable to recover it. 00:26:24.341 [2024-11-26 21:07:15.018286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.341 [2024-11-26 21:07:15.018315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.341 qpair failed and we were unable to recover it. 00:26:24.341 [2024-11-26 21:07:15.018493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.341 [2024-11-26 21:07:15.018519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.341 qpair failed and we were unable to recover it. 
00:26:24.341 [2024-11-26 21:07:15.018668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.341 [2024-11-26 21:07:15.018707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.341 qpair failed and we were unable to recover it. 00:26:24.341 [2024-11-26 21:07:15.018863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.341 [2024-11-26 21:07:15.018893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.341 qpair failed and we were unable to recover it. 00:26:24.341 [2024-11-26 21:07:15.019082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.341 [2024-11-26 21:07:15.019108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.341 qpair failed and we were unable to recover it. 00:26:24.341 [2024-11-26 21:07:15.019262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.341 [2024-11-26 21:07:15.019290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.341 qpair failed and we were unable to recover it. 00:26:24.341 [2024-11-26 21:07:15.019447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.341 [2024-11-26 21:07:15.019472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.341 qpair failed and we were unable to recover it. 
00:26:24.341 [2024-11-26 21:07:15.019602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.341 [2024-11-26 21:07:15.019629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.341 qpair failed and we were unable to recover it. 00:26:24.341 [2024-11-26 21:07:15.019762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.341 [2024-11-26 21:07:15.019805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.341 qpair failed and we were unable to recover it. 00:26:24.341 [2024-11-26 21:07:15.019966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.341 [2024-11-26 21:07:15.019992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.341 qpair failed and we were unable to recover it. 00:26:24.341 [2024-11-26 21:07:15.020125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.341 [2024-11-26 21:07:15.020152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.341 qpair failed and we were unable to recover it. 00:26:24.341 [2024-11-26 21:07:15.020311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.341 [2024-11-26 21:07:15.020354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.341 qpair failed and we were unable to recover it. 
00:26:24.341 [2024-11-26 21:07:15.020475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.341 [2024-11-26 21:07:15.020503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.341 qpair failed and we were unable to recover it. 00:26:24.341 [2024-11-26 21:07:15.020653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.341 [2024-11-26 21:07:15.020682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.341 qpair failed and we were unable to recover it. 00:26:24.341 [2024-11-26 21:07:15.020856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.341 [2024-11-26 21:07:15.020883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.341 qpair failed and we were unable to recover it. 00:26:24.341 [2024-11-26 21:07:15.021039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.341 [2024-11-26 21:07:15.021067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.341 qpair failed and we were unable to recover it. 00:26:24.341 [2024-11-26 21:07:15.021246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.341 [2024-11-26 21:07:15.021273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.342 qpair failed and we were unable to recover it. 
00:26:24.342 [2024-11-26 21:07:15.021450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.342 [2024-11-26 21:07:15.021479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.342 qpair failed and we were unable to recover it. 00:26:24.342 [2024-11-26 21:07:15.021588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.342 [2024-11-26 21:07:15.021616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.342 qpair failed and we were unable to recover it. 00:26:24.342 [2024-11-26 21:07:15.021740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.342 [2024-11-26 21:07:15.021766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.342 qpair failed and we were unable to recover it. 00:26:24.342 [2024-11-26 21:07:15.021881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.342 [2024-11-26 21:07:15.021908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.342 qpair failed and we were unable to recover it. 00:26:24.342 [2024-11-26 21:07:15.022043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.342 [2024-11-26 21:07:15.022069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.342 qpair failed and we were unable to recover it. 
00:26:24.342 [2024-11-26 21:07:15.022227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.342 [2024-11-26 21:07:15.022254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.342 qpair failed and we were unable to recover it. 00:26:24.342 [2024-11-26 21:07:15.022412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.342 [2024-11-26 21:07:15.022446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.342 qpair failed and we were unable to recover it. 00:26:24.342 [2024-11-26 21:07:15.022586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.342 [2024-11-26 21:07:15.022614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.342 qpair failed and we were unable to recover it. 00:26:24.342 [2024-11-26 21:07:15.022770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.342 [2024-11-26 21:07:15.022797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.342 qpair failed and we were unable to recover it. 00:26:24.342 [2024-11-26 21:07:15.022906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.342 [2024-11-26 21:07:15.022932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.342 qpair failed and we were unable to recover it. 
00:26:24.342 [2024-11-26 21:07:15.023035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.342 [2024-11-26 21:07:15.023061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.342 qpair failed and we were unable to recover it. 00:26:24.342 [2024-11-26 21:07:15.023221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.342 [2024-11-26 21:07:15.023247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.342 qpair failed and we were unable to recover it. 00:26:24.342 [2024-11-26 21:07:15.023398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.342 [2024-11-26 21:07:15.023426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.342 qpair failed and we were unable to recover it. 00:26:24.342 [2024-11-26 21:07:15.023563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.342 [2024-11-26 21:07:15.023592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.342 qpair failed and we were unable to recover it. 00:26:24.342 [2024-11-26 21:07:15.023736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.342 [2024-11-26 21:07:15.023762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.342 qpair failed and we were unable to recover it. 
00:26:24.342 [2024-11-26 21:07:15.023897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.342 [2024-11-26 21:07:15.023938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.342 qpair failed and we were unable to recover it. 00:26:24.342 [2024-11-26 21:07:15.024049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.342 [2024-11-26 21:07:15.024078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.342 qpair failed and we were unable to recover it. 00:26:24.342 [2024-11-26 21:07:15.024209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.342 [2024-11-26 21:07:15.024236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.342 qpair failed and we were unable to recover it. 00:26:24.342 [2024-11-26 21:07:15.024395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.342 [2024-11-26 21:07:15.024420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.342 qpair failed and we were unable to recover it. 00:26:24.342 [2024-11-26 21:07:15.024602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.342 [2024-11-26 21:07:15.024631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.342 qpair failed and we were unable to recover it. 
00:26:24.342 [2024-11-26 21:07:15.024798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.342 [2024-11-26 21:07:15.024825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.342 qpair failed and we were unable to recover it. 00:26:24.342 [2024-11-26 21:07:15.024958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.342 [2024-11-26 21:07:15.024984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.342 qpair failed and we were unable to recover it. 00:26:24.342 [2024-11-26 21:07:15.025140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.342 [2024-11-26 21:07:15.025170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.342 qpair failed and we were unable to recover it. 00:26:24.342 [2024-11-26 21:07:15.025346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.342 [2024-11-26 21:07:15.025372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.342 qpair failed and we were unable to recover it. 00:26:24.342 [2024-11-26 21:07:15.025482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.342 [2024-11-26 21:07:15.025507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.342 qpair failed and we were unable to recover it. 
00:26:24.342 [2024-11-26 21:07:15.025647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.342 [2024-11-26 21:07:15.025673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.342 qpair failed and we were unable to recover it. 00:26:24.342 [2024-11-26 21:07:15.025813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.342 [2024-11-26 21:07:15.025839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.342 qpair failed and we were unable to recover it. 00:26:24.342 [2024-11-26 21:07:15.025974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.342 [2024-11-26 21:07:15.026000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.342 qpair failed and we were unable to recover it. 00:26:24.342 [2024-11-26 21:07:15.026174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.342 [2024-11-26 21:07:15.026204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.342 qpair failed and we were unable to recover it. 00:26:24.342 [2024-11-26 21:07:15.026338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.342 [2024-11-26 21:07:15.026364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.342 qpair failed and we were unable to recover it. 
00:26:24.342 [2024-11-26 21:07:15.026501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.342 [2024-11-26 21:07:15.026528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.342 qpair failed and we were unable to recover it. 00:26:24.342 [2024-11-26 21:07:15.026691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.342 [2024-11-26 21:07:15.026735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.342 qpair failed and we were unable to recover it. 00:26:24.342 [2024-11-26 21:07:15.026905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.342 [2024-11-26 21:07:15.026932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.342 qpair failed and we were unable to recover it. 00:26:24.342 [2024-11-26 21:07:15.027108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.342 [2024-11-26 21:07:15.027137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.342 qpair failed and we were unable to recover it. 00:26:24.342 [2024-11-26 21:07:15.027272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.342 [2024-11-26 21:07:15.027301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.342 qpair failed and we were unable to recover it. 
00:26:24.342 [2024-11-26 21:07:15.027481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.342 [2024-11-26 21:07:15.027507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.342 qpair failed and we were unable to recover it. 00:26:24.342 [2024-11-26 21:07:15.027664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.342 [2024-11-26 21:07:15.027698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.342 qpair failed and we were unable to recover it. 00:26:24.342 [2024-11-26 21:07:15.027884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.342 [2024-11-26 21:07:15.027912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.342 qpair failed and we were unable to recover it. 00:26:24.343 [2024-11-26 21:07:15.028039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.343 [2024-11-26 21:07:15.028066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.343 qpair failed and we were unable to recover it. 00:26:24.343 [2024-11-26 21:07:15.028203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.343 [2024-11-26 21:07:15.028229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.343 qpair failed and we were unable to recover it. 
00:26:24.343 [2024-11-26 21:07:15.028389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.343 [2024-11-26 21:07:15.028417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.343 qpair failed and we were unable to recover it. 00:26:24.343 [2024-11-26 21:07:15.028548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.343 [2024-11-26 21:07:15.028574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.343 qpair failed and we were unable to recover it. 00:26:24.343 [2024-11-26 21:07:15.028671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.343 [2024-11-26 21:07:15.028713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.343 qpair failed and we were unable to recover it. 00:26:24.343 [2024-11-26 21:07:15.028872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.343 [2024-11-26 21:07:15.028902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.343 qpair failed and we were unable to recover it. 00:26:24.343 [2024-11-26 21:07:15.029056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.343 [2024-11-26 21:07:15.029086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.343 qpair failed and we were unable to recover it. 
00:26:24.343 [2024-11-26 21:07:15.029224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.343 [2024-11-26 21:07:15.029251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.343 qpair failed and we were unable to recover it. 00:26:24.343 [2024-11-26 21:07:15.029364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.343 [2024-11-26 21:07:15.029391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.343 qpair failed and we were unable to recover it. 00:26:24.343 [2024-11-26 21:07:15.029526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.343 [2024-11-26 21:07:15.029559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.343 qpair failed and we were unable to recover it. 00:26:24.343 [2024-11-26 21:07:15.029717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.343 [2024-11-26 21:07:15.029762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.343 qpair failed and we were unable to recover it. 00:26:24.343 [2024-11-26 21:07:15.029908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.343 [2024-11-26 21:07:15.029938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.343 qpair failed and we were unable to recover it. 
00:26:24.343 [2024-11-26 21:07:15.030094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.343 [2024-11-26 21:07:15.030120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.343 qpair failed and we were unable to recover it. 00:26:24.343 [2024-11-26 21:07:15.030293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.343 [2024-11-26 21:07:15.030322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.343 qpair failed and we were unable to recover it. 00:26:24.343 [2024-11-26 21:07:15.030504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.343 [2024-11-26 21:07:15.030531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.343 qpair failed and we were unable to recover it. 00:26:24.343 [2024-11-26 21:07:15.030664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.343 [2024-11-26 21:07:15.030696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.343 qpair failed and we were unable to recover it. 00:26:24.343 [2024-11-26 21:07:15.030804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.343 [2024-11-26 21:07:15.030830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.343 qpair failed and we were unable to recover it. 
00:26:24.343 [2024-11-26 21:07:15.030965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.343 [2024-11-26 21:07:15.030992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.343 qpair failed and we were unable to recover it. 00:26:24.343 [2024-11-26 21:07:15.031126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.343 [2024-11-26 21:07:15.031151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.343 qpair failed and we were unable to recover it. 00:26:24.343 [2024-11-26 21:07:15.031286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.343 [2024-11-26 21:07:15.031331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.343 qpair failed and we were unable to recover it. 00:26:24.343 [2024-11-26 21:07:15.031458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.343 [2024-11-26 21:07:15.031487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.343 qpair failed and we were unable to recover it. 00:26:24.343 [2024-11-26 21:07:15.031633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.343 [2024-11-26 21:07:15.031659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.343 qpair failed and we were unable to recover it. 
00:26:24.346 [2024-11-26 21:07:15.054890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.346 [2024-11-26 21:07:15.054917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.346 qpair failed and we were unable to recover it. 00:26:24.346 [2024-11-26 21:07:15.055096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.346 [2024-11-26 21:07:15.055125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.346 qpair failed and we were unable to recover it. 00:26:24.346 [2024-11-26 21:07:15.055250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.346 [2024-11-26 21:07:15.055281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.346 qpair failed and we were unable to recover it. 00:26:24.346 [2024-11-26 21:07:15.055466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.346 [2024-11-26 21:07:15.055493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.346 qpair failed and we were unable to recover it. 00:26:24.346 [2024-11-26 21:07:15.055672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.346 [2024-11-26 21:07:15.055708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.346 qpair failed and we were unable to recover it. 
00:26:24.346 [2024-11-26 21:07:15.055834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.346 [2024-11-26 21:07:15.055864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.346 qpair failed and we were unable to recover it. 00:26:24.346 [2024-11-26 21:07:15.056014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.346 [2024-11-26 21:07:15.056041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.346 qpair failed and we were unable to recover it. 00:26:24.346 [2024-11-26 21:07:15.056176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.346 [2024-11-26 21:07:15.056220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.346 qpair failed and we were unable to recover it. 00:26:24.346 [2024-11-26 21:07:15.056377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.346 [2024-11-26 21:07:15.056408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.346 qpair failed and we were unable to recover it. 00:26:24.346 [2024-11-26 21:07:15.056552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.346 [2024-11-26 21:07:15.056581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.346 qpair failed and we were unable to recover it. 
00:26:24.346 [2024-11-26 21:07:15.056725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.346 [2024-11-26 21:07:15.056769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.346 qpair failed and we were unable to recover it. 00:26:24.346 [2024-11-26 21:07:15.056907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.346 [2024-11-26 21:07:15.056933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.346 qpair failed and we were unable to recover it. 00:26:24.346 [2024-11-26 21:07:15.057092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.346 [2024-11-26 21:07:15.057119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.346 qpair failed and we were unable to recover it. 00:26:24.346 [2024-11-26 21:07:15.057220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.346 [2024-11-26 21:07:15.057269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.346 qpair failed and we were unable to recover it. 00:26:24.346 [2024-11-26 21:07:15.057410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.346 [2024-11-26 21:07:15.057439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.346 qpair failed and we were unable to recover it. 
00:26:24.346 [2024-11-26 21:07:15.057573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.346 [2024-11-26 21:07:15.057600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.346 qpair failed and we were unable to recover it. 00:26:24.346 [2024-11-26 21:07:15.057765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.346 [2024-11-26 21:07:15.057812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.346 qpair failed and we were unable to recover it. 00:26:24.346 [2024-11-26 21:07:15.057960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.346 [2024-11-26 21:07:15.057990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.346 qpair failed and we were unable to recover it. 00:26:24.347 [2024-11-26 21:07:15.058128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.347 [2024-11-26 21:07:15.058155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.347 qpair failed and we were unable to recover it. 00:26:24.347 [2024-11-26 21:07:15.058294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.347 [2024-11-26 21:07:15.058321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.347 qpair failed and we were unable to recover it. 
00:26:24.347 [2024-11-26 21:07:15.058490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.347 [2024-11-26 21:07:15.058519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.347 qpair failed and we were unable to recover it. 00:26:24.347 [2024-11-26 21:07:15.058669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.347 [2024-11-26 21:07:15.058704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.347 qpair failed and we were unable to recover it. 00:26:24.347 [2024-11-26 21:07:15.058865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.347 [2024-11-26 21:07:15.058909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.347 qpair failed and we were unable to recover it. 00:26:24.347 [2024-11-26 21:07:15.059084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.347 [2024-11-26 21:07:15.059113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.347 qpair failed and we were unable to recover it. 00:26:24.347 [2024-11-26 21:07:15.059264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.347 [2024-11-26 21:07:15.059290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.347 qpair failed and we were unable to recover it. 
00:26:24.347 [2024-11-26 21:07:15.059399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.347 [2024-11-26 21:07:15.059427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.347 qpair failed and we were unable to recover it. 00:26:24.347 [2024-11-26 21:07:15.059562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.347 [2024-11-26 21:07:15.059591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.347 qpair failed and we were unable to recover it. 00:26:24.347 [2024-11-26 21:07:15.059760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.347 [2024-11-26 21:07:15.059787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.347 qpair failed and we were unable to recover it. 00:26:24.347 [2024-11-26 21:07:15.059888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.347 [2024-11-26 21:07:15.059915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.347 qpair failed and we were unable to recover it. 00:26:24.347 [2024-11-26 21:07:15.060104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.347 [2024-11-26 21:07:15.060131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.347 qpair failed and we were unable to recover it. 
00:26:24.347 [2024-11-26 21:07:15.060260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.347 [2024-11-26 21:07:15.060286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.347 qpair failed and we were unable to recover it. 00:26:24.347 [2024-11-26 21:07:15.060423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.347 [2024-11-26 21:07:15.060467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.347 qpair failed and we were unable to recover it. 00:26:24.347 [2024-11-26 21:07:15.060585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.347 [2024-11-26 21:07:15.060614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.347 qpair failed and we were unable to recover it. 00:26:24.347 [2024-11-26 21:07:15.060801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.347 [2024-11-26 21:07:15.060828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.347 qpair failed and we were unable to recover it. 00:26:24.347 [2024-11-26 21:07:15.060979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.347 [2024-11-26 21:07:15.061009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.347 qpair failed and we were unable to recover it. 
00:26:24.347 [2024-11-26 21:07:15.061185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.347 [2024-11-26 21:07:15.061215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.347 qpair failed and we were unable to recover it. 00:26:24.347 [2024-11-26 21:07:15.061338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.347 [2024-11-26 21:07:15.061366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.347 qpair failed and we were unable to recover it. 00:26:24.347 [2024-11-26 21:07:15.061524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.347 [2024-11-26 21:07:15.061571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.347 qpair failed and we were unable to recover it. 00:26:24.347 [2024-11-26 21:07:15.061745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.347 [2024-11-26 21:07:15.061775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.347 qpair failed and we were unable to recover it. 00:26:24.347 [2024-11-26 21:07:15.061958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.347 [2024-11-26 21:07:15.061984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.347 qpair failed and we were unable to recover it. 
00:26:24.347 [2024-11-26 21:07:15.062135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.347 [2024-11-26 21:07:15.062164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.347 qpair failed and we were unable to recover it. 00:26:24.347 [2024-11-26 21:07:15.062349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.347 [2024-11-26 21:07:15.062376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.347 qpair failed and we were unable to recover it. 00:26:24.347 [2024-11-26 21:07:15.062484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.347 [2024-11-26 21:07:15.062511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.347 qpair failed and we were unable to recover it. 00:26:24.347 [2024-11-26 21:07:15.062675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.347 [2024-11-26 21:07:15.062726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.347 qpair failed and we were unable to recover it. 00:26:24.347 [2024-11-26 21:07:15.062890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.347 [2024-11-26 21:07:15.062917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.347 qpair failed and we were unable to recover it. 
00:26:24.347 [2024-11-26 21:07:15.063051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.347 [2024-11-26 21:07:15.063077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.347 qpair failed and we were unable to recover it. 00:26:24.347 [2024-11-26 21:07:15.063186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.347 [2024-11-26 21:07:15.063212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.347 qpair failed and we were unable to recover it. 00:26:24.347 [2024-11-26 21:07:15.063415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.347 [2024-11-26 21:07:15.063442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.347 qpair failed and we were unable to recover it. 00:26:24.347 [2024-11-26 21:07:15.063594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.347 [2024-11-26 21:07:15.063623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.347 qpair failed and we were unable to recover it. 00:26:24.347 [2024-11-26 21:07:15.063782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.347 [2024-11-26 21:07:15.063809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.347 qpair failed and we were unable to recover it. 
00:26:24.347 [2024-11-26 21:07:15.063906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.347 [2024-11-26 21:07:15.063932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.347 qpair failed and we were unable to recover it. 00:26:24.347 [2024-11-26 21:07:15.064097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.347 [2024-11-26 21:07:15.064123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.347 qpair failed and we were unable to recover it. 00:26:24.347 [2024-11-26 21:07:15.064256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.347 [2024-11-26 21:07:15.064299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.347 qpair failed and we were unable to recover it. 00:26:24.347 [2024-11-26 21:07:15.064436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.347 [2024-11-26 21:07:15.064465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.347 qpair failed and we were unable to recover it. 00:26:24.347 [2024-11-26 21:07:15.064624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.347 [2024-11-26 21:07:15.064651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.347 qpair failed and we were unable to recover it. 
00:26:24.347 [2024-11-26 21:07:15.064796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.347 [2024-11-26 21:07:15.064823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.347 qpair failed and we were unable to recover it. 00:26:24.347 [2024-11-26 21:07:15.064931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.347 [2024-11-26 21:07:15.064958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.348 qpair failed and we were unable to recover it. 00:26:24.348 [2024-11-26 21:07:15.065093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.348 [2024-11-26 21:07:15.065120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.348 qpair failed and we were unable to recover it. 00:26:24.348 [2024-11-26 21:07:15.065235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.348 [2024-11-26 21:07:15.065278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.348 qpair failed and we were unable to recover it. 00:26:24.348 [2024-11-26 21:07:15.065425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.348 [2024-11-26 21:07:15.065455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.348 qpair failed and we were unable to recover it. 
00:26:24.348 [2024-11-26 21:07:15.065616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.348 [2024-11-26 21:07:15.065642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.348 qpair failed and we were unable to recover it. 00:26:24.348 [2024-11-26 21:07:15.065777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.348 [2024-11-26 21:07:15.065804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.348 qpair failed and we were unable to recover it. 00:26:24.348 [2024-11-26 21:07:15.065932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.348 [2024-11-26 21:07:15.065958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.348 qpair failed and we were unable to recover it. 00:26:24.348 [2024-11-26 21:07:15.066100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.348 [2024-11-26 21:07:15.066126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.348 qpair failed and we were unable to recover it. 00:26:24.348 [2024-11-26 21:07:15.066251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.348 [2024-11-26 21:07:15.066278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.348 qpair failed and we were unable to recover it. 
00:26:24.348 [2024-11-26 21:07:15.066431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.348 [2024-11-26 21:07:15.066461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.348 qpair failed and we were unable to recover it. 00:26:24.348 [2024-11-26 21:07:15.066622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.348 [2024-11-26 21:07:15.066648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.348 qpair failed and we were unable to recover it. 00:26:24.348 [2024-11-26 21:07:15.066794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.348 [2024-11-26 21:07:15.066821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.348 qpair failed and we were unable to recover it. 00:26:24.348 [2024-11-26 21:07:15.066954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.348 [2024-11-26 21:07:15.066983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.348 qpair failed and we were unable to recover it. 00:26:24.348 [2024-11-26 21:07:15.067142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.348 [2024-11-26 21:07:15.067168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.348 qpair failed and we were unable to recover it. 
00:26:24.348 [2024-11-26 21:07:15.067347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.348 [2024-11-26 21:07:15.067377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.348 qpair failed and we were unable to recover it. 00:26:24.348 [2024-11-26 21:07:15.067536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.348 [2024-11-26 21:07:15.067563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.348 qpair failed and we were unable to recover it. 00:26:24.348 [2024-11-26 21:07:15.067698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.348 [2024-11-26 21:07:15.067726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.348 qpair failed and we were unable to recover it. 00:26:24.348 [2024-11-26 21:07:15.067855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.348 [2024-11-26 21:07:15.067882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.348 qpair failed and we were unable to recover it. 00:26:24.348 [2024-11-26 21:07:15.067989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.348 [2024-11-26 21:07:15.068016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.348 qpair failed and we were unable to recover it. 
00:26:24.348 [2024-11-26 21:07:15.068146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.348 [2024-11-26 21:07:15.068173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.348 qpair failed and we were unable to recover it. 00:26:24.348 [2024-11-26 21:07:15.068306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.348 [2024-11-26 21:07:15.068350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.348 qpair failed and we were unable to recover it. 00:26:24.348 [2024-11-26 21:07:15.068490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.348 [2024-11-26 21:07:15.068519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.348 qpair failed and we were unable to recover it. 00:26:24.348 [2024-11-26 21:07:15.068635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.348 [2024-11-26 21:07:15.068679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.348 qpair failed and we were unable to recover it. 00:26:24.348 [2024-11-26 21:07:15.068852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.348 [2024-11-26 21:07:15.068879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.348 qpair failed and we were unable to recover it. 
00:26:24.348 [2024-11-26 21:07:15.069059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.348 [2024-11-26 21:07:15.069089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.348 qpair failed and we were unable to recover it. 00:26:24.348 [2024-11-26 21:07:15.069247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.348 [2024-11-26 21:07:15.069281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.348 qpair failed and we were unable to recover it. 00:26:24.348 [2024-11-26 21:07:15.069390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.348 [2024-11-26 21:07:15.069416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.348 qpair failed and we were unable to recover it. 00:26:24.348 [2024-11-26 21:07:15.069575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.348 [2024-11-26 21:07:15.069601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.348 qpair failed and we were unable to recover it. 00:26:24.348 [2024-11-26 21:07:15.069775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.348 [2024-11-26 21:07:15.069802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.348 qpair failed and we were unable to recover it. 
00:26:24.348 [2024-11-26 21:07:15.069979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.348 [2024-11-26 21:07:15.070008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.348 qpair failed and we were unable to recover it. 00:26:24.348 [2024-11-26 21:07:15.070152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.348 [2024-11-26 21:07:15.070181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.348 qpair failed and we were unable to recover it. 00:26:24.348 [2024-11-26 21:07:15.070360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.348 [2024-11-26 21:07:15.070386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.348 qpair failed and we were unable to recover it. 00:26:24.348 [2024-11-26 21:07:15.070487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.348 [2024-11-26 21:07:15.070530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.348 qpair failed and we were unable to recover it. 00:26:24.348 [2024-11-26 21:07:15.070682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.348 [2024-11-26 21:07:15.070717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.348 qpair failed and we were unable to recover it. 
00:26:24.348 [2024-11-26 21:07:15.070880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.348 [2024-11-26 21:07:15.070906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.348 qpair failed and we were unable to recover it. 00:26:24.348 [2024-11-26 21:07:15.071060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.348 [2024-11-26 21:07:15.071089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.348 qpair failed and we were unable to recover it. 00:26:24.348 [2024-11-26 21:07:15.071229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.348 [2024-11-26 21:07:15.071259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.348 qpair failed and we were unable to recover it. 00:26:24.348 [2024-11-26 21:07:15.071419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.348 [2024-11-26 21:07:15.071445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.348 qpair failed and we were unable to recover it. 00:26:24.348 [2024-11-26 21:07:15.071619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.348 [2024-11-26 21:07:15.071648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.349 qpair failed and we were unable to recover it. 
00:26:24.349 [2024-11-26 21:07:15.071805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.349 [2024-11-26 21:07:15.071833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.349 qpair failed and we were unable to recover it. 00:26:24.349 [2024-11-26 21:07:15.071972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.349 [2024-11-26 21:07:15.071998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.349 qpair failed and we were unable to recover it. 00:26:24.349 [2024-11-26 21:07:15.072136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.349 [2024-11-26 21:07:15.072163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.349 qpair failed and we were unable to recover it. 00:26:24.349 [2024-11-26 21:07:15.072258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.349 [2024-11-26 21:07:15.072284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.349 qpair failed and we were unable to recover it. 00:26:24.349 [2024-11-26 21:07:15.072446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.349 [2024-11-26 21:07:15.072473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.349 qpair failed and we were unable to recover it. 
00:26:24.349 [2024-11-26 21:07:15.072619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.349 [2024-11-26 21:07:15.072650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.349 qpair failed and we were unable to recover it. 00:26:24.349 [2024-11-26 21:07:15.072815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.349 [2024-11-26 21:07:15.072843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.349 qpair failed and we were unable to recover it. 00:26:24.349 [2024-11-26 21:07:15.072970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.349 [2024-11-26 21:07:15.072997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.349 qpair failed and we were unable to recover it. 00:26:24.349 [2024-11-26 21:07:15.073129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.349 [2024-11-26 21:07:15.073172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.349 qpair failed and we were unable to recover it. 00:26:24.349 [2024-11-26 21:07:15.073319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.349 [2024-11-26 21:07:15.073349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.349 qpair failed and we were unable to recover it. 
00:26:24.349 [2024-11-26 21:07:15.073509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.349 [2024-11-26 21:07:15.073535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.349 qpair failed and we were unable to recover it. 00:26:24.349 [2024-11-26 21:07:15.073641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.349 [2024-11-26 21:07:15.073668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.349 qpair failed and we were unable to recover it. 00:26:24.349 [2024-11-26 21:07:15.073811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.349 [2024-11-26 21:07:15.073841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.349 qpair failed and we were unable to recover it. 00:26:24.349 [2024-11-26 21:07:15.074021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.349 [2024-11-26 21:07:15.074048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.349 qpair failed and we were unable to recover it. 00:26:24.349 [2024-11-26 21:07:15.074152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.349 [2024-11-26 21:07:15.074195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.349 qpair failed and we were unable to recover it. 
00:26:24.349 [2024-11-26 21:07:15.074340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.349 [2024-11-26 21:07:15.074370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.349 qpair failed and we were unable to recover it. 00:26:24.349 [2024-11-26 21:07:15.074493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.349 [2024-11-26 21:07:15.074537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.349 qpair failed and we were unable to recover it. 00:26:24.349 [2024-11-26 21:07:15.074730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.349 [2024-11-26 21:07:15.074757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.349 qpair failed and we were unable to recover it. 00:26:24.349 [2024-11-26 21:07:15.074889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.349 [2024-11-26 21:07:15.074915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.349 qpair failed and we were unable to recover it. 00:26:24.349 [2024-11-26 21:07:15.075052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.349 [2024-11-26 21:07:15.075079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.349 qpair failed and we were unable to recover it. 
00:26:24.349 [2024-11-26 21:07:15.075220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.349 [2024-11-26 21:07:15.075246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.349 qpair failed and we were unable to recover it. 00:26:24.349 [2024-11-26 21:07:15.075408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.349 [2024-11-26 21:07:15.075451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.349 qpair failed and we were unable to recover it. 00:26:24.349 [2024-11-26 21:07:15.075600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.349 [2024-11-26 21:07:15.075626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.349 qpair failed and we were unable to recover it. 00:26:24.349 [2024-11-26 21:07:15.075729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.349 [2024-11-26 21:07:15.075756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.349 qpair failed and we were unable to recover it. 00:26:24.349 [2024-11-26 21:07:15.075894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.349 [2024-11-26 21:07:15.075923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.349 qpair failed and we were unable to recover it. 
00:26:24.349 [2024-11-26 21:07:15.076077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.349 [2024-11-26 21:07:15.076103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.349 qpair failed and we were unable to recover it. 00:26:24.349 [2024-11-26 21:07:15.076242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.349 [2024-11-26 21:07:15.076268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.349 qpair failed and we were unable to recover it. 00:26:24.349 [2024-11-26 21:07:15.076404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.349 [2024-11-26 21:07:15.076431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.349 qpair failed and we were unable to recover it. 00:26:24.349 [2024-11-26 21:07:15.076630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.349 [2024-11-26 21:07:15.076657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.349 qpair failed and we were unable to recover it. 00:26:24.349 [2024-11-26 21:07:15.076830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.349 [2024-11-26 21:07:15.076860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.349 qpair failed and we were unable to recover it. 
00:26:24.349 [2024-11-26 21:07:15.077009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.349 [2024-11-26 21:07:15.077038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.349 qpair failed and we were unable to recover it. 00:26:24.349 [2024-11-26 21:07:15.077224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.349 [2024-11-26 21:07:15.077251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.349 qpair failed and we were unable to recover it. 00:26:24.349 [2024-11-26 21:07:15.077401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.349 [2024-11-26 21:07:15.077430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.349 qpair failed and we were unable to recover it. 00:26:24.349 [2024-11-26 21:07:15.077603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.349 [2024-11-26 21:07:15.077632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.349 qpair failed and we were unable to recover it. 00:26:24.349 [2024-11-26 21:07:15.077779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.349 [2024-11-26 21:07:15.077806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.349 qpair failed and we were unable to recover it. 
00:26:24.349 [2024-11-26 21:07:15.077935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.349 [2024-11-26 21:07:15.077961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.349 qpair failed and we were unable to recover it. 00:26:24.349 [2024-11-26 21:07:15.078118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.349 [2024-11-26 21:07:15.078147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.349 qpair failed and we were unable to recover it. 00:26:24.349 [2024-11-26 21:07:15.078305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.349 [2024-11-26 21:07:15.078331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.349 qpair failed and we were unable to recover it. 00:26:24.350 [2024-11-26 21:07:15.078436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.350 [2024-11-26 21:07:15.078463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.350 qpair failed and we were unable to recover it. 00:26:24.350 [2024-11-26 21:07:15.078596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.350 [2024-11-26 21:07:15.078623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.350 qpair failed and we were unable to recover it. 
00:26:24.350 [2024-11-26 21:07:15.078781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.350 [2024-11-26 21:07:15.078808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.350 qpair failed and we were unable to recover it. 00:26:24.350 [2024-11-26 21:07:15.078962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.350 [2024-11-26 21:07:15.078992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.350 qpair failed and we were unable to recover it. 00:26:24.350 [2024-11-26 21:07:15.079136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.350 [2024-11-26 21:07:15.079166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.350 qpair failed and we were unable to recover it. 00:26:24.350 [2024-11-26 21:07:15.079321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.350 [2024-11-26 21:07:15.079347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.350 qpair failed and we were unable to recover it. 00:26:24.350 [2024-11-26 21:07:15.079509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.350 [2024-11-26 21:07:15.079536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.350 qpair failed and we were unable to recover it. 
00:26:24.350 [2024-11-26 21:07:15.079666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.350 [2024-11-26 21:07:15.079698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.350 qpair failed and we were unable to recover it. 00:26:24.350 [2024-11-26 21:07:15.079873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.350 [2024-11-26 21:07:15.079899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.350 qpair failed and we were unable to recover it. 00:26:24.350 [2024-11-26 21:07:15.080038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.350 [2024-11-26 21:07:15.080080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.350 qpair failed and we were unable to recover it. 00:26:24.350 [2024-11-26 21:07:15.080263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.350 [2024-11-26 21:07:15.080293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.350 qpair failed and we were unable to recover it. 00:26:24.350 [2024-11-26 21:07:15.080449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.350 [2024-11-26 21:07:15.080476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.350 qpair failed and we were unable to recover it. 
00:26:24.350 [2024-11-26 21:07:15.080661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.350 [2024-11-26 21:07:15.080710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.350 qpair failed and we were unable to recover it. 00:26:24.350 [2024-11-26 21:07:15.080899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.350 [2024-11-26 21:07:15.080925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.350 qpair failed and we were unable to recover it. 00:26:24.350 [2024-11-26 21:07:15.081085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.350 [2024-11-26 21:07:15.081111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.350 qpair failed and we were unable to recover it. 00:26:24.350 [2024-11-26 21:07:15.081220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.350 [2024-11-26 21:07:15.081264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.350 qpair failed and we were unable to recover it. 00:26:24.350 [2024-11-26 21:07:15.081411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.350 [2024-11-26 21:07:15.081445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.350 qpair failed and we were unable to recover it. 
00:26:24.350 [2024-11-26 21:07:15.081617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.350 [2024-11-26 21:07:15.081647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.350 qpair failed and we were unable to recover it. 00:26:24.350 [2024-11-26 21:07:15.081836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.350 [2024-11-26 21:07:15.081863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.350 qpair failed and we were unable to recover it. 00:26:24.350 [2024-11-26 21:07:15.082017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.350 [2024-11-26 21:07:15.082047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.350 qpair failed and we were unable to recover it. 00:26:24.350 [2024-11-26 21:07:15.082201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.350 [2024-11-26 21:07:15.082228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.350 qpair failed and we were unable to recover it. 00:26:24.350 [2024-11-26 21:07:15.082363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.350 [2024-11-26 21:07:15.082389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.350 qpair failed and we were unable to recover it. 
00:26:24.350 [2024-11-26 21:07:15.082574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.350 [2024-11-26 21:07:15.082604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.350 qpair failed and we were unable to recover it. 00:26:24.350 [2024-11-26 21:07:15.082739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.350 [2024-11-26 21:07:15.082767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.350 qpair failed and we were unable to recover it. 00:26:24.350 [2024-11-26 21:07:15.082906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.350 [2024-11-26 21:07:15.082948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.350 qpair failed and we were unable to recover it. 00:26:24.350 [2024-11-26 21:07:15.083095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.350 [2024-11-26 21:07:15.083124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.350 qpair failed and we were unable to recover it. 00:26:24.350 [2024-11-26 21:07:15.083279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.350 [2024-11-26 21:07:15.083310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.350 qpair failed and we were unable to recover it. 
00:26:24.350 [2024-11-26 21:07:15.083488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.350 [2024-11-26 21:07:15.083517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.350 qpair failed and we were unable to recover it. 00:26:24.350 [2024-11-26 21:07:15.083699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.350 [2024-11-26 21:07:15.083726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.350 qpair failed and we were unable to recover it. 00:26:24.350 [2024-11-26 21:07:15.083890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.350 [2024-11-26 21:07:15.083916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.350 qpair failed and we were unable to recover it. 00:26:24.350 [2024-11-26 21:07:15.084097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.350 [2024-11-26 21:07:15.084126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.350 qpair failed and we were unable to recover it. 00:26:24.350 [2024-11-26 21:07:15.084279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.350 [2024-11-26 21:07:15.084308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.350 qpair failed and we were unable to recover it. 
00:26:24.350 [2024-11-26 21:07:15.084494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.350 [2024-11-26 21:07:15.084521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.350 qpair failed and we were unable to recover it. 00:26:24.350 [2024-11-26 21:07:15.084652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.350 [2024-11-26 21:07:15.084704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.350 qpair failed and we were unable to recover it. 00:26:24.351 [2024-11-26 21:07:15.084848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.351 [2024-11-26 21:07:15.084878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.351 qpair failed and we were unable to recover it. 00:26:24.351 [2024-11-26 21:07:15.085029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.351 [2024-11-26 21:07:15.085056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.351 qpair failed and we were unable to recover it. 00:26:24.351 [2024-11-26 21:07:15.085192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.351 [2024-11-26 21:07:15.085218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.351 qpair failed and we were unable to recover it. 
00:26:24.351 [2024-11-26 21:07:15.085324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.351 [2024-11-26 21:07:15.085351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.351 qpair failed and we were unable to recover it. 00:26:24.351 [2024-11-26 21:07:15.085510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.351 [2024-11-26 21:07:15.085536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.351 qpair failed and we were unable to recover it. 00:26:24.351 [2024-11-26 21:07:15.085661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.351 [2024-11-26 21:07:15.085697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.351 qpair failed and we were unable to recover it. 00:26:24.351 [2024-11-26 21:07:15.085872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.351 [2024-11-26 21:07:15.085902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.351 qpair failed and we were unable to recover it. 00:26:24.351 [2024-11-26 21:07:15.086058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.351 [2024-11-26 21:07:15.086085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.351 qpair failed and we were unable to recover it. 
00:26:24.351 [2024-11-26 21:07:15.086263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.351 [2024-11-26 21:07:15.086292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.351 qpair failed and we were unable to recover it. 00:26:24.351 [2024-11-26 21:07:15.086465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.351 [2024-11-26 21:07:15.086495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.351 qpair failed and we were unable to recover it. 00:26:24.351 [2024-11-26 21:07:15.086652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.351 [2024-11-26 21:07:15.086678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.351 qpair failed and we were unable to recover it. 00:26:24.351 [2024-11-26 21:07:15.086815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.351 [2024-11-26 21:07:15.086858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.351 qpair failed and we were unable to recover it. 00:26:24.351 [2024-11-26 21:07:15.086978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.351 [2024-11-26 21:07:15.087006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.351 qpair failed and we were unable to recover it. 
00:26:24.351 [2024-11-26 21:07:15.087142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.351 [2024-11-26 21:07:15.087169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.351 qpair failed and we were unable to recover it.
00:26:24.351 [2024-11-26 21:07:15.087275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.351 [2024-11-26 21:07:15.087301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.351 qpair failed and we were unable to recover it.
00:26:24.351 [2024-11-26 21:07:15.087435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.351 [2024-11-26 21:07:15.087461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.351 qpair failed and we were unable to recover it.
00:26:24.351 [2024-11-26 21:07:15.087562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.351 [2024-11-26 21:07:15.087589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.351 qpair failed and we were unable to recover it.
00:26:24.351 [2024-11-26 21:07:15.087718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.351 [2024-11-26 21:07:15.087745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.351 qpair failed and we were unable to recover it.
00:26:24.351 [2024-11-26 21:07:15.087911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.351 [2024-11-26 21:07:15.087940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.351 qpair failed and we were unable to recover it.
00:26:24.351 [2024-11-26 21:07:15.088068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.351 [2024-11-26 21:07:15.088095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.351 qpair failed and we were unable to recover it.
00:26:24.351 [2024-11-26 21:07:15.088228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.351 [2024-11-26 21:07:15.088255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.351 qpair failed and we were unable to recover it.
00:26:24.351 [2024-11-26 21:07:15.088434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.351 [2024-11-26 21:07:15.088463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.351 qpair failed and we were unable to recover it.
00:26:24.351 [2024-11-26 21:07:15.088617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.351 [2024-11-26 21:07:15.088647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.351 qpair failed and we were unable to recover it.
00:26:24.351 [2024-11-26 21:07:15.088823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.351 [2024-11-26 21:07:15.088855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.351 qpair failed and we were unable to recover it.
00:26:24.351 [2024-11-26 21:07:15.088951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.351 [2024-11-26 21:07:15.088996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.351 qpair failed and we were unable to recover it.
00:26:24.351 [2024-11-26 21:07:15.089116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.351 [2024-11-26 21:07:15.089143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.351 qpair failed and we were unable to recover it.
00:26:24.351 [2024-11-26 21:07:15.089277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.351 [2024-11-26 21:07:15.089304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.351 qpair failed and we were unable to recover it.
00:26:24.351 [2024-11-26 21:07:15.089465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.351 [2024-11-26 21:07:15.089495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.351 qpair failed and we were unable to recover it.
00:26:24.351 [2024-11-26 21:07:15.089674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.351 [2024-11-26 21:07:15.089709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.351 qpair failed and we were unable to recover it.
00:26:24.351 [2024-11-26 21:07:15.089891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.351 [2024-11-26 21:07:15.089920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.351 qpair failed and we were unable to recover it.
00:26:24.351 [2024-11-26 21:07:15.090060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.351 [2024-11-26 21:07:15.090089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.351 qpair failed and we were unable to recover it.
00:26:24.351 [2024-11-26 21:07:15.090273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.351 [2024-11-26 21:07:15.090299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.351 qpair failed and we were unable to recover it.
00:26:24.351 [2024-11-26 21:07:15.090448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.351 [2024-11-26 21:07:15.090477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.351 qpair failed and we were unable to recover it.
00:26:24.351 [2024-11-26 21:07:15.090649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.351 [2024-11-26 21:07:15.090678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.351 qpair failed and we were unable to recover it.
00:26:24.351 [2024-11-26 21:07:15.090819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.351 [2024-11-26 21:07:15.090846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.351 qpair failed and we were unable to recover it.
00:26:24.351 [2024-11-26 21:07:15.091007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.351 [2024-11-26 21:07:15.091049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.351 qpair failed and we were unable to recover it.
00:26:24.351 [2024-11-26 21:07:15.091197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.351 [2024-11-26 21:07:15.091226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.351 qpair failed and we were unable to recover it.
00:26:24.351 [2024-11-26 21:07:15.091366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.351 [2024-11-26 21:07:15.091393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.351 qpair failed and we were unable to recover it.
00:26:24.351 [2024-11-26 21:07:15.091504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.352 [2024-11-26 21:07:15.091531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.352 qpair failed and we were unable to recover it.
00:26:24.352 [2024-11-26 21:07:15.091669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.352 [2024-11-26 21:07:15.091703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.352 qpair failed and we were unable to recover it.
00:26:24.352 [2024-11-26 21:07:15.091820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.352 [2024-11-26 21:07:15.091847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.352 qpair failed and we were unable to recover it.
00:26:24.352 [2024-11-26 21:07:15.091952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.352 [2024-11-26 21:07:15.091979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.352 qpair failed and we were unable to recover it.
00:26:24.352 [2024-11-26 21:07:15.092083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.352 [2024-11-26 21:07:15.092110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.352 qpair failed and we were unable to recover it.
00:26:24.352 [2024-11-26 21:07:15.092217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.352 [2024-11-26 21:07:15.092243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.352 qpair failed and we were unable to recover it.
00:26:24.352 [2024-11-26 21:07:15.092349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.352 [2024-11-26 21:07:15.092375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.352 qpair failed and we were unable to recover it.
00:26:24.352 [2024-11-26 21:07:15.092511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.352 [2024-11-26 21:07:15.092538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.352 qpair failed and we were unable to recover it.
00:26:24.352 [2024-11-26 21:07:15.092733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.352 [2024-11-26 21:07:15.092761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.352 qpair failed and we were unable to recover it.
00:26:24.352 [2024-11-26 21:07:15.092942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.352 [2024-11-26 21:07:15.092972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.352 qpair failed and we were unable to recover it.
00:26:24.352 [2024-11-26 21:07:15.093121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.352 [2024-11-26 21:07:15.093150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.352 qpair failed and we were unable to recover it.
00:26:24.352 [2024-11-26 21:07:15.093341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.352 [2024-11-26 21:07:15.093367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.352 qpair failed and we were unable to recover it.
00:26:24.352 [2024-11-26 21:07:15.093475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.352 [2024-11-26 21:07:15.093522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.352 qpair failed and we were unable to recover it.
00:26:24.352 [2024-11-26 21:07:15.093699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.352 [2024-11-26 21:07:15.093729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.352 qpair failed and we were unable to recover it.
00:26:24.352 [2024-11-26 21:07:15.093858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.352 [2024-11-26 21:07:15.093884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.352 qpair failed and we were unable to recover it.
00:26:24.352 [2024-11-26 21:07:15.093996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.352 [2024-11-26 21:07:15.094023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.352 qpair failed and we were unable to recover it.
00:26:24.352 [2024-11-26 21:07:15.094180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.352 [2024-11-26 21:07:15.094210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.352 qpair failed and we were unable to recover it.
00:26:24.352 [2024-11-26 21:07:15.094332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.352 [2024-11-26 21:07:15.094359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.352 qpair failed and we were unable to recover it.
00:26:24.352 [2024-11-26 21:07:15.094456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.352 [2024-11-26 21:07:15.094482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.352 qpair failed and we were unable to recover it.
00:26:24.352 [2024-11-26 21:07:15.094635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.352 [2024-11-26 21:07:15.094664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.352 qpair failed and we were unable to recover it.
00:26:24.352 [2024-11-26 21:07:15.094853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.352 [2024-11-26 21:07:15.094880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.352 qpair failed and we were unable to recover it.
00:26:24.352 [2024-11-26 21:07:15.095008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.352 [2024-11-26 21:07:15.095051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.352 qpair failed and we were unable to recover it.
00:26:24.352 [2024-11-26 21:07:15.095200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.352 [2024-11-26 21:07:15.095229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.352 qpair failed and we were unable to recover it.
00:26:24.352 [2024-11-26 21:07:15.095364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.352 [2024-11-26 21:07:15.095391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.352 qpair failed and we were unable to recover it.
00:26:24.352 [2024-11-26 21:07:15.095554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.352 [2024-11-26 21:07:15.095580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.352 qpair failed and we were unable to recover it.
00:26:24.352 [2024-11-26 21:07:15.095707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.352 [2024-11-26 21:07:15.095751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.352 qpair failed and we were unable to recover it.
00:26:24.352 [2024-11-26 21:07:15.095891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.352 [2024-11-26 21:07:15.095918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.352 qpair failed and we were unable to recover it.
00:26:24.352 [2024-11-26 21:07:15.096022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.352 [2024-11-26 21:07:15.096049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.352 qpair failed and we were unable to recover it.
00:26:24.352 [2024-11-26 21:07:15.096184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.352 [2024-11-26 21:07:15.096210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.352 qpair failed and we were unable to recover it.
00:26:24.352 [2024-11-26 21:07:15.096347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.352 [2024-11-26 21:07:15.096374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.352 qpair failed and we were unable to recover it.
00:26:24.352 [2024-11-26 21:07:15.096501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.352 [2024-11-26 21:07:15.096543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.352 qpair failed and we were unable to recover it.
00:26:24.352 [2024-11-26 21:07:15.096700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.352 [2024-11-26 21:07:15.096729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.352 qpair failed and we were unable to recover it.
00:26:24.352 [2024-11-26 21:07:15.096887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.352 [2024-11-26 21:07:15.096915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.352 qpair failed and we were unable to recover it.
00:26:24.352 [2024-11-26 21:07:15.097053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.352 [2024-11-26 21:07:15.097097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.352 qpair failed and we were unable to recover it.
00:26:24.352 [2024-11-26 21:07:15.097243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.352 [2024-11-26 21:07:15.097272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.352 qpair failed and we were unable to recover it.
00:26:24.352 [2024-11-26 21:07:15.097424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.352 [2024-11-26 21:07:15.097450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.352 qpair failed and we were unable to recover it.
00:26:24.352 [2024-11-26 21:07:15.097609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.352 [2024-11-26 21:07:15.097638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.352 qpair failed and we were unable to recover it.
00:26:24.352 [2024-11-26 21:07:15.097788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.352 [2024-11-26 21:07:15.097815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.352 qpair failed and we were unable to recover it.
00:26:24.352 [2024-11-26 21:07:15.097974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.352 [2024-11-26 21:07:15.098000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.352 qpair failed and we were unable to recover it.
00:26:24.353 [2024-11-26 21:07:15.098147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.353 [2024-11-26 21:07:15.098176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.353 qpair failed and we were unable to recover it.
00:26:24.353 [2024-11-26 21:07:15.098351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.353 [2024-11-26 21:07:15.098380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.353 qpair failed and we were unable to recover it.
00:26:24.353 [2024-11-26 21:07:15.098555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.353 [2024-11-26 21:07:15.098581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.353 qpair failed and we were unable to recover it.
00:26:24.353 [2024-11-26 21:07:15.098698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.353 [2024-11-26 21:07:15.098743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.353 qpair failed and we were unable to recover it.
00:26:24.353 [2024-11-26 21:07:15.098886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.353 [2024-11-26 21:07:15.098915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.353 qpair failed and we were unable to recover it.
00:26:24.353 [2024-11-26 21:07:15.099102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.353 [2024-11-26 21:07:15.099128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.353 qpair failed and we were unable to recover it.
00:26:24.353 [2024-11-26 21:07:15.099281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.353 [2024-11-26 21:07:15.099311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.353 qpair failed and we were unable to recover it.
00:26:24.353 [2024-11-26 21:07:15.099458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.353 [2024-11-26 21:07:15.099488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.353 qpair failed and we were unable to recover it.
00:26:24.353 [2024-11-26 21:07:15.099617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.353 [2024-11-26 21:07:15.099643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.353 qpair failed and we were unable to recover it.
00:26:24.353 [2024-11-26 21:07:15.099798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.353 [2024-11-26 21:07:15.099825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.353 qpair failed and we were unable to recover it.
00:26:24.353 [2024-11-26 21:07:15.099990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.353 [2024-11-26 21:07:15.100020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.353 qpair failed and we were unable to recover it.
00:26:24.353 [2024-11-26 21:07:15.100166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.353 [2024-11-26 21:07:15.100193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.353 qpair failed and we were unable to recover it.
00:26:24.353 [2024-11-26 21:07:15.100295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.353 [2024-11-26 21:07:15.100321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.353 qpair failed and we were unable to recover it.
00:26:24.353 [2024-11-26 21:07:15.100449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.353 [2024-11-26 21:07:15.100475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.353 qpair failed and we were unable to recover it.
00:26:24.353 [2024-11-26 21:07:15.100601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.353 [2024-11-26 21:07:15.100633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.353 qpair failed and we were unable to recover it.
00:26:24.353 [2024-11-26 21:07:15.100783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.353 [2024-11-26 21:07:15.100813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.353 qpair failed and we were unable to recover it.
00:26:24.353 [2024-11-26 21:07:15.100963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.353 [2024-11-26 21:07:15.100993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.353 qpair failed and we were unable to recover it.
00:26:24.353 [2024-11-26 21:07:15.101141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.353 [2024-11-26 21:07:15.101167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.353 qpair failed and we were unable to recover it.
00:26:24.353 [2024-11-26 21:07:15.101283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.353 [2024-11-26 21:07:15.101310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.353 qpair failed and we were unable to recover it.
00:26:24.353 [2024-11-26 21:07:15.101444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.353 [2024-11-26 21:07:15.101470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.353 qpair failed and we were unable to recover it.
00:26:24.353 [2024-11-26 21:07:15.101664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.353 [2024-11-26 21:07:15.101709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.353 qpair failed and we were unable to recover it.
00:26:24.353 [2024-11-26 21:07:15.101865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.353 [2024-11-26 21:07:15.101891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.353 qpair failed and we were unable to recover it.
00:26:24.353 [2024-11-26 21:07:15.101992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.353 [2024-11-26 21:07:15.102018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.353 qpair failed and we were unable to recover it.
00:26:24.353 [2024-11-26 21:07:15.102157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.353 [2024-11-26 21:07:15.102183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.353 qpair failed and we were unable to recover it.
00:26:24.353 [2024-11-26 21:07:15.102303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.353 [2024-11-26 21:07:15.102347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.353 qpair failed and we were unable to recover it.
00:26:24.353 [2024-11-26 21:07:15.102520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.353 [2024-11-26 21:07:15.102549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.353 qpair failed and we were unable to recover it.
00:26:24.353 [2024-11-26 21:07:15.102680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.353 [2024-11-26 21:07:15.102713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.353 qpair failed and we were unable to recover it.
00:26:24.353 [2024-11-26 21:07:15.102860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.353 [2024-11-26 21:07:15.102902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.353 qpair failed and we were unable to recover it.
00:26:24.353 [2024-11-26 21:07:15.103057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.353 [2024-11-26 21:07:15.103087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.353 qpair failed and we were unable to recover it.
00:26:24.353 [2024-11-26 21:07:15.103205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.353 [2024-11-26 21:07:15.103232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.353 qpair failed and we were unable to recover it.
00:26:24.353 [2024-11-26 21:07:15.103368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.353 [2024-11-26 21:07:15.103395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.353 qpair failed and we were unable to recover it.
00:26:24.353 [2024-11-26 21:07:15.103562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.353 [2024-11-26 21:07:15.103591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.353 qpair failed and we were unable to recover it. 00:26:24.353 [2024-11-26 21:07:15.103742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.353 [2024-11-26 21:07:15.103770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.353 qpair failed and we were unable to recover it. 00:26:24.353 [2024-11-26 21:07:15.103881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.353 [2024-11-26 21:07:15.103907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.353 qpair failed and we were unable to recover it. 00:26:24.353 [2024-11-26 21:07:15.104062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.353 [2024-11-26 21:07:15.104091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.353 qpair failed and we were unable to recover it. 00:26:24.353 [2024-11-26 21:07:15.104244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.353 [2024-11-26 21:07:15.104271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.353 qpair failed and we were unable to recover it. 
00:26:24.353 [2024-11-26 21:07:15.104405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.353 [2024-11-26 21:07:15.104431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.353 qpair failed and we were unable to recover it. 00:26:24.353 [2024-11-26 21:07:15.104571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.353 [2024-11-26 21:07:15.104597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.353 qpair failed and we were unable to recover it. 00:26:24.353 [2024-11-26 21:07:15.104741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.353 [2024-11-26 21:07:15.104768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.354 qpair failed and we were unable to recover it. 00:26:24.354 [2024-11-26 21:07:15.104878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.354 [2024-11-26 21:07:15.104905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.354 qpair failed and we were unable to recover it. 00:26:24.354 [2024-11-26 21:07:15.105002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.354 [2024-11-26 21:07:15.105028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.354 qpair failed and we were unable to recover it. 
00:26:24.354 [2024-11-26 21:07:15.105167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.354 [2024-11-26 21:07:15.105200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.354 qpair failed and we were unable to recover it. 00:26:24.354 [2024-11-26 21:07:15.105387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.354 [2024-11-26 21:07:15.105417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.354 qpair failed and we were unable to recover it. 00:26:24.354 [2024-11-26 21:07:15.105542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.354 [2024-11-26 21:07:15.105572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.354 qpair failed and we were unable to recover it. 00:26:24.354 [2024-11-26 21:07:15.105727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.354 [2024-11-26 21:07:15.105755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.354 qpair failed and we were unable to recover it. 00:26:24.354 [2024-11-26 21:07:15.105892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.354 [2024-11-26 21:07:15.105936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.354 qpair failed and we were unable to recover it. 
00:26:24.354 [2024-11-26 21:07:15.106101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.354 [2024-11-26 21:07:15.106127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.354 qpair failed and we were unable to recover it. 00:26:24.354 [2024-11-26 21:07:15.106254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.354 [2024-11-26 21:07:15.106281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.354 qpair failed and we were unable to recover it. 00:26:24.354 [2024-11-26 21:07:15.106394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.354 [2024-11-26 21:07:15.106421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.354 qpair failed and we were unable to recover it. 00:26:24.354 [2024-11-26 21:07:15.106555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.354 [2024-11-26 21:07:15.106584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.354 qpair failed and we were unable to recover it. 00:26:24.354 [2024-11-26 21:07:15.106768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.354 [2024-11-26 21:07:15.106795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.354 qpair failed and we were unable to recover it. 
00:26:24.354 [2024-11-26 21:07:15.106891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.354 [2024-11-26 21:07:15.106932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.354 qpair failed and we were unable to recover it. 00:26:24.354 [2024-11-26 21:07:15.107107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.354 [2024-11-26 21:07:15.107136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.354 qpair failed and we were unable to recover it. 00:26:24.354 [2024-11-26 21:07:15.107267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.354 [2024-11-26 21:07:15.107294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.354 qpair failed and we were unable to recover it. 00:26:24.354 [2024-11-26 21:07:15.107432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.354 [2024-11-26 21:07:15.107459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.354 qpair failed and we were unable to recover it. 00:26:24.354 [2024-11-26 21:07:15.107659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.354 [2024-11-26 21:07:15.107693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.354 qpair failed and we were unable to recover it. 
00:26:24.354 [2024-11-26 21:07:15.107826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.354 [2024-11-26 21:07:15.107853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.354 qpair failed and we were unable to recover it. 00:26:24.354 [2024-11-26 21:07:15.107964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.354 [2024-11-26 21:07:15.107991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.354 qpair failed and we were unable to recover it. 00:26:24.354 [2024-11-26 21:07:15.108186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.354 [2024-11-26 21:07:15.108212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.354 qpair failed and we were unable to recover it. 00:26:24.354 [2024-11-26 21:07:15.108368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.354 [2024-11-26 21:07:15.108394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.354 qpair failed and we were unable to recover it. 00:26:24.354 [2024-11-26 21:07:15.108537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.354 [2024-11-26 21:07:15.108566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.354 qpair failed and we were unable to recover it. 
00:26:24.354 [2024-11-26 21:07:15.108701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.354 [2024-11-26 21:07:15.108745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.354 qpair failed and we were unable to recover it. 00:26:24.354 [2024-11-26 21:07:15.108886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.354 [2024-11-26 21:07:15.108912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.354 qpair failed and we were unable to recover it. 00:26:24.354 [2024-11-26 21:07:15.109043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.354 [2024-11-26 21:07:15.109087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.354 qpair failed and we were unable to recover it. 00:26:24.354 [2024-11-26 21:07:15.109231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.354 [2024-11-26 21:07:15.109260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.354 qpair failed and we were unable to recover it. 00:26:24.354 [2024-11-26 21:07:15.109397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.354 [2024-11-26 21:07:15.109424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.354 qpair failed and we were unable to recover it. 
00:26:24.354 [2024-11-26 21:07:15.109563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.354 [2024-11-26 21:07:15.109589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.354 qpair failed and we were unable to recover it. 00:26:24.354 [2024-11-26 21:07:15.109713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.354 [2024-11-26 21:07:15.109740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.354 qpair failed and we were unable to recover it. 00:26:24.354 [2024-11-26 21:07:15.109875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.354 [2024-11-26 21:07:15.109902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.354 qpair failed and we were unable to recover it. 00:26:24.354 [2024-11-26 21:07:15.110058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.354 [2024-11-26 21:07:15.110088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.354 qpair failed and we were unable to recover it. 00:26:24.354 [2024-11-26 21:07:15.110260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.354 [2024-11-26 21:07:15.110289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.354 qpair failed and we were unable to recover it. 
00:26:24.354 [2024-11-26 21:07:15.110446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.354 [2024-11-26 21:07:15.110473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.354 qpair failed and we were unable to recover it. 00:26:24.354 [2024-11-26 21:07:15.110603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.354 [2024-11-26 21:07:15.110646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.355 qpair failed and we were unable to recover it. 00:26:24.355 [2024-11-26 21:07:15.110845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.355 [2024-11-26 21:07:15.110872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.355 qpair failed and we were unable to recover it. 00:26:24.355 [2024-11-26 21:07:15.111000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.355 [2024-11-26 21:07:15.111026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.355 qpair failed and we were unable to recover it. 00:26:24.355 [2024-11-26 21:07:15.111158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.355 [2024-11-26 21:07:15.111205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.355 qpair failed and we were unable to recover it. 
00:26:24.355 [2024-11-26 21:07:15.111356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.355 [2024-11-26 21:07:15.111385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.355 qpair failed and we were unable to recover it. 00:26:24.355 [2024-11-26 21:07:15.111571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.355 [2024-11-26 21:07:15.111597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.355 qpair failed and we were unable to recover it. 00:26:24.355 [2024-11-26 21:07:15.111710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.355 [2024-11-26 21:07:15.111763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.355 qpair failed and we were unable to recover it. 00:26:24.355 [2024-11-26 21:07:15.111914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.355 [2024-11-26 21:07:15.111944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.355 qpair failed and we were unable to recover it. 00:26:24.355 [2024-11-26 21:07:15.112102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.355 [2024-11-26 21:07:15.112128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.355 qpair failed and we were unable to recover it. 
00:26:24.355 [2024-11-26 21:07:15.112266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.355 [2024-11-26 21:07:15.112292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.355 qpair failed and we were unable to recover it. 00:26:24.355 [2024-11-26 21:07:15.112402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.355 [2024-11-26 21:07:15.112433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.355 qpair failed and we were unable to recover it. 00:26:24.355 [2024-11-26 21:07:15.112587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.355 [2024-11-26 21:07:15.112613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.355 qpair failed and we were unable to recover it. 00:26:24.355 [2024-11-26 21:07:15.112728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.355 [2024-11-26 21:07:15.112755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.355 qpair failed and we were unable to recover it. 00:26:24.355 [2024-11-26 21:07:15.112857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.355 [2024-11-26 21:07:15.112884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.355 qpair failed and we were unable to recover it. 
00:26:24.355 [2024-11-26 21:07:15.113042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.355 [2024-11-26 21:07:15.113068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.355 qpair failed and we were unable to recover it. 00:26:24.355 [2024-11-26 21:07:15.113218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.355 [2024-11-26 21:07:15.113247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.355 qpair failed and we were unable to recover it. 00:26:24.355 [2024-11-26 21:07:15.113404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.355 [2024-11-26 21:07:15.113431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.355 qpair failed and we were unable to recover it. 00:26:24.355 [2024-11-26 21:07:15.113613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.355 [2024-11-26 21:07:15.113642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.355 qpair failed and we were unable to recover it. 00:26:24.355 [2024-11-26 21:07:15.113773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.355 [2024-11-26 21:07:15.113800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.355 qpair failed and we were unable to recover it. 
00:26:24.355 [2024-11-26 21:07:15.113904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.355 [2024-11-26 21:07:15.113930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.355 qpair failed and we were unable to recover it. 00:26:24.355 [2024-11-26 21:07:15.114040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.355 [2024-11-26 21:07:15.114066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.355 qpair failed and we were unable to recover it. 00:26:24.355 [2024-11-26 21:07:15.114175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.355 [2024-11-26 21:07:15.114202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.355 qpair failed and we were unable to recover it. 00:26:24.355 [2024-11-26 21:07:15.114325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.355 [2024-11-26 21:07:15.114354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.355 qpair failed and we were unable to recover it. 00:26:24.355 [2024-11-26 21:07:15.114494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.355 [2024-11-26 21:07:15.114520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.355 qpair failed and we were unable to recover it. 
00:26:24.355 [2024-11-26 21:07:15.114693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.355 [2024-11-26 21:07:15.114720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.355 qpair failed and we were unable to recover it. 00:26:24.355 [2024-11-26 21:07:15.114861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.355 [2024-11-26 21:07:15.114891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.355 qpair failed and we were unable to recover it. 00:26:24.355 [2024-11-26 21:07:15.115044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.355 [2024-11-26 21:07:15.115070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.355 qpair failed and we were unable to recover it. 00:26:24.355 [2024-11-26 21:07:15.115192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.355 [2024-11-26 21:07:15.115235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.355 qpair failed and we were unable to recover it. 00:26:24.355 [2024-11-26 21:07:15.115390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.355 [2024-11-26 21:07:15.115416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.355 qpair failed and we were unable to recover it. 
00:26:24.355 [2024-11-26 21:07:15.115543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.355 [2024-11-26 21:07:15.115569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.355 qpair failed and we were unable to recover it. 00:26:24.355 [2024-11-26 21:07:15.115679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.355 [2024-11-26 21:07:15.115714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.355 qpair failed and we were unable to recover it. 00:26:24.355 [2024-11-26 21:07:15.115874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.355 [2024-11-26 21:07:15.115903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.355 qpair failed and we were unable to recover it. 00:26:24.355 [2024-11-26 21:07:15.116050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.355 [2024-11-26 21:07:15.116077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.355 qpair failed and we were unable to recover it. 00:26:24.355 [2024-11-26 21:07:15.116258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.355 [2024-11-26 21:07:15.116287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.355 qpair failed and we were unable to recover it. 
00:26:24.355 [2024-11-26 21:07:15.116401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.355 [2024-11-26 21:07:15.116430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.355 qpair failed and we were unable to recover it. 00:26:24.355 [2024-11-26 21:07:15.116579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.355 [2024-11-26 21:07:15.116605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.355 qpair failed and we were unable to recover it. 00:26:24.355 [2024-11-26 21:07:15.116713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.355 [2024-11-26 21:07:15.116740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.355 qpair failed and we were unable to recover it. 00:26:24.355 [2024-11-26 21:07:15.116897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.355 [2024-11-26 21:07:15.116931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.355 qpair failed and we were unable to recover it. 00:26:24.355 [2024-11-26 21:07:15.117093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.355 [2024-11-26 21:07:15.117120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.355 qpair failed and we were unable to recover it. 
00:26:24.355 [2024-11-26 21:07:15.117248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.356 [2024-11-26 21:07:15.117290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.356 qpair failed and we were unable to recover it. 00:26:24.356 [2024-11-26 21:07:15.117479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.356 [2024-11-26 21:07:15.117505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.356 qpair failed and we were unable to recover it. 00:26:24.356 [2024-11-26 21:07:15.117663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.356 [2024-11-26 21:07:15.117702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.356 qpair failed and we were unable to recover it. 00:26:24.356 [2024-11-26 21:07:15.117826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.356 [2024-11-26 21:07:15.117855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.356 qpair failed and we were unable to recover it. 00:26:24.356 [2024-11-26 21:07:15.118002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.356 [2024-11-26 21:07:15.118031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.356 qpair failed and we were unable to recover it. 
00:26:24.356 [2024-11-26 21:07:15.118160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.356 [2024-11-26 21:07:15.118186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.356 qpair failed and we were unable to recover it.
00:26:24.356 [2024-11-26 21:07:15.118318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.356 [2024-11-26 21:07:15.118345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.356 qpair failed and we were unable to recover it.
00:26:24.356 [2024-11-26 21:07:15.118488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.356 [2024-11-26 21:07:15.118516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.356 qpair failed and we were unable to recover it.
00:26:24.356 [2024-11-26 21:07:15.118677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.356 [2024-11-26 21:07:15.118711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.356 qpair failed and we were unable to recover it.
00:26:24.356 [2024-11-26 21:07:15.118829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.356 [2024-11-26 21:07:15.118856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.356 qpair failed and we were unable to recover it.
00:26:24.356 [2024-11-26 21:07:15.119038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.356 [2024-11-26 21:07:15.119067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.356 qpair failed and we were unable to recover it.
00:26:24.356 [2024-11-26 21:07:15.119197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.356 [2024-11-26 21:07:15.119224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.356 qpair failed and we were unable to recover it.
00:26:24.356 [2024-11-26 21:07:15.119326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.356 [2024-11-26 21:07:15.119352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.356 qpair failed and we were unable to recover it.
00:26:24.356 [2024-11-26 21:07:15.119483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.356 [2024-11-26 21:07:15.119513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.356 qpair failed and we were unable to recover it.
00:26:24.356 [2024-11-26 21:07:15.119696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.356 [2024-11-26 21:07:15.119726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.356 qpair failed and we were unable to recover it.
00:26:24.356 [2024-11-26 21:07:15.119896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.356 [2024-11-26 21:07:15.119923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.356 qpair failed and we were unable to recover it.
00:26:24.356 [2024-11-26 21:07:15.120056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.356 [2024-11-26 21:07:15.120085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.356 qpair failed and we were unable to recover it.
00:26:24.356 [2024-11-26 21:07:15.120212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.356 [2024-11-26 21:07:15.120239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.356 qpair failed and we were unable to recover it.
00:26:24.356 [2024-11-26 21:07:15.120373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.356 [2024-11-26 21:07:15.120400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.356 qpair failed and we were unable to recover it.
00:26:24.356 [2024-11-26 21:07:15.120538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.356 [2024-11-26 21:07:15.120568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.356 qpair failed and we were unable to recover it.
00:26:24.356 [2024-11-26 21:07:15.120733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.356 [2024-11-26 21:07:15.120760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.356 qpair failed and we were unable to recover it.
00:26:24.356 [2024-11-26 21:07:15.120891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.356 [2024-11-26 21:07:15.120918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.356 qpair failed and we were unable to recover it.
00:26:24.356 [2024-11-26 21:07:15.121054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.356 [2024-11-26 21:07:15.121083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.356 qpair failed and we were unable to recover it.
00:26:24.356 [2024-11-26 21:07:15.121235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.356 [2024-11-26 21:07:15.121261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.356 qpair failed and we were unable to recover it.
00:26:24.356 [2024-11-26 21:07:15.121397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.356 [2024-11-26 21:07:15.121441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.356 qpair failed and we were unable to recover it.
00:26:24.356 [2024-11-26 21:07:15.121588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.356 [2024-11-26 21:07:15.121617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.356 qpair failed and we were unable to recover it.
00:26:24.356 [2024-11-26 21:07:15.121762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.356 [2024-11-26 21:07:15.121789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.356 qpair failed and we were unable to recover it.
00:26:24.356 [2024-11-26 21:07:15.121916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.356 [2024-11-26 21:07:15.121942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.356 qpair failed and we were unable to recover it.
00:26:24.356 [2024-11-26 21:07:15.122076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.356 [2024-11-26 21:07:15.122105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.356 qpair failed and we were unable to recover it.
00:26:24.356 [2024-11-26 21:07:15.122258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.356 [2024-11-26 21:07:15.122285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.356 qpair failed and we were unable to recover it.
00:26:24.356 [2024-11-26 21:07:15.122390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.356 [2024-11-26 21:07:15.122417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.356 qpair failed and we were unable to recover it.
00:26:24.356 [2024-11-26 21:07:15.122524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.356 [2024-11-26 21:07:15.122550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.356 qpair failed and we were unable to recover it.
00:26:24.356 [2024-11-26 21:07:15.122663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.356 [2024-11-26 21:07:15.122698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.356 qpair failed and we were unable to recover it.
00:26:24.356 [2024-11-26 21:07:15.122881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.356 [2024-11-26 21:07:15.122910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.356 qpair failed and we were unable to recover it.
00:26:24.356 [2024-11-26 21:07:15.123028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.356 [2024-11-26 21:07:15.123057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.356 qpair failed and we were unable to recover it.
00:26:24.356 [2024-11-26 21:07:15.123212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.356 [2024-11-26 21:07:15.123238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.356 qpair failed and we were unable to recover it.
00:26:24.356 [2024-11-26 21:07:15.123348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.356 [2024-11-26 21:07:15.123374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.356 qpair failed and we were unable to recover it.
00:26:24.356 [2024-11-26 21:07:15.123497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.356 [2024-11-26 21:07:15.123527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.356 qpair failed and we were unable to recover it.
00:26:24.356 [2024-11-26 21:07:15.123694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.356 [2024-11-26 21:07:15.123721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.356 qpair failed and we were unable to recover it.
00:26:24.357 [2024-11-26 21:07:15.123837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.357 [2024-11-26 21:07:15.123868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.357 qpair failed and we were unable to recover it.
00:26:24.357 [2024-11-26 21:07:15.123984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.357 [2024-11-26 21:07:15.124010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.357 qpair failed and we were unable to recover it.
00:26:24.357 [2024-11-26 21:07:15.124146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.357 [2024-11-26 21:07:15.124172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.357 qpair failed and we were unable to recover it.
00:26:24.357 [2024-11-26 21:07:15.124299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.357 [2024-11-26 21:07:15.124342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.357 qpair failed and we were unable to recover it.
00:26:24.357 [2024-11-26 21:07:15.124459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.357 [2024-11-26 21:07:15.124488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.357 qpair failed and we were unable to recover it.
00:26:24.357 [2024-11-26 21:07:15.124618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.357 [2024-11-26 21:07:15.124643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.357 qpair failed and we were unable to recover it.
00:26:24.357 [2024-11-26 21:07:15.124768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.357 [2024-11-26 21:07:15.124795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.357 qpair failed and we were unable to recover it.
00:26:24.357 [2024-11-26 21:07:15.124949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.357 [2024-11-26 21:07:15.124979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.357 qpair failed and we were unable to recover it.
00:26:24.357 [2024-11-26 21:07:15.125168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.357 [2024-11-26 21:07:15.125194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.357 qpair failed and we were unable to recover it.
00:26:24.357 [2024-11-26 21:07:15.125309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.357 [2024-11-26 21:07:15.125335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.357 qpair failed and we were unable to recover it.
00:26:24.357 [2024-11-26 21:07:15.125465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.357 [2024-11-26 21:07:15.125494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.357 qpair failed and we were unable to recover it.
00:26:24.357 [2024-11-26 21:07:15.125630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.357 [2024-11-26 21:07:15.125657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.357 qpair failed and we were unable to recover it.
00:26:24.357 [2024-11-26 21:07:15.125809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.357 [2024-11-26 21:07:15.125854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.357 qpair failed and we were unable to recover it.
00:26:24.357 [2024-11-26 21:07:15.125979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.357 [2024-11-26 21:07:15.126008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.357 qpair failed and we were unable to recover it.
00:26:24.357 [2024-11-26 21:07:15.126145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.357 [2024-11-26 21:07:15.126172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.357 qpair failed and we were unable to recover it.
00:26:24.357 [2024-11-26 21:07:15.126304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.357 [2024-11-26 21:07:15.126330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.357 qpair failed and we were unable to recover it.
00:26:24.357 [2024-11-26 21:07:15.126513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.357 [2024-11-26 21:07:15.126542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.357 qpair failed and we were unable to recover it.
00:26:24.357 [2024-11-26 21:07:15.126671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.357 [2024-11-26 21:07:15.126707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.357 qpair failed and we were unable to recover it.
00:26:24.357 [2024-11-26 21:07:15.126820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.357 [2024-11-26 21:07:15.126846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.357 qpair failed and we were unable to recover it.
00:26:24.357 [2024-11-26 21:07:15.126997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.357 [2024-11-26 21:07:15.127026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.357 qpair failed and we were unable to recover it.
00:26:24.357 [2024-11-26 21:07:15.127185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.357 [2024-11-26 21:07:15.127212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.357 qpair failed and we were unable to recover it.
00:26:24.357 [2024-11-26 21:07:15.127319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.357 [2024-11-26 21:07:15.127345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.357 qpair failed and we were unable to recover it.
00:26:24.357 [2024-11-26 21:07:15.127507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.357 [2024-11-26 21:07:15.127537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.357 qpair failed and we were unable to recover it.
00:26:24.357 [2024-11-26 21:07:15.127692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.357 [2024-11-26 21:07:15.127720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.357 qpair failed and we were unable to recover it.
00:26:24.357 [2024-11-26 21:07:15.127828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.357 [2024-11-26 21:07:15.127855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.357 qpair failed and we were unable to recover it.
00:26:24.357 [2024-11-26 21:07:15.128021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.357 [2024-11-26 21:07:15.128050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.357 qpair failed and we were unable to recover it.
00:26:24.357 [2024-11-26 21:07:15.128181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.357 [2024-11-26 21:07:15.128207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.357 qpair failed and we were unable to recover it.
00:26:24.357 [2024-11-26 21:07:15.128367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.357 [2024-11-26 21:07:15.128394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.357 qpair failed and we were unable to recover it.
00:26:24.357 [2024-11-26 21:07:15.128531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.357 [2024-11-26 21:07:15.128560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.357 qpair failed and we were unable to recover it.
00:26:24.357 [2024-11-26 21:07:15.128723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.357 [2024-11-26 21:07:15.128750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.357 qpair failed and we were unable to recover it.
00:26:24.357 [2024-11-26 21:07:15.128862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.357 [2024-11-26 21:07:15.128889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.357 qpair failed and we were unable to recover it.
00:26:24.357 [2024-11-26 21:07:15.129053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.357 [2024-11-26 21:07:15.129084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.357 qpair failed and we were unable to recover it.
00:26:24.357 [2024-11-26 21:07:15.129212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.357 [2024-11-26 21:07:15.129238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.357 qpair failed and we were unable to recover it.
00:26:24.357 [2024-11-26 21:07:15.129375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.357 [2024-11-26 21:07:15.129401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.357 qpair failed and we were unable to recover it.
00:26:24.357 [2024-11-26 21:07:15.129557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.357 [2024-11-26 21:07:15.129586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.357 qpair failed and we were unable to recover it.
00:26:24.357 [2024-11-26 21:07:15.129768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.357 [2024-11-26 21:07:15.129795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.357 qpair failed and we were unable to recover it.
00:26:24.357 [2024-11-26 21:07:15.129955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.357 [2024-11-26 21:07:15.129985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.357 qpair failed and we were unable to recover it.
00:26:24.357 [2024-11-26 21:07:15.130138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.357 [2024-11-26 21:07:15.130164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.357 qpair failed and we were unable to recover it.
00:26:24.357 [2024-11-26 21:07:15.130322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.358 [2024-11-26 21:07:15.130349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.358 qpair failed and we were unable to recover it.
00:26:24.358 [2024-11-26 21:07:15.130446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.358 [2024-11-26 21:07:15.130473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.358 qpair failed and we were unable to recover it.
00:26:24.358 [2024-11-26 21:07:15.130612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.358 [2024-11-26 21:07:15.130637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.358 qpair failed and we were unable to recover it.
00:26:24.358 [2024-11-26 21:07:15.130778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.358 [2024-11-26 21:07:15.130805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.358 qpair failed and we were unable to recover it.
00:26:24.358 [2024-11-26 21:07:15.130936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.358 [2024-11-26 21:07:15.130980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.358 qpair failed and we were unable to recover it.
00:26:24.358 [2024-11-26 21:07:15.131116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.358 [2024-11-26 21:07:15.131144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.358 qpair failed and we were unable to recover it.
00:26:24.358 [2024-11-26 21:07:15.131253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.358 [2024-11-26 21:07:15.131280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.358 qpair failed and we were unable to recover it.
00:26:24.358 [2024-11-26 21:07:15.131414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.358 [2024-11-26 21:07:15.131440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.358 qpair failed and we were unable to recover it.
00:26:24.358 [2024-11-26 21:07:15.131607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.358 [2024-11-26 21:07:15.131637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.358 qpair failed and we were unable to recover it.
00:26:24.358 [2024-11-26 21:07:15.131788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.358 [2024-11-26 21:07:15.131815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.358 qpair failed and we were unable to recover it.
00:26:24.358 [2024-11-26 21:07:15.131929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.358 [2024-11-26 21:07:15.131955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.358 qpair failed and we were unable to recover it.
00:26:24.358 [2024-11-26 21:07:15.132123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.358 [2024-11-26 21:07:15.132150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.358 qpair failed and we were unable to recover it.
00:26:24.358 [2024-11-26 21:07:15.132288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.358 [2024-11-26 21:07:15.132314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.358 qpair failed and we were unable to recover it.
00:26:24.358 [2024-11-26 21:07:15.132425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.358 [2024-11-26 21:07:15.132452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.358 qpair failed and we were unable to recover it.
00:26:24.358 [2024-11-26 21:07:15.132581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.358 [2024-11-26 21:07:15.132611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.358 qpair failed and we were unable to recover it.
00:26:24.358 [2024-11-26 21:07:15.132758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.358 [2024-11-26 21:07:15.132786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.358 qpair failed and we were unable to recover it.
00:26:24.358 [2024-11-26 21:07:15.132903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.358 [2024-11-26 21:07:15.132930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.358 qpair failed and we were unable to recover it.
00:26:24.358 [2024-11-26 21:07:15.133117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.358 [2024-11-26 21:07:15.133147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.358 qpair failed and we were unable to recover it.
00:26:24.358 [2024-11-26 21:07:15.133288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.358 [2024-11-26 21:07:15.133314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.358 qpair failed and we were unable to recover it.
00:26:24.358 [2024-11-26 21:07:15.133460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.358 [2024-11-26 21:07:15.133502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.358 qpair failed and we were unable to recover it.
00:26:24.358 [2024-11-26 21:07:15.133621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.358 [2024-11-26 21:07:15.133650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.358 qpair failed and we were unable to recover it.
00:26:24.358 [2024-11-26 21:07:15.133815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.358 [2024-11-26 21:07:15.133841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.358 qpair failed and we were unable to recover it.
00:26:24.358 [2024-11-26 21:07:15.133950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.358 [2024-11-26 21:07:15.133991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.358 qpair failed and we were unable to recover it.
00:26:24.358 [2024-11-26 21:07:15.134166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.358 [2024-11-26 21:07:15.134195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.358 qpair failed and we were unable to recover it.
00:26:24.358 [2024-11-26 21:07:15.134327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.358 [2024-11-26 21:07:15.134353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.358 qpair failed and we were unable to recover it.
00:26:24.358 [2024-11-26 21:07:15.134469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.358 [2024-11-26 21:07:15.134495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.358 qpair failed and we were unable to recover it.
00:26:24.358 [2024-11-26 21:07:15.134604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.358 [2024-11-26 21:07:15.134631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.358 qpair failed and we were unable to recover it.
00:26:24.358 [2024-11-26 21:07:15.134731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.358 [2024-11-26 21:07:15.134758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.358 qpair failed and we were unable to recover it.
00:26:24.358 [2024-11-26 21:07:15.134894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.358 [2024-11-26 21:07:15.134921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.358 qpair failed and we were unable to recover it.
00:26:24.358 [2024-11-26 21:07:15.135056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.358 [2024-11-26 21:07:15.135085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.358 qpair failed and we were unable to recover it.
00:26:24.358 [2024-11-26 21:07:15.135234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.358 [2024-11-26 21:07:15.135264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.358 qpair failed and we were unable to recover it.
00:26:24.358 [2024-11-26 21:07:15.135378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.358 [2024-11-26 21:07:15.135405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.358 qpair failed and we were unable to recover it.
00:26:24.358 [2024-11-26 21:07:15.135541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.358 [2024-11-26 21:07:15.135567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.358 qpair failed and we were unable to recover it.
00:26:24.358 [2024-11-26 21:07:15.135675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.358 [2024-11-26 21:07:15.135723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.358 qpair failed and we were unable to recover it.
00:26:24.358 [2024-11-26 21:07:15.135837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.358 [2024-11-26 21:07:15.135863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.358 qpair failed and we were unable to recover it.
00:26:24.358 [2024-11-26 21:07:15.135988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.358 [2024-11-26 21:07:15.136018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.358 qpair failed and we were unable to recover it.
00:26:24.358 [2024-11-26 21:07:15.136184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.358 [2024-11-26 21:07:15.136210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.358 qpair failed and we were unable to recover it.
00:26:24.358 [2024-11-26 21:07:15.136334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.358 [2024-11-26 21:07:15.136360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.358 qpair failed and we were unable to recover it.
00:26:24.358 [2024-11-26 21:07:15.136541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.358 [2024-11-26 21:07:15.136568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.358 qpair failed and we were unable to recover it.
00:26:24.359 [2024-11-26 21:07:15.136728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.359 [2024-11-26 21:07:15.136755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.359 qpair failed and we were unable to recover it.
00:26:24.359 [2024-11-26 21:07:15.136885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.359 [2024-11-26 21:07:15.136915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.359 qpair failed and we were unable to recover it.
00:26:24.359 [2024-11-26 21:07:15.137092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.359 [2024-11-26 21:07:15.137122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.359 qpair failed and we were unable to recover it.
00:26:24.359 [2024-11-26 21:07:15.137248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.359 [2024-11-26 21:07:15.137274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.359 qpair failed and we were unable to recover it.
00:26:24.359 [2024-11-26 21:07:15.137389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.359 [2024-11-26 21:07:15.137415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.359 qpair failed and we were unable to recover it. 00:26:24.359 [2024-11-26 21:07:15.137573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.359 [2024-11-26 21:07:15.137603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.359 qpair failed and we were unable to recover it. 00:26:24.359 [2024-11-26 21:07:15.137766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.359 [2024-11-26 21:07:15.137792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.359 qpair failed and we were unable to recover it. 00:26:24.359 [2024-11-26 21:07:15.137901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.359 [2024-11-26 21:07:15.137928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.359 qpair failed and we were unable to recover it. 00:26:24.359 [2024-11-26 21:07:15.138082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.359 [2024-11-26 21:07:15.138111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.359 qpair failed and we were unable to recover it. 
00:26:24.359 [2024-11-26 21:07:15.138255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.359 [2024-11-26 21:07:15.138281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.359 qpair failed and we were unable to recover it. 00:26:24.359 [2024-11-26 21:07:15.138441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.359 [2024-11-26 21:07:15.138468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.359 qpair failed and we were unable to recover it. 00:26:24.359 [2024-11-26 21:07:15.138596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.359 [2024-11-26 21:07:15.138625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.359 qpair failed and we were unable to recover it. 00:26:24.359 [2024-11-26 21:07:15.138759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.359 [2024-11-26 21:07:15.138786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.359 qpair failed and we were unable to recover it. 00:26:24.359 [2024-11-26 21:07:15.138924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.359 [2024-11-26 21:07:15.138951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.359 qpair failed and we were unable to recover it. 
00:26:24.359 [2024-11-26 21:07:15.139086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.359 [2024-11-26 21:07:15.139116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.359 qpair failed and we were unable to recover it. 00:26:24.359 [2024-11-26 21:07:15.139270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.359 [2024-11-26 21:07:15.139296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.359 qpair failed and we were unable to recover it. 00:26:24.359 [2024-11-26 21:07:15.139426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.359 [2024-11-26 21:07:15.139452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.359 qpair failed and we were unable to recover it. 00:26:24.359 [2024-11-26 21:07:15.139588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.359 [2024-11-26 21:07:15.139617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.359 qpair failed and we were unable to recover it. 00:26:24.359 [2024-11-26 21:07:15.139771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.359 [2024-11-26 21:07:15.139798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.359 qpair failed and we were unable to recover it. 
00:26:24.359 [2024-11-26 21:07:15.139942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.359 [2024-11-26 21:07:15.139969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.359 qpair failed and we were unable to recover it. 00:26:24.359 [2024-11-26 21:07:15.140138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.359 [2024-11-26 21:07:15.140167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.359 qpair failed and we were unable to recover it. 00:26:24.359 [2024-11-26 21:07:15.140326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.359 [2024-11-26 21:07:15.140352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.359 qpair failed and we were unable to recover it. 00:26:24.359 [2024-11-26 21:07:15.140463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.359 [2024-11-26 21:07:15.140506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.359 qpair failed and we were unable to recover it. 00:26:24.359 [2024-11-26 21:07:15.140681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.359 [2024-11-26 21:07:15.140734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.359 qpair failed and we were unable to recover it. 
00:26:24.359 [2024-11-26 21:07:15.140843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.359 [2024-11-26 21:07:15.140869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.359 qpair failed and we were unable to recover it. 00:26:24.359 [2024-11-26 21:07:15.141002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.359 [2024-11-26 21:07:15.141044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.359 qpair failed and we were unable to recover it. 00:26:24.359 [2024-11-26 21:07:15.141199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.359 [2024-11-26 21:07:15.141225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.359 qpair failed and we were unable to recover it. 00:26:24.359 [2024-11-26 21:07:15.141364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.359 [2024-11-26 21:07:15.141391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.359 qpair failed and we were unable to recover it. 00:26:24.359 [2024-11-26 21:07:15.141498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.359 [2024-11-26 21:07:15.141524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.359 qpair failed and we were unable to recover it. 
00:26:24.359 [2024-11-26 21:07:15.141721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.359 [2024-11-26 21:07:15.141751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.359 qpair failed and we were unable to recover it. 00:26:24.359 [2024-11-26 21:07:15.141904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.359 [2024-11-26 21:07:15.141930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.359 qpair failed and we were unable to recover it. 00:26:24.359 [2024-11-26 21:07:15.142047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.359 [2024-11-26 21:07:15.142073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.359 qpair failed and we were unable to recover it. 00:26:24.359 [2024-11-26 21:07:15.142242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.359 [2024-11-26 21:07:15.142275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.359 qpair failed and we were unable to recover it. 00:26:24.359 [2024-11-26 21:07:15.142412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.359 [2024-11-26 21:07:15.142438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.359 qpair failed and we were unable to recover it. 
00:26:24.359 [2024-11-26 21:07:15.142570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.360 [2024-11-26 21:07:15.142597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.360 qpair failed and we were unable to recover it. 00:26:24.360 [2024-11-26 21:07:15.142784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.360 [2024-11-26 21:07:15.142813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.360 qpair failed and we were unable to recover it. 00:26:24.360 [2024-11-26 21:07:15.142935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.360 [2024-11-26 21:07:15.142962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.360 qpair failed and we were unable to recover it. 00:26:24.360 [2024-11-26 21:07:15.143066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.360 [2024-11-26 21:07:15.143093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.360 qpair failed and we were unable to recover it. 00:26:24.360 [2024-11-26 21:07:15.143254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.360 [2024-11-26 21:07:15.143280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.360 qpair failed and we were unable to recover it. 
00:26:24.360 [2024-11-26 21:07:15.143432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.360 [2024-11-26 21:07:15.143458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.360 qpair failed and we were unable to recover it. 00:26:24.360 [2024-11-26 21:07:15.143595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.360 [2024-11-26 21:07:15.143623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.360 qpair failed and we were unable to recover it. 00:26:24.360 [2024-11-26 21:07:15.143784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.360 [2024-11-26 21:07:15.143814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.360 qpair failed and we were unable to recover it. 00:26:24.360 [2024-11-26 21:07:15.143969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.360 [2024-11-26 21:07:15.143996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.360 qpair failed and we were unable to recover it. 00:26:24.360 [2024-11-26 21:07:15.144141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.360 [2024-11-26 21:07:15.144169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.360 qpair failed and we were unable to recover it. 
00:26:24.360 [2024-11-26 21:07:15.144287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.360 [2024-11-26 21:07:15.144316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.360 qpair failed and we were unable to recover it. 00:26:24.360 [2024-11-26 21:07:15.144448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.360 [2024-11-26 21:07:15.144475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.360 qpair failed and we were unable to recover it. 00:26:24.360 [2024-11-26 21:07:15.144612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.360 [2024-11-26 21:07:15.144639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.360 qpair failed and we were unable to recover it. 00:26:24.360 [2024-11-26 21:07:15.144790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.360 [2024-11-26 21:07:15.144817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.360 qpair failed and we were unable to recover it. 00:26:24.360 [2024-11-26 21:07:15.144963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.360 [2024-11-26 21:07:15.144990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.360 qpair failed and we were unable to recover it. 
00:26:24.360 [2024-11-26 21:07:15.145140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.360 [2024-11-26 21:07:15.145170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.360 qpair failed and we were unable to recover it. 00:26:24.360 [2024-11-26 21:07:15.145289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.360 [2024-11-26 21:07:15.145319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.360 qpair failed and we were unable to recover it. 00:26:24.360 [2024-11-26 21:07:15.145474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.360 [2024-11-26 21:07:15.145501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.360 qpair failed and we were unable to recover it. 00:26:24.360 [2024-11-26 21:07:15.145623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.360 [2024-11-26 21:07:15.145649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.360 qpair failed and we were unable to recover it. 00:26:24.360 [2024-11-26 21:07:15.145809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.360 [2024-11-26 21:07:15.145836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.360 qpair failed and we were unable to recover it. 
00:26:24.360 [2024-11-26 21:07:15.145970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.360 [2024-11-26 21:07:15.145997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.360 qpair failed and we were unable to recover it. 00:26:24.360 [2024-11-26 21:07:15.146116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.360 [2024-11-26 21:07:15.146158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.360 qpair failed and we were unable to recover it. 00:26:24.360 [2024-11-26 21:07:15.146303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.360 [2024-11-26 21:07:15.146332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.360 qpair failed and we were unable to recover it. 00:26:24.360 [2024-11-26 21:07:15.146460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.360 [2024-11-26 21:07:15.146505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.360 qpair failed and we were unable to recover it. 00:26:24.360 [2024-11-26 21:07:15.146625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.360 [2024-11-26 21:07:15.146654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.360 qpair failed and we were unable to recover it. 
00:26:24.360 [2024-11-26 21:07:15.146824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.360 [2024-11-26 21:07:15.146856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.360 qpair failed and we were unable to recover it. 00:26:24.360 [2024-11-26 21:07:15.146971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.360 [2024-11-26 21:07:15.146997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.360 qpair failed and we were unable to recover it. 00:26:24.360 [2024-11-26 21:07:15.147134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.360 [2024-11-26 21:07:15.147160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.360 qpair failed and we were unable to recover it. 00:26:24.360 [2024-11-26 21:07:15.147283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.360 [2024-11-26 21:07:15.147312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.360 qpair failed and we were unable to recover it. 00:26:24.360 [2024-11-26 21:07:15.147468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.360 [2024-11-26 21:07:15.147495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.360 qpair failed and we were unable to recover it. 
00:26:24.360 [2024-11-26 21:07:15.147633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.360 [2024-11-26 21:07:15.147677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.360 qpair failed and we were unable to recover it. 00:26:24.360 [2024-11-26 21:07:15.147837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.360 [2024-11-26 21:07:15.147866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.360 qpair failed and we were unable to recover it. 00:26:24.360 [2024-11-26 21:07:15.147994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.360 [2024-11-26 21:07:15.148020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.360 qpair failed and we were unable to recover it. 00:26:24.360 [2024-11-26 21:07:15.148134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.360 [2024-11-26 21:07:15.148160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.360 qpair failed and we were unable to recover it. 00:26:24.360 [2024-11-26 21:07:15.148290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.360 [2024-11-26 21:07:15.148319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.360 qpair failed and we were unable to recover it. 
00:26:24.360 [2024-11-26 21:07:15.148450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.360 [2024-11-26 21:07:15.148477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.360 qpair failed and we were unable to recover it. 00:26:24.360 [2024-11-26 21:07:15.148621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.360 [2024-11-26 21:07:15.148647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.360 qpair failed and we were unable to recover it. 00:26:24.360 [2024-11-26 21:07:15.148759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.360 [2024-11-26 21:07:15.148786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.360 qpair failed and we were unable to recover it. 00:26:24.360 [2024-11-26 21:07:15.148918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.360 [2024-11-26 21:07:15.148944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.361 qpair failed and we were unable to recover it. 00:26:24.361 [2024-11-26 21:07:15.149065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.361 [2024-11-26 21:07:15.149109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.361 qpair failed and we were unable to recover it. 
00:26:24.361 [2024-11-26 21:07:15.149256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.361 [2024-11-26 21:07:15.149286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.361 qpair failed and we were unable to recover it. 00:26:24.361 [2024-11-26 21:07:15.149421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.361 [2024-11-26 21:07:15.149447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.361 qpair failed and we were unable to recover it. 00:26:24.361 [2024-11-26 21:07:15.149570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.361 [2024-11-26 21:07:15.149597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.361 qpair failed and we were unable to recover it. 00:26:24.361 [2024-11-26 21:07:15.149739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.361 [2024-11-26 21:07:15.149769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.361 qpair failed and we were unable to recover it. 00:26:24.361 [2024-11-26 21:07:15.149935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.361 [2024-11-26 21:07:15.149962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.361 qpair failed and we were unable to recover it. 
00:26:24.361 [2024-11-26 21:07:15.150070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.361 [2024-11-26 21:07:15.150097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.361 qpair failed and we were unable to recover it.
[... the same connect()/sock-connection-error/qpair-failure triplet repeats dozens of times for tqpair=0x637fa0 between 21:07:15.150 and 21:07:15.164 ...]
00:26:24.363 [2024-11-26 21:07:15.164796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.363 [2024-11-26 21:07:15.164841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.363 qpair failed and we were unable to recover it.
[... the same triplet then repeats for tqpair=0x7feef0000b90 through 21:07:15.169 ...]
00:26:24.364 [2024-11-26 21:07:15.169436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.364 [2024-11-26 21:07:15.169462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.364 qpair failed and we were unable to recover it. 00:26:24.364 [2024-11-26 21:07:15.169603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.364 [2024-11-26 21:07:15.169630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.364 qpair failed and we were unable to recover it. 00:26:24.364 [2024-11-26 21:07:15.169778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.364 [2024-11-26 21:07:15.169823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.364 qpair failed and we were unable to recover it. 00:26:24.364 [2024-11-26 21:07:15.169985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.364 [2024-11-26 21:07:15.170014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.364 qpair failed and we were unable to recover it. 00:26:24.364 [2024-11-26 21:07:15.170167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.364 [2024-11-26 21:07:15.170212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.364 qpair failed and we were unable to recover it. 
00:26:24.364 [2024-11-26 21:07:15.170330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.364 [2024-11-26 21:07:15.170357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.364 qpair failed and we were unable to recover it. 00:26:24.364 [2024-11-26 21:07:15.170469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.364 [2024-11-26 21:07:15.170497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.364 qpair failed and we were unable to recover it. 00:26:24.364 [2024-11-26 21:07:15.170652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.364 [2024-11-26 21:07:15.170679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.364 qpair failed and we were unable to recover it. 00:26:24.364 [2024-11-26 21:07:15.170869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.364 [2024-11-26 21:07:15.170914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.364 qpair failed and we were unable to recover it. 00:26:24.364 [2024-11-26 21:07:15.171094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.364 [2024-11-26 21:07:15.171139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.364 qpair failed and we were unable to recover it. 
00:26:24.364 [2024-11-26 21:07:15.171259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.364 [2024-11-26 21:07:15.171304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.364 qpair failed and we were unable to recover it. 00:26:24.364 [2024-11-26 21:07:15.171438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.364 [2024-11-26 21:07:15.171465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.364 qpair failed and we were unable to recover it. 00:26:24.364 [2024-11-26 21:07:15.171562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.364 [2024-11-26 21:07:15.171589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.364 qpair failed and we were unable to recover it. 00:26:24.364 [2024-11-26 21:07:15.171696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.364 [2024-11-26 21:07:15.171723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.364 qpair failed and we were unable to recover it. 00:26:24.364 [2024-11-26 21:07:15.171883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.364 [2024-11-26 21:07:15.171932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.364 qpair failed and we were unable to recover it. 
00:26:24.364 [2024-11-26 21:07:15.172125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.364 [2024-11-26 21:07:15.172154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.364 qpair failed and we were unable to recover it. 00:26:24.364 [2024-11-26 21:07:15.172335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.364 [2024-11-26 21:07:15.172379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.364 qpair failed and we were unable to recover it. 00:26:24.364 [2024-11-26 21:07:15.172514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.364 [2024-11-26 21:07:15.172540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.364 qpair failed and we were unable to recover it. 00:26:24.364 [2024-11-26 21:07:15.172641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.364 [2024-11-26 21:07:15.172667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.364 qpair failed and we were unable to recover it. 00:26:24.364 [2024-11-26 21:07:15.172823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.364 [2024-11-26 21:07:15.172867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.364 qpair failed and we were unable to recover it. 
00:26:24.364 [2024-11-26 21:07:15.173018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.364 [2024-11-26 21:07:15.173063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.364 qpair failed and we were unable to recover it. 00:26:24.364 [2024-11-26 21:07:15.173201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.364 [2024-11-26 21:07:15.173228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.364 qpair failed and we were unable to recover it. 00:26:24.364 [2024-11-26 21:07:15.173388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.364 [2024-11-26 21:07:15.173414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.364 qpair failed and we were unable to recover it. 00:26:24.364 [2024-11-26 21:07:15.173558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.364 [2024-11-26 21:07:15.173585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.364 qpair failed and we were unable to recover it. 00:26:24.364 [2024-11-26 21:07:15.173715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.364 [2024-11-26 21:07:15.173743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.364 qpair failed and we were unable to recover it. 
00:26:24.364 [2024-11-26 21:07:15.173903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.364 [2024-11-26 21:07:15.173951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.364 qpair failed and we were unable to recover it. 00:26:24.364 [2024-11-26 21:07:15.174107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.364 [2024-11-26 21:07:15.174151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.364 qpair failed and we were unable to recover it. 00:26:24.364 [2024-11-26 21:07:15.174261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.364 [2024-11-26 21:07:15.174289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.365 qpair failed and we were unable to recover it. 00:26:24.365 [2024-11-26 21:07:15.174412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.365 [2024-11-26 21:07:15.174439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.365 qpair failed and we were unable to recover it. 00:26:24.365 [2024-11-26 21:07:15.174576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.365 [2024-11-26 21:07:15.174603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.365 qpair failed and we were unable to recover it. 
00:26:24.365 [2024-11-26 21:07:15.174745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.365 [2024-11-26 21:07:15.174792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.365 qpair failed and we were unable to recover it. 00:26:24.365 [2024-11-26 21:07:15.174917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.365 [2024-11-26 21:07:15.174962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.365 qpair failed and we were unable to recover it. 00:26:24.365 [2024-11-26 21:07:15.175071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.365 [2024-11-26 21:07:15.175098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.365 qpair failed and we were unable to recover it. 00:26:24.365 [2024-11-26 21:07:15.175211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.365 [2024-11-26 21:07:15.175239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.365 qpair failed and we were unable to recover it. 00:26:24.365 [2024-11-26 21:07:15.175343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.365 [2024-11-26 21:07:15.175369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.365 qpair failed and we were unable to recover it. 
00:26:24.365 [2024-11-26 21:07:15.175526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.365 [2024-11-26 21:07:15.175553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.365 qpair failed and we were unable to recover it. 00:26:24.365 [2024-11-26 21:07:15.175658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.365 [2024-11-26 21:07:15.175695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.365 qpair failed and we were unable to recover it. 00:26:24.365 [2024-11-26 21:07:15.175830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.365 [2024-11-26 21:07:15.175857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.365 qpair failed and we were unable to recover it. 00:26:24.365 [2024-11-26 21:07:15.176014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.365 [2024-11-26 21:07:15.176058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.365 qpair failed and we were unable to recover it. 00:26:24.365 [2024-11-26 21:07:15.176190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.365 [2024-11-26 21:07:15.176216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.365 qpair failed and we were unable to recover it. 
00:26:24.365 [2024-11-26 21:07:15.176352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.365 [2024-11-26 21:07:15.176379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.365 qpair failed and we were unable to recover it. 00:26:24.365 [2024-11-26 21:07:15.176488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.365 [2024-11-26 21:07:15.176516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.365 qpair failed and we were unable to recover it. 00:26:24.365 [2024-11-26 21:07:15.176621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.365 [2024-11-26 21:07:15.176647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.365 qpair failed and we were unable to recover it. 00:26:24.365 [2024-11-26 21:07:15.176835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.365 [2024-11-26 21:07:15.176879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.365 qpair failed and we were unable to recover it. 00:26:24.365 [2024-11-26 21:07:15.177034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.365 [2024-11-26 21:07:15.177066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.365 qpair failed and we were unable to recover it. 
00:26:24.365 [2024-11-26 21:07:15.177193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.365 [2024-11-26 21:07:15.177222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.365 qpair failed and we were unable to recover it. 00:26:24.365 [2024-11-26 21:07:15.177376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.365 [2024-11-26 21:07:15.177403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.365 qpair failed and we were unable to recover it. 00:26:24.365 [2024-11-26 21:07:15.177540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.365 [2024-11-26 21:07:15.177566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.365 qpair failed and we were unable to recover it. 00:26:24.365 [2024-11-26 21:07:15.177666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.365 [2024-11-26 21:07:15.177705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.365 qpair failed and we were unable to recover it. 00:26:24.365 [2024-11-26 21:07:15.177876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.365 [2024-11-26 21:07:15.177921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.365 qpair failed and we were unable to recover it. 
00:26:24.365 [2024-11-26 21:07:15.178094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.365 [2024-11-26 21:07:15.178139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.365 qpair failed and we were unable to recover it. 00:26:24.365 [2024-11-26 21:07:15.178263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.365 [2024-11-26 21:07:15.178307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.365 qpair failed and we were unable to recover it. 00:26:24.365 [2024-11-26 21:07:15.178444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.365 [2024-11-26 21:07:15.178471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.365 qpair failed and we were unable to recover it. 00:26:24.365 [2024-11-26 21:07:15.178619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.365 [2024-11-26 21:07:15.178646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.365 qpair failed and we were unable to recover it. 00:26:24.365 [2024-11-26 21:07:15.178787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.365 [2024-11-26 21:07:15.178819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.365 qpair failed and we were unable to recover it. 
00:26:24.365 [2024-11-26 21:07:15.178926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.365 [2024-11-26 21:07:15.178953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.365 qpair failed and we were unable to recover it. 00:26:24.365 [2024-11-26 21:07:15.179067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.365 [2024-11-26 21:07:15.179094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.365 qpair failed and we were unable to recover it. 00:26:24.365 [2024-11-26 21:07:15.179230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.365 [2024-11-26 21:07:15.179256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.365 qpair failed and we were unable to recover it. 00:26:24.365 [2024-11-26 21:07:15.179365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.365 [2024-11-26 21:07:15.179393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.365 qpair failed and we were unable to recover it. 00:26:24.365 [2024-11-26 21:07:15.179500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.365 [2024-11-26 21:07:15.179527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.365 qpair failed and we were unable to recover it. 
00:26:24.365 [2024-11-26 21:07:15.179638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.365 [2024-11-26 21:07:15.179664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.365 qpair failed and we were unable to recover it. 00:26:24.365 [2024-11-26 21:07:15.179810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.365 [2024-11-26 21:07:15.179837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.365 qpair failed and we were unable to recover it. 00:26:24.365 [2024-11-26 21:07:15.179968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.365 [2024-11-26 21:07:15.179995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.365 qpair failed and we were unable to recover it. 00:26:24.365 [2024-11-26 21:07:15.180126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.365 [2024-11-26 21:07:15.180171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.365 qpair failed and we were unable to recover it. 00:26:24.365 [2024-11-26 21:07:15.180324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.365 [2024-11-26 21:07:15.180351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.365 qpair failed and we were unable to recover it. 
00:26:24.365 [2024-11-26 21:07:15.180467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.365 [2024-11-26 21:07:15.180494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.365 qpair failed and we were unable to recover it. 00:26:24.365 [2024-11-26 21:07:15.180631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.365 [2024-11-26 21:07:15.180658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.365 qpair failed and we were unable to recover it. 00:26:24.365 [2024-11-26 21:07:15.180771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.365 [2024-11-26 21:07:15.180798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.365 qpair failed and we were unable to recover it. 00:26:24.365 [2024-11-26 21:07:15.180949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.365 [2024-11-26 21:07:15.180976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.365 qpair failed and we were unable to recover it. 00:26:24.366 [2024-11-26 21:07:15.181090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.366 [2024-11-26 21:07:15.181117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.366 qpair failed and we were unable to recover it. 
00:26:24.366 [2024-11-26 21:07:15.181227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.366 [2024-11-26 21:07:15.181254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.366 qpair failed and we were unable to recover it. 00:26:24.366 [2024-11-26 21:07:15.181403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.366 [2024-11-26 21:07:15.181429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.366 qpair failed and we were unable to recover it. 00:26:24.366 [2024-11-26 21:07:15.181569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.366 [2024-11-26 21:07:15.181595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.366 qpair failed and we were unable to recover it. 00:26:24.366 [2024-11-26 21:07:15.181757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.366 [2024-11-26 21:07:15.181802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.366 qpair failed and we were unable to recover it. 00:26:24.366 [2024-11-26 21:07:15.181928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.366 [2024-11-26 21:07:15.181973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.366 qpair failed and we were unable to recover it. 
00:26:24.366 [2024-11-26 21:07:15.182137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.366 [2024-11-26 21:07:15.182164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.366 qpair failed and we were unable to recover it. 00:26:24.366 [2024-11-26 21:07:15.182270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.366 [2024-11-26 21:07:15.182297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.366 qpair failed and we were unable to recover it. 00:26:24.366 [2024-11-26 21:07:15.182430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.366 [2024-11-26 21:07:15.182457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.366 qpair failed and we were unable to recover it. 00:26:24.366 [2024-11-26 21:07:15.182574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.366 [2024-11-26 21:07:15.182601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.366 qpair failed and we were unable to recover it. 00:26:24.366 [2024-11-26 21:07:15.182753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.366 [2024-11-26 21:07:15.182798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.366 qpair failed and we were unable to recover it. 
00:26:24.366 [2024-11-26 21:07:15.182935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.366 [2024-11-26 21:07:15.182962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.366 qpair failed and we were unable to recover it. 00:26:24.366 [2024-11-26 21:07:15.183142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.366 [2024-11-26 21:07:15.183181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.366 qpair failed and we were unable to recover it. 00:26:24.366 [2024-11-26 21:07:15.183301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.366 [2024-11-26 21:07:15.183330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.366 qpair failed and we were unable to recover it. 00:26:24.366 [2024-11-26 21:07:15.183457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.366 [2024-11-26 21:07:15.183485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.366 qpair failed and we were unable to recover it. 00:26:24.366 [2024-11-26 21:07:15.183593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.366 [2024-11-26 21:07:15.183620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.366 qpair failed and we were unable to recover it. 
00:26:24.366 [2024-11-26 21:07:15.183750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.366 [2024-11-26 21:07:15.183782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.366 qpair failed and we were unable to recover it. 00:26:24.366 [2024-11-26 21:07:15.183907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.366 [2024-11-26 21:07:15.183937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.366 qpair failed and we were unable to recover it. 00:26:24.366 [2024-11-26 21:07:15.184103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.366 [2024-11-26 21:07:15.184130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.366 qpair failed and we were unable to recover it. 00:26:24.366 [2024-11-26 21:07:15.184323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.366 [2024-11-26 21:07:15.184367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.366 qpair failed and we were unable to recover it. 00:26:24.366 [2024-11-26 21:07:15.184498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.366 [2024-11-26 21:07:15.184529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.366 qpair failed and we were unable to recover it. 
00:26:24.366 [2024-11-26 21:07:15.184711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.366 [2024-11-26 21:07:15.184739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.366 qpair failed and we were unable to recover it. 00:26:24.366 [2024-11-26 21:07:15.184851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.366 [2024-11-26 21:07:15.184878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.366 qpair failed and we were unable to recover it. 00:26:24.366 [2024-11-26 21:07:15.185017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.366 [2024-11-26 21:07:15.185046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.366 qpair failed and we were unable to recover it. 00:26:24.366 [2024-11-26 21:07:15.185170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.366 [2024-11-26 21:07:15.185201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.366 qpair failed and we were unable to recover it. 00:26:24.366 [2024-11-26 21:07:15.185353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.366 [2024-11-26 21:07:15.185405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.366 qpair failed and we were unable to recover it. 
00:26:24.366 [2024-11-26 21:07:15.185567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.366 [2024-11-26 21:07:15.185594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.366 qpair failed and we were unable to recover it. 00:26:24.366 [2024-11-26 21:07:15.185736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.366 [2024-11-26 21:07:15.185761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.366 qpair failed and we were unable to recover it. 00:26:24.366 [2024-11-26 21:07:15.185868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.366 [2024-11-26 21:07:15.185895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.366 qpair failed and we were unable to recover it. 00:26:24.366 [2024-11-26 21:07:15.186071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.366 [2024-11-26 21:07:15.186099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.366 qpair failed and we were unable to recover it. 00:26:24.366 [2024-11-26 21:07:15.186235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.366 [2024-11-26 21:07:15.186279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.366 qpair failed and we were unable to recover it. 
00:26:24.366 [2024-11-26 21:07:15.186444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.366 [2024-11-26 21:07:15.186472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.366 qpair failed and we were unable to recover it. 00:26:24.366 [2024-11-26 21:07:15.186645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.366 [2024-11-26 21:07:15.186673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.366 qpair failed and we were unable to recover it. 00:26:24.366 [2024-11-26 21:07:15.186846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.366 [2024-11-26 21:07:15.186871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.366 qpair failed and we were unable to recover it. 00:26:24.366 [2024-11-26 21:07:15.186993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.366 [2024-11-26 21:07:15.187020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.366 qpair failed and we were unable to recover it. 00:26:24.366 [2024-11-26 21:07:15.187133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.366 [2024-11-26 21:07:15.187162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.366 qpair failed and we were unable to recover it. 
00:26:24.366 [2024-11-26 21:07:15.187312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.366 [2024-11-26 21:07:15.187342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.366 qpair failed and we were unable to recover it. 00:26:24.366 [2024-11-26 21:07:15.187520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.366 [2024-11-26 21:07:15.187549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.366 qpair failed and we were unable to recover it. 00:26:24.366 [2024-11-26 21:07:15.187703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.366 [2024-11-26 21:07:15.187729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.366 qpair failed and we were unable to recover it. 00:26:24.366 [2024-11-26 21:07:15.187855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.366 [2024-11-26 21:07:15.187895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.366 qpair failed and we were unable to recover it. 00:26:24.366 [2024-11-26 21:07:15.188058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.366 [2024-11-26 21:07:15.188090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.366 qpair failed and we were unable to recover it. 
00:26:24.366 [2024-11-26 21:07:15.188238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.367 [2024-11-26 21:07:15.188268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.367 qpair failed and we were unable to recover it. 00:26:24.367 [2024-11-26 21:07:15.188439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.367 [2024-11-26 21:07:15.188483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.367 qpair failed and we were unable to recover it. 00:26:24.367 [2024-11-26 21:07:15.188588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.367 [2024-11-26 21:07:15.188616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.367 qpair failed and we were unable to recover it. 00:26:24.367 [2024-11-26 21:07:15.188787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.367 [2024-11-26 21:07:15.188833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.367 qpair failed and we were unable to recover it. 00:26:24.367 [2024-11-26 21:07:15.189005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.367 [2024-11-26 21:07:15.189050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.367 qpair failed and we were unable to recover it. 
00:26:24.367 [2024-11-26 21:07:15.189270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.367 [2024-11-26 21:07:15.189314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.367 qpair failed and we were unable to recover it. 00:26:24.367 [2024-11-26 21:07:15.189426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.367 [2024-11-26 21:07:15.189453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.367 qpair failed and we were unable to recover it. 00:26:24.367 [2024-11-26 21:07:15.189603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.367 [2024-11-26 21:07:15.189631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.367 qpair failed and we were unable to recover it. 00:26:24.367 [2024-11-26 21:07:15.189808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.367 [2024-11-26 21:07:15.189853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.367 qpair failed and we were unable to recover it. 00:26:24.367 [2024-11-26 21:07:15.189963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.367 [2024-11-26 21:07:15.189990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.367 qpair failed and we were unable to recover it. 
00:26:24.367 [2024-11-26 21:07:15.190097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.367 [2024-11-26 21:07:15.190124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.367 qpair failed and we were unable to recover it. 00:26:24.367 [2024-11-26 21:07:15.190229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.367 [2024-11-26 21:07:15.190262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.367 qpair failed and we were unable to recover it. 00:26:24.367 [2024-11-26 21:07:15.190403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.367 [2024-11-26 21:07:15.190430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.367 qpair failed and we were unable to recover it. 00:26:24.367 [2024-11-26 21:07:15.190580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.367 [2024-11-26 21:07:15.190620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.367 qpair failed and we were unable to recover it. 00:26:24.367 [2024-11-26 21:07:15.190761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.367 [2024-11-26 21:07:15.190790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.367 qpair failed and we were unable to recover it. 
00:26:24.367 [2024-11-26 21:07:15.190913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.367 [2024-11-26 21:07:15.190940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.367 qpair failed and we were unable to recover it. 00:26:24.367 [2024-11-26 21:07:15.191100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.367 [2024-11-26 21:07:15.191127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.367 qpair failed and we were unable to recover it. 00:26:24.367 [2024-11-26 21:07:15.191235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.367 [2024-11-26 21:07:15.191262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.367 qpair failed and we were unable to recover it. 00:26:24.367 [2024-11-26 21:07:15.191380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.367 [2024-11-26 21:07:15.191406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.367 qpair failed and we were unable to recover it. 00:26:24.367 [2024-11-26 21:07:15.191542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.367 [2024-11-26 21:07:15.191570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.367 qpair failed and we were unable to recover it. 
00:26:24.367 [2024-11-26 21:07:15.191700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.367 [2024-11-26 21:07:15.191727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.367 qpair failed and we were unable to recover it. 00:26:24.367 [2024-11-26 21:07:15.191890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.367 [2024-11-26 21:07:15.191935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.367 qpair failed and we were unable to recover it. 00:26:24.367 [2024-11-26 21:07:15.192080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.367 [2024-11-26 21:07:15.192124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.367 qpair failed and we were unable to recover it. 00:26:24.367 [2024-11-26 21:07:15.192327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.367 [2024-11-26 21:07:15.192374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.367 qpair failed and we were unable to recover it. 00:26:24.367 [2024-11-26 21:07:15.192505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.367 [2024-11-26 21:07:15.192533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.367 qpair failed and we were unable to recover it. 
00:26:24.367 [2024-11-26 21:07:15.192679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.367 [2024-11-26 21:07:15.192733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.367 qpair failed and we were unable to recover it. 00:26:24.367 [2024-11-26 21:07:15.192864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.367 [2024-11-26 21:07:15.192896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.367 qpair failed and we were unable to recover it. 00:26:24.367 [2024-11-26 21:07:15.193034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.367 [2024-11-26 21:07:15.193077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.367 qpair failed and we were unable to recover it. 00:26:24.367 [2024-11-26 21:07:15.193248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.367 [2024-11-26 21:07:15.193299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.367 qpair failed and we were unable to recover it. 00:26:24.367 [2024-11-26 21:07:15.193497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.367 [2024-11-26 21:07:15.193555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.367 qpair failed and we were unable to recover it. 
00:26:24.367 [2024-11-26 21:07:15.193727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.367 [2024-11-26 21:07:15.193754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.367 qpair failed and we were unable to recover it. 00:26:24.367 [2024-11-26 21:07:15.193899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.367 [2024-11-26 21:07:15.193928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.367 qpair failed and we were unable to recover it. 00:26:24.367 [2024-11-26 21:07:15.194105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.367 [2024-11-26 21:07:15.194134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.367 qpair failed and we were unable to recover it. 00:26:24.367 [2024-11-26 21:07:15.194278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.367 [2024-11-26 21:07:15.194307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.367 qpair failed and we were unable to recover it. 00:26:24.367 [2024-11-26 21:07:15.194501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.367 [2024-11-26 21:07:15.194555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.367 qpair failed and we were unable to recover it. 
00:26:24.367 [2024-11-26 21:07:15.194723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.367 [2024-11-26 21:07:15.194750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.367 qpair failed and we were unable to recover it. 00:26:24.367 [2024-11-26 21:07:15.194859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.367 [2024-11-26 21:07:15.194886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.367 qpair failed and we were unable to recover it. 00:26:24.367 [2024-11-26 21:07:15.195067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.367 [2024-11-26 21:07:15.195096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.367 qpair failed and we were unable to recover it. 00:26:24.367 [2024-11-26 21:07:15.195236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.367 [2024-11-26 21:07:15.195294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.367 qpair failed and we were unable to recover it. 00:26:24.367 [2024-11-26 21:07:15.195434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.367 [2024-11-26 21:07:15.195463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.367 qpair failed and we were unable to recover it. 
00:26:24.367 [2024-11-26 21:07:15.195624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.367 [2024-11-26 21:07:15.195653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.367 qpair failed and we were unable to recover it. 00:26:24.367 [2024-11-26 21:07:15.195815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.367 [2024-11-26 21:07:15.195869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.368 qpair failed and we were unable to recover it. 00:26:24.368 [2024-11-26 21:07:15.196039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.368 [2024-11-26 21:07:15.196071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.368 qpair failed and we were unable to recover it. 00:26:24.368 [2024-11-26 21:07:15.196278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.368 [2024-11-26 21:07:15.196329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.368 qpair failed and we were unable to recover it. 00:26:24.368 [2024-11-26 21:07:15.196501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.368 [2024-11-26 21:07:15.196554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.368 qpair failed and we were unable to recover it. 
00:26:24.368 [2024-11-26 21:07:15.196776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.368 [2024-11-26 21:07:15.196804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.368 qpair failed and we were unable to recover it. 00:26:24.368 [2024-11-26 21:07:15.196983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.368 [2024-11-26 21:07:15.197014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.368 qpair failed and we were unable to recover it. 00:26:24.368 [2024-11-26 21:07:15.197139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.368 [2024-11-26 21:07:15.197183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.368 qpair failed and we were unable to recover it. 00:26:24.368 [2024-11-26 21:07:15.197340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.368 [2024-11-26 21:07:15.197370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.368 qpair failed and we were unable to recover it. 00:26:24.368 [2024-11-26 21:07:15.197520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.368 [2024-11-26 21:07:15.197551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.368 qpair failed and we were unable to recover it. 
00:26:24.368 [2024-11-26 21:07:15.197747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.368 [2024-11-26 21:07:15.197787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.368 qpair failed and we were unable to recover it. 00:26:24.368 [2024-11-26 21:07:15.197929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.368 [2024-11-26 21:07:15.197958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.368 qpair failed and we were unable to recover it. 00:26:24.368 [2024-11-26 21:07:15.198095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.368 [2024-11-26 21:07:15.198141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.368 qpair failed and we were unable to recover it. 00:26:24.368 [2024-11-26 21:07:15.198279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.368 [2024-11-26 21:07:15.198336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.368 qpair failed and we were unable to recover it. 00:26:24.368 [2024-11-26 21:07:15.198498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.368 [2024-11-26 21:07:15.198526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.368 qpair failed and we were unable to recover it. 
00:26:24.368 [2024-11-26 21:07:15.198675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.368 [2024-11-26 21:07:15.198710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.368 qpair failed and we were unable to recover it. 00:26:24.368 [2024-11-26 21:07:15.198842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.368 [2024-11-26 21:07:15.198869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.368 qpair failed and we were unable to recover it. 00:26:24.368 [2024-11-26 21:07:15.198995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.368 [2024-11-26 21:07:15.199040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.368 qpair failed and we were unable to recover it. 00:26:24.368 [2024-11-26 21:07:15.199195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.368 [2024-11-26 21:07:15.199242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.368 qpair failed and we were unable to recover it. 00:26:24.368 [2024-11-26 21:07:15.199377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.368 [2024-11-26 21:07:15.199427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.368 qpair failed and we were unable to recover it. 
00:26:24.368 [2024-11-26 21:07:15.199540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.368 [2024-11-26 21:07:15.199567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.368 qpair failed and we were unable to recover it. 00:26:24.368 [2024-11-26 21:07:15.199701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.368 [2024-11-26 21:07:15.199729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.368 qpair failed and we were unable to recover it. 00:26:24.368 [2024-11-26 21:07:15.199915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.368 [2024-11-26 21:07:15.199966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.368 qpair failed and we were unable to recover it. 00:26:24.368 [2024-11-26 21:07:15.200107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.368 [2024-11-26 21:07:15.200134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.368 qpair failed and we were unable to recover it. 00:26:24.368 [2024-11-26 21:07:15.200292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.368 [2024-11-26 21:07:15.200336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.368 qpair failed and we were unable to recover it. 
00:26:24.368 [2024-11-26 21:07:15.200454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.368 [2024-11-26 21:07:15.200483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.368 qpair failed and we were unable to recover it. 00:26:24.368 [2024-11-26 21:07:15.200599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.368 [2024-11-26 21:07:15.200627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.368 qpair failed and we were unable to recover it. 00:26:24.368 [2024-11-26 21:07:15.200780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.368 [2024-11-26 21:07:15.200811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.368 qpair failed and we were unable to recover it. 00:26:24.368 [2024-11-26 21:07:15.200921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.368 [2024-11-26 21:07:15.200951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.368 qpair failed and we were unable to recover it. 00:26:24.368 [2024-11-26 21:07:15.201098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.368 [2024-11-26 21:07:15.201128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.368 qpair failed and we were unable to recover it. 
00:26:24.368 [2024-11-26 21:07:15.201283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.368 [2024-11-26 21:07:15.201313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.368 qpair failed and we were unable to recover it. 00:26:24.368 [2024-11-26 21:07:15.201500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.368 [2024-11-26 21:07:15.201547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.368 qpair failed and we were unable to recover it. 00:26:24.368 [2024-11-26 21:07:15.201665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.368 [2024-11-26 21:07:15.201701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.369 qpair failed and we were unable to recover it. 00:26:24.369 [2024-11-26 21:07:15.201865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.369 [2024-11-26 21:07:15.201904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.369 qpair failed and we were unable to recover it. 00:26:24.369 [2024-11-26 21:07:15.202039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.369 [2024-11-26 21:07:15.202085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.369 qpair failed and we were unable to recover it. 
00:26:24.369 [2024-11-26 21:07:15.202242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.369 [2024-11-26 21:07:15.202287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.369 qpair failed and we were unable to recover it. 00:26:24.369 [2024-11-26 21:07:15.202441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.369 [2024-11-26 21:07:15.202471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.369 qpair failed and we were unable to recover it. 00:26:24.369 [2024-11-26 21:07:15.202615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.369 [2024-11-26 21:07:15.202642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.369 qpair failed and we were unable to recover it. 00:26:24.369 [2024-11-26 21:07:15.202789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.369 [2024-11-26 21:07:15.202817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.369 qpair failed and we were unable to recover it. 00:26:24.369 [2024-11-26 21:07:15.202988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.369 [2024-11-26 21:07:15.203029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.369 qpair failed and we were unable to recover it. 
00:26:24.369 [2024-11-26 21:07:15.203182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.369 [2024-11-26 21:07:15.203209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.369 qpair failed and we were unable to recover it. 00:26:24.369 [2024-11-26 21:07:15.203367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.369 [2024-11-26 21:07:15.203394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.369 qpair failed and we were unable to recover it. 00:26:24.369 [2024-11-26 21:07:15.203499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.369 [2024-11-26 21:07:15.203528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.369 qpair failed and we were unable to recover it. 00:26:24.369 [2024-11-26 21:07:15.203695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.369 [2024-11-26 21:07:15.203723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.369 qpair failed and we were unable to recover it. 00:26:24.369 [2024-11-26 21:07:15.203882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.369 [2024-11-26 21:07:15.203928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.369 qpair failed and we were unable to recover it. 
00:26:24.369 [2024-11-26 21:07:15.204029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.369 [2024-11-26 21:07:15.204057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.369 qpair failed and we were unable to recover it. 00:26:24.369 [2024-11-26 21:07:15.204184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.369 [2024-11-26 21:07:15.204235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.369 qpair failed and we were unable to recover it. 00:26:24.369 [2024-11-26 21:07:15.204423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.369 [2024-11-26 21:07:15.204468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.369 qpair failed and we were unable to recover it. 00:26:24.369 [2024-11-26 21:07:15.204572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.369 [2024-11-26 21:07:15.204599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.369 qpair failed and we were unable to recover it. 00:26:24.369 [2024-11-26 21:07:15.204776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.369 [2024-11-26 21:07:15.204821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.369 qpair failed and we were unable to recover it. 
00:26:24.369 [2024-11-26 21:07:15.204975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.369 [2024-11-26 21:07:15.205006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.369 qpair failed and we were unable to recover it. 00:26:24.369 [2024-11-26 21:07:15.205167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.369 [2024-11-26 21:07:15.205194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.369 qpair failed and we were unable to recover it. 00:26:24.369 [2024-11-26 21:07:15.205346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.369 [2024-11-26 21:07:15.205373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.369 qpair failed and we were unable to recover it. 00:26:24.369 [2024-11-26 21:07:15.205511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.369 [2024-11-26 21:07:15.205538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.369 qpair failed and we were unable to recover it. 00:26:24.369 [2024-11-26 21:07:15.205633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.369 [2024-11-26 21:07:15.205659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.369 qpair failed and we were unable to recover it. 
00:26:24.369 [2024-11-26 21:07:15.205807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.369 [2024-11-26 21:07:15.205837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.369 qpair failed and we were unable to recover it. 00:26:24.369 [2024-11-26 21:07:15.205955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.369 [2024-11-26 21:07:15.205985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.369 qpair failed and we were unable to recover it. 00:26:24.369 [2024-11-26 21:07:15.206122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.369 [2024-11-26 21:07:15.206151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.369 qpair failed and we were unable to recover it. 00:26:24.369 [2024-11-26 21:07:15.206329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.369 [2024-11-26 21:07:15.206375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.369 qpair failed and we were unable to recover it. 00:26:24.369 [2024-11-26 21:07:15.206509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.369 [2024-11-26 21:07:15.206536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.369 qpair failed and we were unable to recover it. 
00:26:24.369 [2024-11-26 21:07:15.206665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.369 [2024-11-26 21:07:15.206706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.369 qpair failed and we were unable to recover it. 00:26:24.369 [2024-11-26 21:07:15.206836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.369 [2024-11-26 21:07:15.206880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.369 qpair failed and we were unable to recover it. 00:26:24.369 [2024-11-26 21:07:15.207039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.369 [2024-11-26 21:07:15.207084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.369 qpair failed and we were unable to recover it. 00:26:24.369 [2024-11-26 21:07:15.207264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.369 [2024-11-26 21:07:15.207309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.369 qpair failed and we were unable to recover it. 00:26:24.369 [2024-11-26 21:07:15.207414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.370 [2024-11-26 21:07:15.207442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.370 qpair failed and we were unable to recover it. 
00:26:24.370 [2024-11-26 21:07:15.207544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.370 [2024-11-26 21:07:15.207576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.370 qpair failed and we were unable to recover it. 00:26:24.370 [2024-11-26 21:07:15.207673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.370 [2024-11-26 21:07:15.207709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.370 qpair failed and we were unable to recover it. 00:26:24.370 [2024-11-26 21:07:15.207856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.370 [2024-11-26 21:07:15.207902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.370 qpair failed and we were unable to recover it. 00:26:24.370 [2024-11-26 21:07:15.208088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.370 [2024-11-26 21:07:15.208118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.370 qpair failed and we were unable to recover it. 00:26:24.370 [2024-11-26 21:07:15.208359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.370 [2024-11-26 21:07:15.208411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.370 qpair failed and we were unable to recover it. 
00:26:24.370 [2024-11-26 21:07:15.208552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.370 [2024-11-26 21:07:15.208579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.370 qpair failed and we were unable to recover it. 00:26:24.370 [2024-11-26 21:07:15.208764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.370 [2024-11-26 21:07:15.208810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.370 qpair failed and we were unable to recover it. 00:26:24.370 [2024-11-26 21:07:15.208925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.370 [2024-11-26 21:07:15.208952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.370 qpair failed and we were unable to recover it. 00:26:24.370 [2024-11-26 21:07:15.209082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.370 [2024-11-26 21:07:15.209109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.370 qpair failed and we were unable to recover it. 00:26:24.370 [2024-11-26 21:07:15.209247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.370 [2024-11-26 21:07:15.209274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.370 qpair failed and we were unable to recover it. 
00:26:24.370 [2024-11-26 21:07:15.209439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.370 [2024-11-26 21:07:15.209479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.370 qpair failed and we were unable to recover it. 00:26:24.370 [2024-11-26 21:07:15.209625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.370 [2024-11-26 21:07:15.209653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.370 qpair failed and we were unable to recover it. 00:26:24.370 [2024-11-26 21:07:15.209778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.370 [2024-11-26 21:07:15.209806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.370 qpair failed and we were unable to recover it. 00:26:24.370 [2024-11-26 21:07:15.209910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.370 [2024-11-26 21:07:15.209937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.370 qpair failed and we were unable to recover it. 00:26:24.370 [2024-11-26 21:07:15.210067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.370 [2024-11-26 21:07:15.210097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.370 qpair failed and we were unable to recover it. 
00:26:24.370 [2024-11-26 21:07:15.210272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.370 [2024-11-26 21:07:15.210302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.370 qpair failed and we were unable to recover it. 00:26:24.370 [2024-11-26 21:07:15.210474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.370 [2024-11-26 21:07:15.210505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.370 qpair failed and we were unable to recover it. 00:26:24.370 [2024-11-26 21:07:15.210644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.370 [2024-11-26 21:07:15.210698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.370 qpair failed and we were unable to recover it. 00:26:24.370 [2024-11-26 21:07:15.210843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.370 [2024-11-26 21:07:15.210872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.370 qpair failed and we were unable to recover it. 00:26:24.370 [2024-11-26 21:07:15.211062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.370 [2024-11-26 21:07:15.211093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.370 qpair failed and we were unable to recover it. 
00:26:24.370 [2024-11-26 21:07:15.211216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.370 [2024-11-26 21:07:15.211243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.370 qpair failed and we were unable to recover it. 00:26:24.370 [2024-11-26 21:07:15.211432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.370 [2024-11-26 21:07:15.211462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.370 qpair failed and we were unable to recover it. 00:26:24.370 [2024-11-26 21:07:15.211591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.370 [2024-11-26 21:07:15.211629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.370 qpair failed and we were unable to recover it. 00:26:24.370 [2024-11-26 21:07:15.211780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.370 [2024-11-26 21:07:15.211820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.370 qpair failed and we were unable to recover it. 00:26:24.370 [2024-11-26 21:07:15.211957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.370 [2024-11-26 21:07:15.212002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.370 qpair failed and we were unable to recover it. 
00:26:24.370 [2024-11-26 21:07:15.212118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.370 [2024-11-26 21:07:15.212149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.370 qpair failed and we were unable to recover it. 00:26:24.370 [2024-11-26 21:07:15.212328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.370 [2024-11-26 21:07:15.212358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.370 qpair failed and we were unable to recover it. 00:26:24.370 [2024-11-26 21:07:15.212526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.370 [2024-11-26 21:07:15.212557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.370 qpair failed and we were unable to recover it. 00:26:24.370 [2024-11-26 21:07:15.212700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.370 [2024-11-26 21:07:15.212738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.370 qpair failed and we were unable to recover it. 00:26:24.370 [2024-11-26 21:07:15.212845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.370 [2024-11-26 21:07:15.212872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.370 qpair failed and we were unable to recover it. 
00:26:24.370 [2024-11-26 21:07:15.213052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.370 [2024-11-26 21:07:15.213097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.370 qpair failed and we were unable to recover it. 00:26:24.370 [2024-11-26 21:07:15.213248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.370 [2024-11-26 21:07:15.213298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.370 qpair failed and we were unable to recover it. 00:26:24.370 [2024-11-26 21:07:15.213508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.370 [2024-11-26 21:07:15.213557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.370 qpair failed and we were unable to recover it. 00:26:24.370 [2024-11-26 21:07:15.213696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.370 [2024-11-26 21:07:15.213734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.370 qpair failed and we were unable to recover it. 00:26:24.370 [2024-11-26 21:07:15.213887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.370 [2024-11-26 21:07:15.213932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.370 qpair failed and we were unable to recover it. 
00:26:24.370 [2024-11-26 21:07:15.214085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.370 [2024-11-26 21:07:15.214130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.370 qpair failed and we were unable to recover it. 00:26:24.370 [2024-11-26 21:07:15.214297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.370 [2024-11-26 21:07:15.214351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.370 qpair failed and we were unable to recover it. 00:26:24.370 [2024-11-26 21:07:15.214483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.371 [2024-11-26 21:07:15.214510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.371 qpair failed and we were unable to recover it. 00:26:24.371 [2024-11-26 21:07:15.214608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.371 [2024-11-26 21:07:15.214635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.371 qpair failed and we were unable to recover it. 00:26:24.371 [2024-11-26 21:07:15.214747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.371 [2024-11-26 21:07:15.214776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.371 qpair failed and we were unable to recover it. 
00:26:24.371 [2024-11-26 21:07:15.214889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.371 [2024-11-26 21:07:15.214916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.371 qpair failed and we were unable to recover it. 00:26:24.371 [2024-11-26 21:07:15.215070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.371 [2024-11-26 21:07:15.215097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.371 qpair failed and we were unable to recover it. 00:26:24.371 [2024-11-26 21:07:15.215201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.371 [2024-11-26 21:07:15.215229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.371 qpair failed and we were unable to recover it. 00:26:24.371 [2024-11-26 21:07:15.215334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.371 [2024-11-26 21:07:15.215361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.371 qpair failed and we were unable to recover it. 00:26:24.371 [2024-11-26 21:07:15.215501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.371 [2024-11-26 21:07:15.215529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.371 qpair failed and we were unable to recover it. 
00:26:24.371 [2024-11-26 21:07:15.215660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.371 [2024-11-26 21:07:15.215694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.371 qpair failed and we were unable to recover it. 00:26:24.371 [2024-11-26 21:07:15.215837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.371 [2024-11-26 21:07:15.215868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.371 qpair failed and we were unable to recover it. 00:26:24.371 [2024-11-26 21:07:15.216096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.371 [2024-11-26 21:07:15.216140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.371 qpair failed and we were unable to recover it. 00:26:24.371 [2024-11-26 21:07:15.216274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.371 [2024-11-26 21:07:15.216307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.371 qpair failed and we were unable to recover it. 00:26:24.371 [2024-11-26 21:07:15.216423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.371 [2024-11-26 21:07:15.216454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.371 qpair failed and we were unable to recover it. 
00:26:24.371 [2024-11-26 21:07:15.216629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.371 [2024-11-26 21:07:15.216656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.371 qpair failed and we were unable to recover it.
(previous posix.c:1054 / nvme_tcp.c:2288 / "qpair failed" record repeated 54 times for tqpair=0x637fa0, 21:07:15.216629 through 21:07:15.225890)
00:26:24.372 [2024-11-26 21:07:15.226024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.372 [2024-11-26 21:07:15.226083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.372 qpair failed and we were unable to recover it.
(previous record repeated 39 times for tqpair=0x7feef0000b90, 21:07:15.226024 through 21:07:15.232665)
00:26:24.373 [2024-11-26 21:07:15.232819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.373 [2024-11-26 21:07:15.232858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.373 qpair failed and we were unable to recover it.
(previous record repeated 22 times for tqpair=0x7feef8000b90, 21:07:15.232819 through 21:07:15.236520)
00:26:24.657 [2024-11-26 21:07:15.236648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.657 [2024-11-26 21:07:15.236700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.657 qpair failed and we were unable to recover it. 00:26:24.657 [2024-11-26 21:07:15.236875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.657 [2024-11-26 21:07:15.236904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.657 qpair failed and we were unable to recover it. 00:26:24.657 [2024-11-26 21:07:15.237020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.658 [2024-11-26 21:07:15.237047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.658 qpair failed and we were unable to recover it. 00:26:24.658 [2024-11-26 21:07:15.237150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.658 [2024-11-26 21:07:15.237178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.658 qpair failed and we were unable to recover it. 00:26:24.658 [2024-11-26 21:07:15.237377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.658 [2024-11-26 21:07:15.237425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.658 qpair failed and we were unable to recover it. 
00:26:24.658 [2024-11-26 21:07:15.237576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.658 [2024-11-26 21:07:15.237606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.658 qpair failed and we were unable to recover it. 00:26:24.658 [2024-11-26 21:07:15.237765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.658 [2024-11-26 21:07:15.237792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.658 qpair failed and we were unable to recover it. 00:26:24.658 [2024-11-26 21:07:15.237905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.658 [2024-11-26 21:07:15.237934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.658 qpair failed and we were unable to recover it. 00:26:24.658 [2024-11-26 21:07:15.238088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.658 [2024-11-26 21:07:15.238115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.658 qpair failed and we were unable to recover it. 00:26:24.658 [2024-11-26 21:07:15.238270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.658 [2024-11-26 21:07:15.238300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.658 qpair failed and we were unable to recover it. 
00:26:24.658 [2024-11-26 21:07:15.238413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.658 [2024-11-26 21:07:15.238450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.658 qpair failed and we were unable to recover it. 00:26:24.658 [2024-11-26 21:07:15.238571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.658 [2024-11-26 21:07:15.238601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.658 qpair failed and we were unable to recover it. 00:26:24.658 [2024-11-26 21:07:15.238740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.658 [2024-11-26 21:07:15.238768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.658 qpair failed and we were unable to recover it. 00:26:24.658 [2024-11-26 21:07:15.238874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.658 [2024-11-26 21:07:15.238901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.658 qpair failed and we were unable to recover it. 00:26:24.658 [2024-11-26 21:07:15.239001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.658 [2024-11-26 21:07:15.239027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.658 qpair failed and we were unable to recover it. 
00:26:24.658 [2024-11-26 21:07:15.239165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.658 [2024-11-26 21:07:15.239216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.658 qpair failed and we were unable to recover it. 00:26:24.658 [2024-11-26 21:07:15.239394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.658 [2024-11-26 21:07:15.239424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.658 qpair failed and we were unable to recover it. 00:26:24.658 [2024-11-26 21:07:15.239540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.658 [2024-11-26 21:07:15.239571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.658 qpair failed and we were unable to recover it. 00:26:24.658 [2024-11-26 21:07:15.239753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.658 [2024-11-26 21:07:15.239782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.658 qpair failed and we were unable to recover it. 00:26:24.658 [2024-11-26 21:07:15.239934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.658 [2024-11-26 21:07:15.239978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.658 qpair failed and we were unable to recover it. 
00:26:24.658 [2024-11-26 21:07:15.240155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.658 [2024-11-26 21:07:15.240185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.658 qpair failed and we were unable to recover it. 00:26:24.658 [2024-11-26 21:07:15.240362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.658 [2024-11-26 21:07:15.240392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.658 qpair failed and we were unable to recover it. 00:26:24.658 [2024-11-26 21:07:15.240554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.658 [2024-11-26 21:07:15.240581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.658 qpair failed and we were unable to recover it. 00:26:24.658 [2024-11-26 21:07:15.240718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.658 [2024-11-26 21:07:15.240751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.658 qpair failed and we were unable to recover it. 00:26:24.658 [2024-11-26 21:07:15.240927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.658 [2024-11-26 21:07:15.240972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.658 qpair failed and we were unable to recover it. 
00:26:24.658 [2024-11-26 21:07:15.241095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.658 [2024-11-26 21:07:15.241135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.658 qpair failed and we were unable to recover it. 00:26:24.658 [2024-11-26 21:07:15.241332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.658 [2024-11-26 21:07:15.241386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.658 qpair failed and we were unable to recover it. 00:26:24.658 [2024-11-26 21:07:15.241529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.658 [2024-11-26 21:07:15.241571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.658 qpair failed and we were unable to recover it. 00:26:24.658 [2024-11-26 21:07:15.241744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.658 [2024-11-26 21:07:15.241772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.658 qpair failed and we were unable to recover it. 00:26:24.658 [2024-11-26 21:07:15.241902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.658 [2024-11-26 21:07:15.241930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.658 qpair failed and we were unable to recover it. 
00:26:24.658 [2024-11-26 21:07:15.242056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.658 [2024-11-26 21:07:15.242101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.658 qpair failed and we were unable to recover it. 00:26:24.658 [2024-11-26 21:07:15.242221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.658 [2024-11-26 21:07:15.242252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.658 qpair failed and we were unable to recover it. 00:26:24.658 [2024-11-26 21:07:15.242376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.658 [2024-11-26 21:07:15.242422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.658 qpair failed and we were unable to recover it. 00:26:24.658 [2024-11-26 21:07:15.242568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.658 [2024-11-26 21:07:15.242598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.658 qpair failed and we were unable to recover it. 00:26:24.658 [2024-11-26 21:07:15.242704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.658 [2024-11-26 21:07:15.242742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.658 qpair failed and we were unable to recover it. 
00:26:24.658 [2024-11-26 21:07:15.242877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.658 [2024-11-26 21:07:15.242903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.658 qpair failed and we were unable to recover it. 00:26:24.658 [2024-11-26 21:07:15.243073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.658 [2024-11-26 21:07:15.243100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.658 qpair failed and we were unable to recover it. 00:26:24.658 [2024-11-26 21:07:15.243270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.658 [2024-11-26 21:07:15.243301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.658 qpair failed and we were unable to recover it. 00:26:24.658 [2024-11-26 21:07:15.243453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.658 [2024-11-26 21:07:15.243482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.658 qpair failed and we were unable to recover it. 00:26:24.658 [2024-11-26 21:07:15.243606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.658 [2024-11-26 21:07:15.243638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.658 qpair failed and we were unable to recover it. 
00:26:24.658 [2024-11-26 21:07:15.243800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.659 [2024-11-26 21:07:15.243828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.659 qpair failed and we were unable to recover it. 00:26:24.659 [2024-11-26 21:07:15.243968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.659 [2024-11-26 21:07:15.244002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.659 qpair failed and we were unable to recover it. 00:26:24.659 [2024-11-26 21:07:15.244166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.659 [2024-11-26 21:07:15.244219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.659 qpair failed and we were unable to recover it. 00:26:24.659 [2024-11-26 21:07:15.244467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.659 [2024-11-26 21:07:15.244517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.659 qpair failed and we were unable to recover it. 00:26:24.659 [2024-11-26 21:07:15.244664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.659 [2024-11-26 21:07:15.244698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.659 qpair failed and we were unable to recover it. 
00:26:24.659 [2024-11-26 21:07:15.244807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.659 [2024-11-26 21:07:15.244836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.659 qpair failed and we were unable to recover it. 00:26:24.659 [2024-11-26 21:07:15.244985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.659 [2024-11-26 21:07:15.245015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.659 qpair failed and we were unable to recover it. 00:26:24.659 [2024-11-26 21:07:15.245212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.659 [2024-11-26 21:07:15.245265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.659 qpair failed and we were unable to recover it. 00:26:24.659 [2024-11-26 21:07:15.245413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.659 [2024-11-26 21:07:15.245440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.659 qpair failed and we were unable to recover it. 00:26:24.659 [2024-11-26 21:07:15.245619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.659 [2024-11-26 21:07:15.245664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.659 qpair failed and we were unable to recover it. 
00:26:24.659 [2024-11-26 21:07:15.245842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.659 [2024-11-26 21:07:15.245872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.659 qpair failed and we were unable to recover it. 00:26:24.659 [2024-11-26 21:07:15.246026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.659 [2024-11-26 21:07:15.246055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.659 qpair failed and we were unable to recover it. 00:26:24.659 [2024-11-26 21:07:15.246163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.659 [2024-11-26 21:07:15.246191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.659 qpair failed and we were unable to recover it. 00:26:24.659 [2024-11-26 21:07:15.246361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.659 [2024-11-26 21:07:15.246420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.659 qpair failed and we were unable to recover it. 00:26:24.659 [2024-11-26 21:07:15.246583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.659 [2024-11-26 21:07:15.246629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.659 qpair failed and we were unable to recover it. 
00:26:24.659 [2024-11-26 21:07:15.246745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.659 [2024-11-26 21:07:15.246772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.659 qpair failed and we were unable to recover it. 00:26:24.659 [2024-11-26 21:07:15.246908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.659 [2024-11-26 21:07:15.246935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.659 qpair failed and we were unable to recover it. 00:26:24.659 [2024-11-26 21:07:15.247087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.659 [2024-11-26 21:07:15.247133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.659 qpair failed and we were unable to recover it. 00:26:24.659 [2024-11-26 21:07:15.247286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.659 [2024-11-26 21:07:15.247331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.659 qpair failed and we were unable to recover it. 00:26:24.659 [2024-11-26 21:07:15.247475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.659 [2024-11-26 21:07:15.247523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.659 qpair failed and we were unable to recover it. 
00:26:24.659 [2024-11-26 21:07:15.247710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.659 [2024-11-26 21:07:15.247747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.659 qpair failed and we were unable to recover it. 00:26:24.659 [2024-11-26 21:07:15.247852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.659 [2024-11-26 21:07:15.247879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.659 qpair failed and we were unable to recover it. 00:26:24.659 [2024-11-26 21:07:15.248067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.659 [2024-11-26 21:07:15.248113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.659 qpair failed and we were unable to recover it. 00:26:24.659 [2024-11-26 21:07:15.248253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.659 [2024-11-26 21:07:15.248302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.659 qpair failed and we were unable to recover it. 00:26:24.659 [2024-11-26 21:07:15.248471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.659 [2024-11-26 21:07:15.248522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.659 qpair failed and we were unable to recover it. 
00:26:24.659 [2024-11-26 21:07:15.248666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.659 [2024-11-26 21:07:15.248704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.659 qpair failed and we were unable to recover it. 00:26:24.659 [2024-11-26 21:07:15.248824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.659 [2024-11-26 21:07:15.248853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.659 qpair failed and we were unable to recover it. 00:26:24.659 [2024-11-26 21:07:15.248994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.659 [2024-11-26 21:07:15.249021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.659 qpair failed and we were unable to recover it. 00:26:24.659 [2024-11-26 21:07:15.249193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.659 [2024-11-26 21:07:15.249242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.659 qpair failed and we were unable to recover it. 00:26:24.659 [2024-11-26 21:07:15.249365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.659 [2024-11-26 21:07:15.249395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.659 qpair failed and we were unable to recover it. 
00:26:24.659 [2024-11-26 21:07:15.249548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.659 [2024-11-26 21:07:15.249578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.659 qpair failed and we were unable to recover it. 00:26:24.659 [2024-11-26 21:07:15.249738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.659 [2024-11-26 21:07:15.249767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.659 qpair failed and we were unable to recover it. 00:26:24.659 [2024-11-26 21:07:15.249879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.659 [2024-11-26 21:07:15.249907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.659 qpair failed and we were unable to recover it. 00:26:24.659 [2024-11-26 21:07:15.250074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.659 [2024-11-26 21:07:15.250130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.659 qpair failed and we were unable to recover it. 00:26:24.659 [2024-11-26 21:07:15.250317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.659 [2024-11-26 21:07:15.250369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.659 qpair failed and we were unable to recover it. 
00:26:24.659 [2024-11-26 21:07:15.250571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.659 [2024-11-26 21:07:15.250620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.659 qpair failed and we were unable to recover it. 
[... the two-line error pair above (posix.c:1054 connect() failed, errno = 111, i.e. ECONNREFUSED, followed by nvme_tcp.c:2288 sock connection error and "qpair failed and we were unable to recover it.") repeats continuously from 21:07:15.250571 through 21:07:15.272118, cycling over tqpairs 0x637fa0, 0x7feeec000b90, 0x7feef0000b90, and 0x7feef8000b90, all targeting addr=10.0.0.2, port=4420; duplicate entries elided ...]
00:26:24.663 [2024-11-26 21:07:15.272091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.663 [2024-11-26 21:07:15.272118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.663 qpair failed and we were unable to recover it. 
00:26:24.663 [2024-11-26 21:07:15.272247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.663 [2024-11-26 21:07:15.272277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.663 qpair failed and we were unable to recover it. 00:26:24.663 [2024-11-26 21:07:15.272421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.663 [2024-11-26 21:07:15.272462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.663 qpair failed and we were unable to recover it. 00:26:24.663 [2024-11-26 21:07:15.272622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.663 [2024-11-26 21:07:15.272654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.663 qpair failed and we were unable to recover it. 00:26:24.663 [2024-11-26 21:07:15.272801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.663 [2024-11-26 21:07:15.272830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.663 qpair failed and we were unable to recover it. 00:26:24.663 [2024-11-26 21:07:15.272955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.663 [2024-11-26 21:07:15.272986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.663 qpair failed and we were unable to recover it. 
00:26:24.663 [2024-11-26 21:07:15.273163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.663 [2024-11-26 21:07:15.273193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.663 qpair failed and we were unable to recover it. 00:26:24.663 [2024-11-26 21:07:15.273323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.663 [2024-11-26 21:07:15.273366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.663 qpair failed and we were unable to recover it. 00:26:24.663 [2024-11-26 21:07:15.273513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.663 [2024-11-26 21:07:15.273544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.663 qpair failed and we were unable to recover it. 00:26:24.663 [2024-11-26 21:07:15.273675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.663 [2024-11-26 21:07:15.273740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.663 qpair failed and we were unable to recover it. 00:26:24.663 [2024-11-26 21:07:15.273911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.663 [2024-11-26 21:07:15.273944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.663 qpair failed and we were unable to recover it. 
00:26:24.663 [2024-11-26 21:07:15.274094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.663 [2024-11-26 21:07:15.274125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.663 qpair failed and we were unable to recover it. 00:26:24.663 [2024-11-26 21:07:15.274247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.663 [2024-11-26 21:07:15.274278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.663 qpair failed and we were unable to recover it. 00:26:24.663 [2024-11-26 21:07:15.274454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.663 [2024-11-26 21:07:15.274501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.663 qpair failed and we were unable to recover it. 00:26:24.663 [2024-11-26 21:07:15.274643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.663 [2024-11-26 21:07:15.274671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.663 qpair failed and we were unable to recover it. 00:26:24.663 [2024-11-26 21:07:15.274836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.663 [2024-11-26 21:07:15.274864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.663 qpair failed and we were unable to recover it. 
00:26:24.663 [2024-11-26 21:07:15.274973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.663 [2024-11-26 21:07:15.275001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.663 qpair failed and we were unable to recover it. 00:26:24.663 [2024-11-26 21:07:15.275129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.663 [2024-11-26 21:07:15.275173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.663 qpair failed and we were unable to recover it. 00:26:24.663 [2024-11-26 21:07:15.275345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.663 [2024-11-26 21:07:15.275394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.663 qpair failed and we were unable to recover it. 00:26:24.663 [2024-11-26 21:07:15.275525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.663 [2024-11-26 21:07:15.275553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.663 qpair failed and we were unable to recover it. 00:26:24.663 [2024-11-26 21:07:15.275691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.663 [2024-11-26 21:07:15.275719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.663 qpair failed and we were unable to recover it. 
00:26:24.663 [2024-11-26 21:07:15.275875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.663 [2024-11-26 21:07:15.275906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.663 qpair failed and we were unable to recover it. 00:26:24.663 [2024-11-26 21:07:15.276128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.663 [2024-11-26 21:07:15.276176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.663 qpair failed and we were unable to recover it. 00:26:24.663 [2024-11-26 21:07:15.276332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.663 [2024-11-26 21:07:15.276368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.663 qpair failed and we were unable to recover it. 00:26:24.663 [2024-11-26 21:07:15.276494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.663 [2024-11-26 21:07:15.276521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.663 qpair failed and we were unable to recover it. 00:26:24.663 [2024-11-26 21:07:15.276640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.663 [2024-11-26 21:07:15.276669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.663 qpair failed and we were unable to recover it. 
00:26:24.663 [2024-11-26 21:07:15.276846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.663 [2024-11-26 21:07:15.276890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.663 qpair failed and we were unable to recover it. 00:26:24.663 [2024-11-26 21:07:15.277051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.663 [2024-11-26 21:07:15.277083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.663 qpair failed and we were unable to recover it. 00:26:24.663 [2024-11-26 21:07:15.277267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.663 [2024-11-26 21:07:15.277316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.663 qpair failed and we were unable to recover it. 00:26:24.663 [2024-11-26 21:07:15.277526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.663 [2024-11-26 21:07:15.277576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.663 qpair failed and we were unable to recover it. 00:26:24.663 [2024-11-26 21:07:15.277751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.663 [2024-11-26 21:07:15.277779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.663 qpair failed and we were unable to recover it. 
00:26:24.663 [2024-11-26 21:07:15.277927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.663 [2024-11-26 21:07:15.277957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.663 qpair failed and we were unable to recover it. 00:26:24.663 [2024-11-26 21:07:15.278096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.663 [2024-11-26 21:07:15.278127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.663 qpair failed and we were unable to recover it. 00:26:24.663 [2024-11-26 21:07:15.278300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.663 [2024-11-26 21:07:15.278330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.663 qpair failed and we were unable to recover it. 00:26:24.663 [2024-11-26 21:07:15.278469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.663 [2024-11-26 21:07:15.278519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.663 qpair failed and we were unable to recover it. 00:26:24.663 [2024-11-26 21:07:15.278632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.663 [2024-11-26 21:07:15.278662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.663 qpair failed and we were unable to recover it. 
00:26:24.663 [2024-11-26 21:07:15.278817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.663 [2024-11-26 21:07:15.278865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.663 qpair failed and we were unable to recover it. 00:26:24.663 [2024-11-26 21:07:15.279005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.664 [2024-11-26 21:07:15.279039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.664 qpair failed and we were unable to recover it. 00:26:24.664 [2024-11-26 21:07:15.279192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.664 [2024-11-26 21:07:15.279223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.664 qpair failed and we were unable to recover it. 00:26:24.664 [2024-11-26 21:07:15.279343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.664 [2024-11-26 21:07:15.279375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.664 qpair failed and we were unable to recover it. 00:26:24.664 [2024-11-26 21:07:15.279515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.664 [2024-11-26 21:07:15.279548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.664 qpair failed and we were unable to recover it. 
00:26:24.664 [2024-11-26 21:07:15.279683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.664 [2024-11-26 21:07:15.279719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.664 qpair failed and we were unable to recover it. 00:26:24.664 [2024-11-26 21:07:15.279831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.664 [2024-11-26 21:07:15.279859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.664 qpair failed and we were unable to recover it. 00:26:24.664 [2024-11-26 21:07:15.279986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.664 [2024-11-26 21:07:15.280033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.664 qpair failed and we were unable to recover it. 00:26:24.664 [2024-11-26 21:07:15.280192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.664 [2024-11-26 21:07:15.280238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.664 qpair failed and we were unable to recover it. 00:26:24.664 [2024-11-26 21:07:15.280399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.664 [2024-11-26 21:07:15.280445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.664 qpair failed and we were unable to recover it. 
00:26:24.664 [2024-11-26 21:07:15.280580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.664 [2024-11-26 21:07:15.280609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.664 qpair failed and we were unable to recover it. 00:26:24.664 [2024-11-26 21:07:15.280755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.664 [2024-11-26 21:07:15.280784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.664 qpair failed and we were unable to recover it. 00:26:24.664 [2024-11-26 21:07:15.280902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.664 [2024-11-26 21:07:15.280931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.664 qpair failed and we were unable to recover it. 00:26:24.664 [2024-11-26 21:07:15.281109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.664 [2024-11-26 21:07:15.281158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.664 qpair failed and we were unable to recover it. 00:26:24.664 [2024-11-26 21:07:15.281329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.664 [2024-11-26 21:07:15.281378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.664 qpair failed and we were unable to recover it. 
00:26:24.664 [2024-11-26 21:07:15.281532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.664 [2024-11-26 21:07:15.281562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.664 qpair failed and we were unable to recover it. 00:26:24.664 [2024-11-26 21:07:15.281721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.664 [2024-11-26 21:07:15.281750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.664 qpair failed and we were unable to recover it. 00:26:24.664 [2024-11-26 21:07:15.281872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.664 [2024-11-26 21:07:15.281918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.664 qpair failed and we were unable to recover it. 00:26:24.664 [2024-11-26 21:07:15.282073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.664 [2024-11-26 21:07:15.282118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.664 qpair failed and we were unable to recover it. 00:26:24.664 [2024-11-26 21:07:15.282302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.664 [2024-11-26 21:07:15.282332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.664 qpair failed and we were unable to recover it. 
00:26:24.664 [2024-11-26 21:07:15.282493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.664 [2024-11-26 21:07:15.282520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.664 qpair failed and we were unable to recover it. 00:26:24.664 [2024-11-26 21:07:15.282629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.664 [2024-11-26 21:07:15.282657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.664 qpair failed and we were unable to recover it. 00:26:24.664 [2024-11-26 21:07:15.282830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.664 [2024-11-26 21:07:15.282861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.664 qpair failed and we were unable to recover it. 00:26:24.664 [2024-11-26 21:07:15.283023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.664 [2024-11-26 21:07:15.283068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.664 qpair failed and we were unable to recover it. 00:26:24.664 [2024-11-26 21:07:15.283251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.664 [2024-11-26 21:07:15.283301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.664 qpair failed and we were unable to recover it. 
00:26:24.664 [2024-11-26 21:07:15.283534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.664 [2024-11-26 21:07:15.283584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.664 qpair failed and we were unable to recover it. 00:26:24.664 [2024-11-26 21:07:15.283711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.664 [2024-11-26 21:07:15.283756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.664 qpair failed and we were unable to recover it. 00:26:24.664 [2024-11-26 21:07:15.283907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.664 [2024-11-26 21:07:15.283947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.664 qpair failed and we were unable to recover it. 00:26:24.664 [2024-11-26 21:07:15.284165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.664 [2024-11-26 21:07:15.284216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.664 qpair failed and we were unable to recover it. 00:26:24.664 [2024-11-26 21:07:15.284424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.664 [2024-11-26 21:07:15.284474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.664 qpair failed and we were unable to recover it. 
00:26:24.664 [2024-11-26 21:07:15.284620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.664 [2024-11-26 21:07:15.284653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.664 qpair failed and we were unable to recover it. 00:26:24.664 [2024-11-26 21:07:15.284774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.664 [2024-11-26 21:07:15.284802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.664 qpair failed and we were unable to recover it. 00:26:24.664 [2024-11-26 21:07:15.284942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.664 [2024-11-26 21:07:15.284986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.664 qpair failed and we were unable to recover it. 00:26:24.664 [2024-11-26 21:07:15.285143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.664 [2024-11-26 21:07:15.285188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.664 qpair failed and we were unable to recover it. 00:26:24.664 [2024-11-26 21:07:15.285385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.664 [2024-11-26 21:07:15.285436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.664 qpair failed and we were unable to recover it. 
00:26:24.664 [2024-11-26 21:07:15.285597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.664 [2024-11-26 21:07:15.285626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.664 qpair failed and we were unable to recover it. 00:26:24.664 [2024-11-26 21:07:15.285761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.664 [2024-11-26 21:07:15.285789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.664 qpair failed and we were unable to recover it. 00:26:24.664 [2024-11-26 21:07:15.285917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.664 [2024-11-26 21:07:15.285943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.664 qpair failed and we were unable to recover it. 00:26:24.664 [2024-11-26 21:07:15.286085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.664 [2024-11-26 21:07:15.286113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.664 qpair failed and we were unable to recover it. 00:26:24.664 [2024-11-26 21:07:15.286243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.665 [2024-11-26 21:07:15.286273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.665 qpair failed and we were unable to recover it. 
00:26:24.665 [2024-11-26 21:07:15.286427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.665 [2024-11-26 21:07:15.286457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.665 qpair failed and we were unable to recover it. 00:26:24.665 [2024-11-26 21:07:15.286637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.665 [2024-11-26 21:07:15.286668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.665 qpair failed and we were unable to recover it. 00:26:24.665 [2024-11-26 21:07:15.286835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.665 [2024-11-26 21:07:15.286861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.665 qpair failed and we were unable to recover it. 00:26:24.665 [2024-11-26 21:07:15.286996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.665 [2024-11-26 21:07:15.287023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.665 qpair failed and we were unable to recover it. 00:26:24.665 [2024-11-26 21:07:15.287159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.665 [2024-11-26 21:07:15.287206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.665 qpair failed and we were unable to recover it. 
00:26:24.665 [2024-11-26 21:07:15.287355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.665 [2024-11-26 21:07:15.287385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.665 qpair failed and we were unable to recover it. 00:26:24.665 [2024-11-26 21:07:15.287598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.665 [2024-11-26 21:07:15.287628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.665 qpair failed and we were unable to recover it. 00:26:24.665 [2024-11-26 21:07:15.287768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.665 [2024-11-26 21:07:15.287796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.665 qpair failed and we were unable to recover it. 00:26:24.665 [2024-11-26 21:07:15.287907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.665 [2024-11-26 21:07:15.287934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.665 qpair failed and we were unable to recover it. 00:26:24.665 [2024-11-26 21:07:15.288105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.665 [2024-11-26 21:07:15.288132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.665 qpair failed and we were unable to recover it. 
00:26:24.665 [2024-11-26 21:07:15.288273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.665 [2024-11-26 21:07:15.288300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.665 qpair failed and we were unable to recover it. 00:26:24.665 [2024-11-26 21:07:15.288407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.665 [2024-11-26 21:07:15.288435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.665 qpair failed and we were unable to recover it. 00:26:24.665 [2024-11-26 21:07:15.288593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.665 [2024-11-26 21:07:15.288623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.665 qpair failed and we were unable to recover it. 00:26:24.665 [2024-11-26 21:07:15.288775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.665 [2024-11-26 21:07:15.288802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.665 qpair failed and we were unable to recover it. 00:26:24.665 [2024-11-26 21:07:15.288962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.665 [2024-11-26 21:07:15.289013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.665 qpair failed and we were unable to recover it. 
00:26:24.665 [2024-11-26 21:07:15.289158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.665 [2024-11-26 21:07:15.289185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.665 qpair failed and we were unable to recover it. 00:26:24.665 [2024-11-26 21:07:15.289323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.665 [2024-11-26 21:07:15.289365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.665 qpair failed and we were unable to recover it. 00:26:24.665 [2024-11-26 21:07:15.289555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.665 [2024-11-26 21:07:15.289582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.665 qpair failed and we were unable to recover it. 00:26:24.665 [2024-11-26 21:07:15.289701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.665 [2024-11-26 21:07:15.289729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.665 qpair failed and we were unable to recover it. 00:26:24.665 [2024-11-26 21:07:15.289845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.665 [2024-11-26 21:07:15.289872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.665 qpair failed and we were unable to recover it. 
00:26:24.665 [2024-11-26 21:07:15.289991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.665 [2024-11-26 21:07:15.290021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.665 qpair failed and we were unable to recover it. 00:26:24.665 [2024-11-26 21:07:15.290178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.665 [2024-11-26 21:07:15.290208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.665 qpair failed and we were unable to recover it. 00:26:24.665 [2024-11-26 21:07:15.290358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.665 [2024-11-26 21:07:15.290387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.665 qpair failed and we were unable to recover it. 00:26:24.665 [2024-11-26 21:07:15.290536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.665 [2024-11-26 21:07:15.290567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.665 qpair failed and we were unable to recover it. 00:26:24.665 [2024-11-26 21:07:15.290721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.665 [2024-11-26 21:07:15.290748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.665 qpair failed and we were unable to recover it. 
00:26:24.665 [2024-11-26 21:07:15.290862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.665 [2024-11-26 21:07:15.290889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.665 qpair failed and we were unable to recover it. 00:26:24.665 [2024-11-26 21:07:15.291036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.665 [2024-11-26 21:07:15.291066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.665 qpair failed and we were unable to recover it. 00:26:24.665 [2024-11-26 21:07:15.291200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.665 [2024-11-26 21:07:15.291246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.665 qpair failed and we were unable to recover it. 00:26:24.665 [2024-11-26 21:07:15.291421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.665 [2024-11-26 21:07:15.291450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.665 qpair failed and we were unable to recover it. 00:26:24.665 [2024-11-26 21:07:15.291597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.665 [2024-11-26 21:07:15.291627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.665 qpair failed and we were unable to recover it. 
00:26:24.665 [2024-11-26 21:07:15.291756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.665 [2024-11-26 21:07:15.291784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.665 qpair failed and we were unable to recover it. 00:26:24.665 [2024-11-26 21:07:15.291883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.665 [2024-11-26 21:07:15.291910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.665 qpair failed and we were unable to recover it. 00:26:24.665 [2024-11-26 21:07:15.292085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.665 [2024-11-26 21:07:15.292115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.665 qpair failed and we were unable to recover it. 00:26:24.665 [2024-11-26 21:07:15.292268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.665 [2024-11-26 21:07:15.292298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.665 qpair failed and we were unable to recover it. 00:26:24.665 [2024-11-26 21:07:15.292484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.665 [2024-11-26 21:07:15.292527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.665 qpair failed and we were unable to recover it. 
00:26:24.665 [2024-11-26 21:07:15.292684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.665 [2024-11-26 21:07:15.292741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.665 qpair failed and we were unable to recover it. 00:26:24.665 [2024-11-26 21:07:15.292880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.665 [2024-11-26 21:07:15.292910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.665 qpair failed and we were unable to recover it. 00:26:24.665 [2024-11-26 21:07:15.293077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.665 [2024-11-26 21:07:15.293104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.665 qpair failed and we were unable to recover it. 00:26:24.666 [2024-11-26 21:07:15.293245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.666 [2024-11-26 21:07:15.293273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.666 qpair failed and we were unable to recover it. 00:26:24.666 [2024-11-26 21:07:15.293561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.666 [2024-11-26 21:07:15.293624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.666 qpair failed and we were unable to recover it. 
00:26:24.666 [2024-11-26 21:07:15.293747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.666 [2024-11-26 21:07:15.293775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.666 qpair failed and we were unable to recover it. 00:26:24.666 [2024-11-26 21:07:15.293914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.666 [2024-11-26 21:07:15.293960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.666 qpair failed and we were unable to recover it. 00:26:24.666 [2024-11-26 21:07:15.294090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.666 [2024-11-26 21:07:15.294119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.666 qpair failed and we were unable to recover it. 00:26:24.666 [2024-11-26 21:07:15.294285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.666 [2024-11-26 21:07:15.294323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.666 qpair failed and we were unable to recover it. 00:26:24.666 [2024-11-26 21:07:15.294517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.666 [2024-11-26 21:07:15.294547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.666 qpair failed and we were unable to recover it. 
00:26:24.666 [2024-11-26 21:07:15.294700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.666 [2024-11-26 21:07:15.294746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.666 qpair failed and we were unable to recover it. 00:26:24.666 [2024-11-26 21:07:15.294888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.666 [2024-11-26 21:07:15.294915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.666 qpair failed and we were unable to recover it. 00:26:24.666 [2024-11-26 21:07:15.295081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.666 [2024-11-26 21:07:15.295108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.666 qpair failed and we were unable to recover it. 00:26:24.666 [2024-11-26 21:07:15.295266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.666 [2024-11-26 21:07:15.295296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.666 qpair failed and we were unable to recover it. 00:26:24.666 [2024-11-26 21:07:15.295468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.666 [2024-11-26 21:07:15.295498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.666 qpair failed and we were unable to recover it. 
00:26:24.666 [2024-11-26 21:07:15.295612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.666 [2024-11-26 21:07:15.295642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.666 qpair failed and we were unable to recover it. 00:26:24.666 [2024-11-26 21:07:15.295808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.666 [2024-11-26 21:07:15.295836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.666 qpair failed and we were unable to recover it. 00:26:24.666 [2024-11-26 21:07:15.295999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.666 [2024-11-26 21:07:15.296026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.666 qpair failed and we were unable to recover it. 00:26:24.666 [2024-11-26 21:07:15.296179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.666 [2024-11-26 21:07:15.296209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.666 qpair failed and we were unable to recover it. 00:26:24.666 [2024-11-26 21:07:15.296314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.666 [2024-11-26 21:07:15.296343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.666 qpair failed and we were unable to recover it. 
00:26:24.666 [2024-11-26 21:07:15.296516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.666 [2024-11-26 21:07:15.296546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.666 qpair failed and we were unable to recover it. 00:26:24.666 [2024-11-26 21:07:15.296658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.666 [2024-11-26 21:07:15.296697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.666 qpair failed and we were unable to recover it. 00:26:24.666 [2024-11-26 21:07:15.296826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.666 [2024-11-26 21:07:15.296853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.666 qpair failed and we were unable to recover it. 00:26:24.666 [2024-11-26 21:07:15.297012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.666 [2024-11-26 21:07:15.297073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.666 qpair failed and we were unable to recover it. 00:26:24.666 [2024-11-26 21:07:15.297246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.666 [2024-11-26 21:07:15.297277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.666 qpair failed and we were unable to recover it. 
00:26:24.666 [2024-11-26 21:07:15.297420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.666 [2024-11-26 21:07:15.297453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.666 qpair failed and we were unable to recover it. 00:26:24.666 [2024-11-26 21:07:15.297612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.666 [2024-11-26 21:07:15.297639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.666 qpair failed and we were unable to recover it. 00:26:24.666 [2024-11-26 21:07:15.297813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.666 [2024-11-26 21:07:15.297841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.666 qpair failed and we were unable to recover it. 00:26:24.666 [2024-11-26 21:07:15.297996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.666 [2024-11-26 21:07:15.298026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.666 qpair failed and we were unable to recover it. 00:26:24.666 [2024-11-26 21:07:15.298201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.666 [2024-11-26 21:07:15.298231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.666 qpair failed and we were unable to recover it. 
00:26:24.666 [2024-11-26 21:07:15.298369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.666 [2024-11-26 21:07:15.298399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.666 qpair failed and we were unable to recover it. 00:26:24.666 [2024-11-26 21:07:15.298545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.666 [2024-11-26 21:07:15.298575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.666 qpair failed and we were unable to recover it. 00:26:24.666 [2024-11-26 21:07:15.298782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.666 [2024-11-26 21:07:15.298824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.666 qpair failed and we were unable to recover it. 00:26:24.666 [2024-11-26 21:07:15.298939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.666 [2024-11-26 21:07:15.298968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.666 qpair failed and we were unable to recover it. 00:26:24.666 [2024-11-26 21:07:15.299137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.666 [2024-11-26 21:07:15.299183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.666 qpair failed and we were unable to recover it. 
00:26:24.666 [2024-11-26 21:07:15.299346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.667 [2024-11-26 21:07:15.299390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.667 qpair failed and we were unable to recover it. 00:26:24.667 [2024-11-26 21:07:15.299545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.667 [2024-11-26 21:07:15.299591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.667 qpair failed and we were unable to recover it. 00:26:24.667 [2024-11-26 21:07:15.299723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.667 [2024-11-26 21:07:15.299751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.667 qpair failed and we were unable to recover it. 00:26:24.667 [2024-11-26 21:07:15.299930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.667 [2024-11-26 21:07:15.299961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.667 qpair failed and we were unable to recover it. 00:26:24.667 [2024-11-26 21:07:15.300139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.667 [2024-11-26 21:07:15.300184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.667 qpair failed and we were unable to recover it. 
00:26:24.667 [2024-11-26 21:07:15.300320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.667 [2024-11-26 21:07:15.300367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.667 qpair failed and we were unable to recover it. 00:26:24.667 [2024-11-26 21:07:15.300505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.667 [2024-11-26 21:07:15.300532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.667 qpair failed and we were unable to recover it. 00:26:24.667 [2024-11-26 21:07:15.300674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.667 [2024-11-26 21:07:15.300711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.667 qpair failed and we were unable to recover it. 00:26:24.667 [2024-11-26 21:07:15.300873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.667 [2024-11-26 21:07:15.300919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.667 qpair failed and we were unable to recover it. 00:26:24.667 [2024-11-26 21:07:15.301079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.667 [2024-11-26 21:07:15.301124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.667 qpair failed and we were unable to recover it. 
00:26:24.667 [2024-11-26 21:07:15.301247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.667 [2024-11-26 21:07:15.301278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.667 qpair failed and we were unable to recover it. 00:26:24.667 [2024-11-26 21:07:15.301426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.667 [2024-11-26 21:07:15.301456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.667 qpair failed and we were unable to recover it. 00:26:24.667 [2024-11-26 21:07:15.301599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.667 [2024-11-26 21:07:15.301626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.667 qpair failed and we were unable to recover it. 00:26:24.667 [2024-11-26 21:07:15.301806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.667 [2024-11-26 21:07:15.301852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.667 qpair failed and we were unable to recover it. 00:26:24.667 [2024-11-26 21:07:15.302044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.667 [2024-11-26 21:07:15.302075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.667 qpair failed and we were unable to recover it. 
00:26:24.667 [2024-11-26 21:07:15.302249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.667 [2024-11-26 21:07:15.302296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.667 qpair failed and we were unable to recover it. 00:26:24.667 [2024-11-26 21:07:15.302424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.667 [2024-11-26 21:07:15.302451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.667 qpair failed and we were unable to recover it. 00:26:24.667 [2024-11-26 21:07:15.302568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.667 [2024-11-26 21:07:15.302595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.667 qpair failed and we were unable to recover it. 00:26:24.667 [2024-11-26 21:07:15.302766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.667 [2024-11-26 21:07:15.302822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.667 qpair failed and we were unable to recover it. 00:26:24.667 [2024-11-26 21:07:15.302980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.667 [2024-11-26 21:07:15.303012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.667 qpair failed and we were unable to recover it. 
00:26:24.667 [2024-11-26 21:07:15.303166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.667 [2024-11-26 21:07:15.303198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.667 qpair failed and we were unable to recover it. 00:26:24.667 [2024-11-26 21:07:15.303330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.667 [2024-11-26 21:07:15.303369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.667 qpair failed and we were unable to recover it. 00:26:24.667 [2024-11-26 21:07:15.303515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.667 [2024-11-26 21:07:15.303543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.667 qpair failed and we were unable to recover it. 00:26:24.667 [2024-11-26 21:07:15.303677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.667 [2024-11-26 21:07:15.303719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.667 qpair failed and we were unable to recover it. 00:26:24.667 [2024-11-26 21:07:15.303902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.667 [2024-11-26 21:07:15.303932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.667 qpair failed and we were unable to recover it. 
00:26:24.667 [2024-11-26 21:07:15.304104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.667 [2024-11-26 21:07:15.304140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.667 qpair failed and we were unable to recover it.
00:26:24.667 [2024-11-26 21:07:15.304264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.667 [2024-11-26 21:07:15.304294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.667 qpair failed and we were unable to recover it.
00:26:24.667 [2024-11-26 21:07:15.304434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.667 [2024-11-26 21:07:15.304463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.667 qpair failed and we were unable to recover it.
00:26:24.667 [2024-11-26 21:07:15.304616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.667 [2024-11-26 21:07:15.304646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.667 qpair failed and we were unable to recover it.
00:26:24.667 [2024-11-26 21:07:15.304839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.667 [2024-11-26 21:07:15.304867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.667 qpair failed and we were unable to recover it.
00:26:24.667 [2024-11-26 21:07:15.305028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.667 [2024-11-26 21:07:15.305073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.667 qpair failed and we were unable to recover it.
00:26:24.667 [2024-11-26 21:07:15.305226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.667 [2024-11-26 21:07:15.305274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.667 qpair failed and we were unable to recover it.
00:26:24.667 [2024-11-26 21:07:15.305458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.667 [2024-11-26 21:07:15.305503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.667 qpair failed and we were unable to recover it.
00:26:24.667 [2024-11-26 21:07:15.305665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.667 [2024-11-26 21:07:15.305701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.667 qpair failed and we were unable to recover it.
00:26:24.667 [2024-11-26 21:07:15.305857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.667 [2024-11-26 21:07:15.305907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.667 qpair failed and we were unable to recover it.
00:26:24.667 [2024-11-26 21:07:15.306032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.667 [2024-11-26 21:07:15.306062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.667 qpair failed and we were unable to recover it.
00:26:24.667 [2024-11-26 21:07:15.306205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.667 [2024-11-26 21:07:15.306236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.667 qpair failed and we were unable to recover it.
00:26:24.667 [2024-11-26 21:07:15.306383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.667 [2024-11-26 21:07:15.306414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.667 qpair failed and we were unable to recover it.
00:26:24.667 [2024-11-26 21:07:15.306537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.668 [2024-11-26 21:07:15.306567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.668 qpair failed and we were unable to recover it.
00:26:24.668 [2024-11-26 21:07:15.306699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.668 [2024-11-26 21:07:15.306727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.668 qpair failed and we were unable to recover it.
00:26:24.668 [2024-11-26 21:07:15.306831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.668 [2024-11-26 21:07:15.306876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.668 qpair failed and we were unable to recover it.
00:26:24.668 [2024-11-26 21:07:15.307050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.668 [2024-11-26 21:07:15.307079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.668 qpair failed and we were unable to recover it.
00:26:24.668 [2024-11-26 21:07:15.307311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.668 [2024-11-26 21:07:15.307365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.668 qpair failed and we were unable to recover it.
00:26:24.668 [2024-11-26 21:07:15.307516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.668 [2024-11-26 21:07:15.307546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.668 qpair failed and we were unable to recover it.
00:26:24.668 [2024-11-26 21:07:15.307712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.668 [2024-11-26 21:07:15.307762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.668 qpair failed and we were unable to recover it.
00:26:24.668 [2024-11-26 21:07:15.307885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.668 [2024-11-26 21:07:15.307916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.668 qpair failed and we were unable to recover it.
00:26:24.668 [2024-11-26 21:07:15.308061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.668 [2024-11-26 21:07:15.308091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.668 qpair failed and we were unable to recover it.
00:26:24.668 [2024-11-26 21:07:15.308213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.668 [2024-11-26 21:07:15.308257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.668 qpair failed and we were unable to recover it.
00:26:24.668 [2024-11-26 21:07:15.308404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.668 [2024-11-26 21:07:15.308435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.668 qpair failed and we were unable to recover it.
00:26:24.668 [2024-11-26 21:07:15.308608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.668 [2024-11-26 21:07:15.308638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.668 qpair failed and we were unable to recover it.
00:26:24.668 [2024-11-26 21:07:15.308774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.668 [2024-11-26 21:07:15.308802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.668 qpair failed and we were unable to recover it.
00:26:24.668 [2024-11-26 21:07:15.308954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.668 [2024-11-26 21:07:15.308984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.668 qpair failed and we were unable to recover it.
00:26:24.668 [2024-11-26 21:07:15.309107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.668 [2024-11-26 21:07:15.309155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.668 qpair failed and we were unable to recover it.
00:26:24.668 [2024-11-26 21:07:15.309309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.668 [2024-11-26 21:07:15.309339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.668 qpair failed and we were unable to recover it.
00:26:24.668 [2024-11-26 21:07:15.309538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.668 [2024-11-26 21:07:15.309568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.668 qpair failed and we were unable to recover it.
00:26:24.668 [2024-11-26 21:07:15.309729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.668 [2024-11-26 21:07:15.309757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.668 qpair failed and we were unable to recover it.
00:26:24.668 [2024-11-26 21:07:15.309872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.668 [2024-11-26 21:07:15.309902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.668 qpair failed and we were unable to recover it.
00:26:24.668 [2024-11-26 21:07:15.310051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.668 [2024-11-26 21:07:15.310081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.668 qpair failed and we were unable to recover it.
00:26:24.668 [2024-11-26 21:07:15.310251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.668 [2024-11-26 21:07:15.310281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.668 qpair failed and we were unable to recover it.
00:26:24.668 [2024-11-26 21:07:15.310401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.668 [2024-11-26 21:07:15.310431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.668 qpair failed and we were unable to recover it.
00:26:24.668 [2024-11-26 21:07:15.310552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.668 [2024-11-26 21:07:15.310582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.668 qpair failed and we were unable to recover it.
00:26:24.668 [2024-11-26 21:07:15.310724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.668 [2024-11-26 21:07:15.310753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.668 qpair failed and we were unable to recover it.
00:26:24.668 [2024-11-26 21:07:15.310881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.668 [2024-11-26 21:07:15.310926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.668 qpair failed and we were unable to recover it.
00:26:24.668 [2024-11-26 21:07:15.311086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.668 [2024-11-26 21:07:15.311113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.668 qpair failed and we were unable to recover it.
00:26:24.668 [2024-11-26 21:07:15.311268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.668 [2024-11-26 21:07:15.311313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.668 qpair failed and we were unable to recover it.
00:26:24.668 [2024-11-26 21:07:15.311453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.668 [2024-11-26 21:07:15.311481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.668 qpair failed and we were unable to recover it.
00:26:24.668 [2024-11-26 21:07:15.311595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.668 [2024-11-26 21:07:15.311623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.668 qpair failed and we were unable to recover it.
00:26:24.668 [2024-11-26 21:07:15.311753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.668 [2024-11-26 21:07:15.311785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.668 qpair failed and we were unable to recover it.
00:26:24.668 [2024-11-26 21:07:15.311933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.668 [2024-11-26 21:07:15.311963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.668 qpair failed and we were unable to recover it.
00:26:24.668 [2024-11-26 21:07:15.312171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.668 [2024-11-26 21:07:15.312232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.668 qpair failed and we were unable to recover it.
00:26:24.668 [2024-11-26 21:07:15.312381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.668 [2024-11-26 21:07:15.312411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.668 qpair failed and we were unable to recover it.
00:26:24.668 [2024-11-26 21:07:15.312559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.668 [2024-11-26 21:07:15.312589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.668 qpair failed and we were unable to recover it.
00:26:24.668 [2024-11-26 21:07:15.312752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.668 [2024-11-26 21:07:15.312782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.668 qpair failed and we were unable to recover it.
00:26:24.668 [2024-11-26 21:07:15.312931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.668 [2024-11-26 21:07:15.312961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.668 qpair failed and we were unable to recover it.
00:26:24.668 [2024-11-26 21:07:15.313136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.668 [2024-11-26 21:07:15.313181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.668 qpair failed and we were unable to recover it.
00:26:24.668 [2024-11-26 21:07:15.313367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.668 [2024-11-26 21:07:15.313412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.668 qpair failed and we were unable to recover it.
00:26:24.668 [2024-11-26 21:07:15.313571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.668 [2024-11-26 21:07:15.313598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.668 qpair failed and we were unable to recover it.
00:26:24.669 [2024-11-26 21:07:15.313752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.669 [2024-11-26 21:07:15.313783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.669 qpair failed and we were unable to recover it.
00:26:24.669 [2024-11-26 21:07:15.313954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.669 [2024-11-26 21:07:15.314000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.669 qpair failed and we were unable to recover it.
00:26:24.669 [2024-11-26 21:07:15.314134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.669 [2024-11-26 21:07:15.314185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.669 qpair failed and we were unable to recover it.
00:26:24.669 [2024-11-26 21:07:15.314342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.669 [2024-11-26 21:07:15.314386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.669 qpair failed and we were unable to recover it.
00:26:24.669 [2024-11-26 21:07:15.314523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.669 [2024-11-26 21:07:15.314550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.669 qpair failed and we were unable to recover it.
00:26:24.669 [2024-11-26 21:07:15.314692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.669 [2024-11-26 21:07:15.314721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.669 qpair failed and we were unable to recover it.
00:26:24.669 [2024-11-26 21:07:15.314871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.669 [2024-11-26 21:07:15.314917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.669 qpair failed and we were unable to recover it.
00:26:24.669 [2024-11-26 21:07:15.315078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.669 [2024-11-26 21:07:15.315122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.669 qpair failed and we were unable to recover it.
00:26:24.669 [2024-11-26 21:07:15.315306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.669 [2024-11-26 21:07:15.315352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.669 qpair failed and we were unable to recover it.
00:26:24.669 [2024-11-26 21:07:15.315488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.669 [2024-11-26 21:07:15.315515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.669 qpair failed and we were unable to recover it.
00:26:24.669 [2024-11-26 21:07:15.315624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.669 [2024-11-26 21:07:15.315652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.669 qpair failed and we were unable to recover it.
00:26:24.669 [2024-11-26 21:07:15.315789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.669 [2024-11-26 21:07:15.315834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.669 qpair failed and we were unable to recover it.
00:26:24.669 [2024-11-26 21:07:15.315984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.669 [2024-11-26 21:07:15.316029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.669 qpair failed and we were unable to recover it.
00:26:24.669 [2024-11-26 21:07:15.316191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.669 [2024-11-26 21:07:15.316219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.669 qpair failed and we were unable to recover it.
00:26:24.669 [2024-11-26 21:07:15.316324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.669 [2024-11-26 21:07:15.316352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.669 qpair failed and we were unable to recover it.
00:26:24.669 [2024-11-26 21:07:15.316515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.669 [2024-11-26 21:07:15.316543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.669 qpair failed and we were unable to recover it.
00:26:24.669 [2024-11-26 21:07:15.316735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.669 [2024-11-26 21:07:15.316781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.669 qpair failed and we were unable to recover it.
00:26:24.669 [2024-11-26 21:07:15.316908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.669 [2024-11-26 21:07:15.316940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.669 qpair failed and we were unable to recover it.
00:26:24.669 [2024-11-26 21:07:15.317085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.669 [2024-11-26 21:07:15.317115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.669 qpair failed and we were unable to recover it.
00:26:24.669 [2024-11-26 21:07:15.317269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.669 [2024-11-26 21:07:15.317298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.669 qpair failed and we were unable to recover it.
00:26:24.669 [2024-11-26 21:07:15.317474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.669 [2024-11-26 21:07:15.317504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.669 qpair failed and we were unable to recover it.
00:26:24.669 [2024-11-26 21:07:15.317625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.669 [2024-11-26 21:07:15.317656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.669 qpair failed and we were unable to recover it.
00:26:24.669 [2024-11-26 21:07:15.317806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.669 [2024-11-26 21:07:15.317835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.669 qpair failed and we were unable to recover it.
00:26:24.669 [2024-11-26 21:07:15.317964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.669 [2024-11-26 21:07:15.318011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.669 qpair failed and we were unable to recover it.
00:26:24.669 [2024-11-26 21:07:15.318165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.669 [2024-11-26 21:07:15.318210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.669 qpair failed and we were unable to recover it.
00:26:24.669 [2024-11-26 21:07:15.318369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.669 [2024-11-26 21:07:15.318414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.669 qpair failed and we were unable to recover it.
00:26:24.669 [2024-11-26 21:07:15.318578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.669 [2024-11-26 21:07:15.318605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.669 qpair failed and we were unable to recover it.
00:26:24.669 [2024-11-26 21:07:15.318768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.669 [2024-11-26 21:07:15.318814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.669 qpair failed and we were unable to recover it.
00:26:24.669 [2024-11-26 21:07:15.318998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.669 [2024-11-26 21:07:15.319043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.669 qpair failed and we were unable to recover it.
00:26:24.669 [2024-11-26 21:07:15.319223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.669 [2024-11-26 21:07:15.319276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.669 qpair failed and we were unable to recover it.
00:26:24.669 [2024-11-26 21:07:15.319459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.669 [2024-11-26 21:07:15.319504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.669 qpair failed and we were unable to recover it.
00:26:24.669 [2024-11-26 21:07:15.319639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.669 [2024-11-26 21:07:15.319667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.669 qpair failed and we were unable to recover it.
00:26:24.669 [2024-11-26 21:07:15.319876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.669 [2024-11-26 21:07:15.319922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.669 qpair failed and we were unable to recover it.
00:26:24.669 [2024-11-26 21:07:15.320080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.669 [2024-11-26 21:07:15.320124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.669 qpair failed and we were unable to recover it.
00:26:24.669 [2024-11-26 21:07:15.320309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.669 [2024-11-26 21:07:15.320355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.669 qpair failed and we were unable to recover it.
00:26:24.669 [2024-11-26 21:07:15.320493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.669 [2024-11-26 21:07:15.320521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.669 qpair failed and we were unable to recover it.
00:26:24.669 [2024-11-26 21:07:15.320655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.669 [2024-11-26 21:07:15.320684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.669 qpair failed and we were unable to recover it.
00:26:24.669 [2024-11-26 21:07:15.320883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.669 [2024-11-26 21:07:15.320927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.669 qpair failed and we were unable to recover it.
00:26:24.669 [2024-11-26 21:07:15.321106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.670 [2024-11-26 21:07:15.321153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.670 qpair failed and we were unable to recover it.
00:26:24.670 [2024-11-26 21:07:15.321308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.670 [2024-11-26 21:07:15.321352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.670 qpair failed and we were unable to recover it.
00:26:24.670 [2024-11-26 21:07:15.321491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.670 [2024-11-26 21:07:15.321519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.670 qpair failed and we were unable to recover it.
00:26:24.670 [2024-11-26 21:07:15.321677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.670 [2024-11-26 21:07:15.321712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.670 qpair failed and we were unable to recover it.
00:26:24.670 [2024-11-26 21:07:15.321897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.670 [2024-11-26 21:07:15.321941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.670 qpair failed and we were unable to recover it.
00:26:24.670 [2024-11-26 21:07:15.322070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.670 [2024-11-26 21:07:15.322125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.670 qpair failed and we were unable to recover it.
00:26:24.670 [2024-11-26 21:07:15.322275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.670 [2024-11-26 21:07:15.322319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.670 qpair failed and we were unable to recover it.
00:26:24.670 [2024-11-26 21:07:15.322452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.670 [2024-11-26 21:07:15.322481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.670 qpair failed and we were unable to recover it.
00:26:24.670 [2024-11-26 21:07:15.322651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.670 [2024-11-26 21:07:15.322678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.670 qpair failed and we were unable to recover it.
00:26:24.670 [2024-11-26 21:07:15.322846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.670 [2024-11-26 21:07:15.322890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.670 qpair failed and we were unable to recover it.
00:26:24.670 [2024-11-26 21:07:15.323042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.670 [2024-11-26 21:07:15.323087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.670 qpair failed and we were unable to recover it.
00:26:24.670 [2024-11-26 21:07:15.323255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.670 [2024-11-26 21:07:15.323300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.670 qpair failed and we were unable to recover it.
00:26:24.670 [2024-11-26 21:07:15.323433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.670 [2024-11-26 21:07:15.323460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.670 qpair failed and we were unable to recover it.
00:26:24.670 [2024-11-26 21:07:15.323618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.670 [2024-11-26 21:07:15.323646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.670 qpair failed and we were unable to recover it.
00:26:24.670 [2024-11-26 21:07:15.323838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.670 [2024-11-26 21:07:15.323882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.670 qpair failed and we were unable to recover it.
00:26:24.670 [2024-11-26 21:07:15.324032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.670 [2024-11-26 21:07:15.324061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.670 qpair failed and we were unable to recover it.
00:26:24.670 [2024-11-26 21:07:15.324229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.670 [2024-11-26 21:07:15.324260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.670 qpair failed and we were unable to recover it.
00:26:24.670 [2024-11-26 21:07:15.324435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.670 [2024-11-26 21:07:15.324465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.670 qpair failed and we were unable to recover it.
00:26:24.670 [2024-11-26 21:07:15.324599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.670 [2024-11-26 21:07:15.324626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.670 qpair failed and we were unable to recover it.
00:26:24.670 [2024-11-26 21:07:15.324786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.670 [2024-11-26 21:07:15.324814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.670 qpair failed and we were unable to recover it.
00:26:24.670 [2024-11-26 21:07:15.324969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.670 [2024-11-26 21:07:15.324999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.670 qpair failed and we were unable to recover it.
00:26:24.670 [2024-11-26 21:07:15.325119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.670 [2024-11-26 21:07:15.325146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.670 qpair failed and we were unable to recover it.
00:26:24.670 [2024-11-26 21:07:15.325312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.670 [2024-11-26 21:07:15.325343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.670 qpair failed and we were unable to recover it.
00:26:24.670 [2024-11-26 21:07:15.325485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.670 [2024-11-26 21:07:15.325514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.670 qpair failed and we were unable to recover it.
00:26:24.670 [2024-11-26 21:07:15.325632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.670 [2024-11-26 21:07:15.325662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.670 qpair failed and we were unable to recover it.
00:26:24.670 [2024-11-26 21:07:15.325859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.670 [2024-11-26 21:07:15.325901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.670 qpair failed and we were unable to recover it. 00:26:24.670 [2024-11-26 21:07:15.326040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.670 [2024-11-26 21:07:15.326086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.670 qpair failed and we were unable to recover it. 00:26:24.670 [2024-11-26 21:07:15.326245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.670 [2024-11-26 21:07:15.326291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.670 qpair failed and we were unable to recover it. 00:26:24.670 [2024-11-26 21:07:15.326448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.670 [2024-11-26 21:07:15.326493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.670 qpair failed and we were unable to recover it. 00:26:24.670 [2024-11-26 21:07:15.326656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.670 [2024-11-26 21:07:15.326690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.670 qpair failed and we were unable to recover it. 
00:26:24.670 [2024-11-26 21:07:15.326819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.670 [2024-11-26 21:07:15.326849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.670 qpair failed and we were unable to recover it. 00:26:24.670 [2024-11-26 21:07:15.326993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.670 [2024-11-26 21:07:15.327030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.670 qpair failed and we were unable to recover it. 00:26:24.670 [2024-11-26 21:07:15.327193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.670 [2024-11-26 21:07:15.327220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.670 qpair failed and we were unable to recover it. 00:26:24.670 [2024-11-26 21:07:15.327381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.670 [2024-11-26 21:07:15.327413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.670 qpair failed and we were unable to recover it. 00:26:24.670 [2024-11-26 21:07:15.327601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.670 [2024-11-26 21:07:15.327628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.670 qpair failed and we were unable to recover it. 
00:26:24.670 [2024-11-26 21:07:15.327766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.670 [2024-11-26 21:07:15.327795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.670 qpair failed and we were unable to recover it. 00:26:24.670 [2024-11-26 21:07:15.327931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.670 [2024-11-26 21:07:15.327958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.670 qpair failed and we were unable to recover it. 00:26:24.670 [2024-11-26 21:07:15.328117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.670 [2024-11-26 21:07:15.328147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.670 qpair failed and we were unable to recover it. 00:26:24.670 [2024-11-26 21:07:15.328346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.671 [2024-11-26 21:07:15.328376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.671 qpair failed and we were unable to recover it. 00:26:24.671 [2024-11-26 21:07:15.328548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.671 [2024-11-26 21:07:15.328578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.671 qpair failed and we were unable to recover it. 
00:26:24.671 [2024-11-26 21:07:15.328759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.671 [2024-11-26 21:07:15.328800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.671 qpair failed and we were unable to recover it. 00:26:24.671 [2024-11-26 21:07:15.328949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.671 [2024-11-26 21:07:15.328978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.671 qpair failed and we were unable to recover it. 00:26:24.671 [2024-11-26 21:07:15.329130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.671 [2024-11-26 21:07:15.329175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.671 qpair failed and we were unable to recover it. 00:26:24.671 [2024-11-26 21:07:15.329361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.671 [2024-11-26 21:07:15.329406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.671 qpair failed and we were unable to recover it. 00:26:24.671 [2024-11-26 21:07:15.329545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.671 [2024-11-26 21:07:15.329572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.671 qpair failed and we were unable to recover it. 
00:26:24.671 [2024-11-26 21:07:15.329680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.671 [2024-11-26 21:07:15.329714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.671 qpair failed and we were unable to recover it. 00:26:24.671 [2024-11-26 21:07:15.329846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.671 [2024-11-26 21:07:15.329894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.671 qpair failed and we were unable to recover it. 00:26:24.671 [2024-11-26 21:07:15.330076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.671 [2024-11-26 21:07:15.330121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.671 qpair failed and we were unable to recover it. 00:26:24.671 [2024-11-26 21:07:15.330308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.671 [2024-11-26 21:07:15.330353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.671 qpair failed and we were unable to recover it. 00:26:24.671 [2024-11-26 21:07:15.330490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.671 [2024-11-26 21:07:15.330517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.671 qpair failed and we were unable to recover it. 
00:26:24.671 [2024-11-26 21:07:15.330681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.671 [2024-11-26 21:07:15.330713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.671 qpair failed and we were unable to recover it. 00:26:24.671 [2024-11-26 21:07:15.330825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.671 [2024-11-26 21:07:15.330853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.671 qpair failed and we were unable to recover it. 00:26:24.671 [2024-11-26 21:07:15.331008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.671 [2024-11-26 21:07:15.331054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.671 qpair failed and we were unable to recover it. 00:26:24.671 [2024-11-26 21:07:15.331238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.671 [2024-11-26 21:07:15.331282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.671 qpair failed and we were unable to recover it. 00:26:24.671 [2024-11-26 21:07:15.331416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.671 [2024-11-26 21:07:15.331443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.671 qpair failed and we were unable to recover it. 
00:26:24.671 [2024-11-26 21:07:15.331606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.671 [2024-11-26 21:07:15.331634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.671 qpair failed and we were unable to recover it. 00:26:24.671 [2024-11-26 21:07:15.331807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.671 [2024-11-26 21:07:15.331853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.671 qpair failed and we were unable to recover it. 00:26:24.671 [2024-11-26 21:07:15.331979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.671 [2024-11-26 21:07:15.332011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.671 qpair failed and we were unable to recover it. 00:26:24.671 [2024-11-26 21:07:15.332161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.671 [2024-11-26 21:07:15.332199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.671 qpair failed and we were unable to recover it. 00:26:24.671 [2024-11-26 21:07:15.332423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.671 [2024-11-26 21:07:15.332480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.671 qpair failed and we were unable to recover it. 
00:26:24.671 [2024-11-26 21:07:15.332631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.671 [2024-11-26 21:07:15.332661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.671 qpair failed and we were unable to recover it. 00:26:24.671 [2024-11-26 21:07:15.332798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.671 [2024-11-26 21:07:15.332843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.671 qpair failed and we were unable to recover it. 00:26:24.671 [2024-11-26 21:07:15.333003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.671 [2024-11-26 21:07:15.333036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.671 qpair failed and we were unable to recover it. 00:26:24.671 [2024-11-26 21:07:15.333184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.671 [2024-11-26 21:07:15.333215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.671 qpair failed and we were unable to recover it. 00:26:24.671 [2024-11-26 21:07:15.333368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.671 [2024-11-26 21:07:15.333398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.671 qpair failed and we were unable to recover it. 
00:26:24.671 [2024-11-26 21:07:15.333556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.671 [2024-11-26 21:07:15.333583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.671 qpair failed and we were unable to recover it. 00:26:24.671 [2024-11-26 21:07:15.333748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.671 [2024-11-26 21:07:15.333776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.671 qpair failed and we were unable to recover it. 00:26:24.671 [2024-11-26 21:07:15.333913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.671 [2024-11-26 21:07:15.333940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.671 qpair failed and we were unable to recover it. 00:26:24.671 [2024-11-26 21:07:15.334167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.671 [2024-11-26 21:07:15.334197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.671 qpair failed and we were unable to recover it. 00:26:24.671 [2024-11-26 21:07:15.334348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.671 [2024-11-26 21:07:15.334378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.671 qpair failed and we were unable to recover it. 
00:26:24.671 [2024-11-26 21:07:15.334528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.671 [2024-11-26 21:07:15.334557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.671 qpair failed and we were unable to recover it. 00:26:24.671 [2024-11-26 21:07:15.334694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.671 [2024-11-26 21:07:15.334722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.671 qpair failed and we were unable to recover it. 00:26:24.671 [2024-11-26 21:07:15.334884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.671 [2024-11-26 21:07:15.334911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.672 qpair failed and we were unable to recover it. 00:26:24.672 [2024-11-26 21:07:15.335098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.672 [2024-11-26 21:07:15.335128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.672 qpair failed and we were unable to recover it. 00:26:24.672 [2024-11-26 21:07:15.335260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.672 [2024-11-26 21:07:15.335304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.672 qpair failed and we were unable to recover it. 
00:26:24.672 [2024-11-26 21:07:15.335454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.672 [2024-11-26 21:07:15.335484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.672 qpair failed and we were unable to recover it. 00:26:24.672 [2024-11-26 21:07:15.335623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.672 [2024-11-26 21:07:15.335652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.672 qpair failed and we were unable to recover it. 00:26:24.672 [2024-11-26 21:07:15.335822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.672 [2024-11-26 21:07:15.335849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.672 qpair failed and we were unable to recover it. 00:26:24.672 [2024-11-26 21:07:15.336031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.672 [2024-11-26 21:07:15.336061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.672 qpair failed and we were unable to recover it. 00:26:24.672 [2024-11-26 21:07:15.336222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.672 [2024-11-26 21:07:15.336271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.672 qpair failed and we were unable to recover it. 
00:26:24.672 [2024-11-26 21:07:15.336398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.672 [2024-11-26 21:07:15.336443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.672 qpair failed and we were unable to recover it. 00:26:24.672 [2024-11-26 21:07:15.336615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.672 [2024-11-26 21:07:15.336645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.672 qpair failed and we were unable to recover it. 00:26:24.672 [2024-11-26 21:07:15.336818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.672 [2024-11-26 21:07:15.336846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.672 qpair failed and we were unable to recover it. 00:26:24.672 [2024-11-26 21:07:15.337005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.672 [2024-11-26 21:07:15.337035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.672 qpair failed and we were unable to recover it. 00:26:24.672 [2024-11-26 21:07:15.337176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.672 [2024-11-26 21:07:15.337206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.672 qpair failed and we were unable to recover it. 
00:26:24.672 [2024-11-26 21:07:15.337351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.672 [2024-11-26 21:07:15.337381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.672 qpair failed and we were unable to recover it. 00:26:24.672 [2024-11-26 21:07:15.337511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.672 [2024-11-26 21:07:15.337538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.672 qpair failed and we were unable to recover it. 00:26:24.672 [2024-11-26 21:07:15.337677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.672 [2024-11-26 21:07:15.337713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.672 qpair failed and we were unable to recover it. 00:26:24.672 [2024-11-26 21:07:15.337843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.672 [2024-11-26 21:07:15.337870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.672 qpair failed and we were unable to recover it. 00:26:24.672 [2024-11-26 21:07:15.338003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.672 [2024-11-26 21:07:15.338034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.672 qpair failed and we were unable to recover it. 
00:26:24.672 [2024-11-26 21:07:15.338205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.672 [2024-11-26 21:07:15.338235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.672 qpair failed and we were unable to recover it. 00:26:24.672 [2024-11-26 21:07:15.338408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.672 [2024-11-26 21:07:15.338438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.672 qpair failed and we were unable to recover it. 00:26:24.672 [2024-11-26 21:07:15.338609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.672 [2024-11-26 21:07:15.338649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.672 qpair failed and we were unable to recover it. 00:26:24.672 [2024-11-26 21:07:15.338826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.672 [2024-11-26 21:07:15.338856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.672 qpair failed and we were unable to recover it. 00:26:24.672 [2024-11-26 21:07:15.338964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.672 [2024-11-26 21:07:15.338992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.672 qpair failed and we were unable to recover it. 
00:26:24.672 [2024-11-26 21:07:15.339094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.672 [2024-11-26 21:07:15.339123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.672 qpair failed and we were unable to recover it. 00:26:24.672 [2024-11-26 21:07:15.339277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.672 [2024-11-26 21:07:15.339325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.672 qpair failed and we were unable to recover it. 00:26:24.672 [2024-11-26 21:07:15.339488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.672 [2024-11-26 21:07:15.339515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.672 qpair failed and we were unable to recover it. 00:26:24.672 [2024-11-26 21:07:15.339619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.672 [2024-11-26 21:07:15.339646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.672 qpair failed and we were unable to recover it. 00:26:24.672 [2024-11-26 21:07:15.339848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.672 [2024-11-26 21:07:15.339892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.672 qpair failed and we were unable to recover it. 
00:26:24.672 [2024-11-26 21:07:15.340060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.672 [2024-11-26 21:07:15.340105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.672 qpair failed and we were unable to recover it.
00:26:24.672 [2024-11-26 21:07:15.340229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.672 [2024-11-26 21:07:15.340274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.672 qpair failed and we were unable to recover it.
00:26:24.672 [2024-11-26 21:07:15.340446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.672 [2024-11-26 21:07:15.340473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.672 qpair failed and we were unable to recover it.
00:26:24.672 [2024-11-26 21:07:15.340638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.672 [2024-11-26 21:07:15.340666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.672 qpair failed and we were unable to recover it.
00:26:24.672 [2024-11-26 21:07:15.340810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.672 [2024-11-26 21:07:15.340866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.672 qpair failed and we were unable to recover it.
00:26:24.672 [2024-11-26 21:07:15.341034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.672 [2024-11-26 21:07:15.341066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.672 qpair failed and we were unable to recover it.
00:26:24.672 [2024-11-26 21:07:15.341217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.672 [2024-11-26 21:07:15.341247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.672 qpair failed and we were unable to recover it.
00:26:24.672 [2024-11-26 21:07:15.341396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.672 [2024-11-26 21:07:15.341426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.672 qpair failed and we were unable to recover it.
00:26:24.672 [2024-11-26 21:07:15.341600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.672 [2024-11-26 21:07:15.341630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.672 qpair failed and we were unable to recover it.
00:26:24.672 [2024-11-26 21:07:15.341819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.672 [2024-11-26 21:07:15.341847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.672 qpair failed and we were unable to recover it.
00:26:24.672 [2024-11-26 21:07:15.342046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.672 [2024-11-26 21:07:15.342102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.672 qpair failed and we were unable to recover it.
00:26:24.673 [2024-11-26 21:07:15.342275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.673 [2024-11-26 21:07:15.342304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.673 qpair failed and we were unable to recover it.
00:26:24.673 [2024-11-26 21:07:15.342454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.673 [2024-11-26 21:07:15.342499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.673 qpair failed and we were unable to recover it.
00:26:24.673 [2024-11-26 21:07:15.342659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.673 [2024-11-26 21:07:15.342693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.673 qpair failed and we were unable to recover it.
00:26:24.673 [2024-11-26 21:07:15.342846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.673 [2024-11-26 21:07:15.342892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.673 qpair failed and we were unable to recover it.
00:26:24.673 [2024-11-26 21:07:15.343083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.673 [2024-11-26 21:07:15.343114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.673 qpair failed and we were unable to recover it.
00:26:24.673 [2024-11-26 21:07:15.343313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.673 [2024-11-26 21:07:15.343357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.673 qpair failed and we were unable to recover it.
00:26:24.673 [2024-11-26 21:07:15.343517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.673 [2024-11-26 21:07:15.343544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.673 qpair failed and we were unable to recover it.
00:26:24.673 [2024-11-26 21:07:15.343645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.673 [2024-11-26 21:07:15.343672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.673 qpair failed and we were unable to recover it.
00:26:24.673 [2024-11-26 21:07:15.343830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.673 [2024-11-26 21:07:15.343865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.673 qpair failed and we were unable to recover it.
00:26:24.673 [2024-11-26 21:07:15.344020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.673 [2024-11-26 21:07:15.344050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.673 qpair failed and we were unable to recover it.
00:26:24.673 [2024-11-26 21:07:15.344199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.673 [2024-11-26 21:07:15.344230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.673 qpair failed and we were unable to recover it.
00:26:24.673 [2024-11-26 21:07:15.344436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.673 [2024-11-26 21:07:15.344492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.673 qpair failed and we were unable to recover it.
00:26:24.673 [2024-11-26 21:07:15.344619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.673 [2024-11-26 21:07:15.344650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.673 qpair failed and we were unable to recover it.
00:26:24.673 [2024-11-26 21:07:15.344827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.673 [2024-11-26 21:07:15.344856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.673 qpair failed and we were unable to recover it.
00:26:24.673 [2024-11-26 21:07:15.344985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.673 [2024-11-26 21:07:15.345015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.673 qpair failed and we were unable to recover it.
00:26:24.673 [2024-11-26 21:07:15.345165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.673 [2024-11-26 21:07:15.345195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.673 qpair failed and we were unable to recover it.
00:26:24.673 [2024-11-26 21:07:15.345346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.673 [2024-11-26 21:07:15.345376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.673 qpair failed and we were unable to recover it.
00:26:24.673 [2024-11-26 21:07:15.345550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.673 [2024-11-26 21:07:15.345580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.673 qpair failed and we were unable to recover it.
00:26:24.673 [2024-11-26 21:07:15.345749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.673 [2024-11-26 21:07:15.345778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.673 qpair failed and we were unable to recover it.
00:26:24.673 [2024-11-26 21:07:15.345921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.673 [2024-11-26 21:07:15.345949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.673 qpair failed and we were unable to recover it.
00:26:24.673 [2024-11-26 21:07:15.346086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.673 [2024-11-26 21:07:15.346131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.673 qpair failed and we were unable to recover it.
00:26:24.673 [2024-11-26 21:07:15.346271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.673 [2024-11-26 21:07:15.346302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.673 qpair failed and we were unable to recover it.
00:26:24.673 [2024-11-26 21:07:15.346451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.673 [2024-11-26 21:07:15.346481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.673 qpair failed and we were unable to recover it.
00:26:24.673 [2024-11-26 21:07:15.346613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.673 [2024-11-26 21:07:15.346640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.673 qpair failed and we were unable to recover it.
00:26:24.673 [2024-11-26 21:07:15.346781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.673 [2024-11-26 21:07:15.346809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.673 qpair failed and we were unable to recover it.
00:26:24.673 [2024-11-26 21:07:15.346943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.673 [2024-11-26 21:07:15.346985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.673 qpair failed and we were unable to recover it.
00:26:24.673 [2024-11-26 21:07:15.347112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.673 [2024-11-26 21:07:15.347155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.673 qpair failed and we were unable to recover it.
00:26:24.673 [2024-11-26 21:07:15.347266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.673 [2024-11-26 21:07:15.347296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.673 qpair failed and we were unable to recover it.
00:26:24.673 [2024-11-26 21:07:15.347450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.673 [2024-11-26 21:07:15.347479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.673 qpair failed and we were unable to recover it.
00:26:24.673 [2024-11-26 21:07:15.347636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.673 [2024-11-26 21:07:15.347663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.673 qpair failed and we were unable to recover it.
00:26:24.673 [2024-11-26 21:07:15.347824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.673 [2024-11-26 21:07:15.347852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.673 qpair failed and we were unable to recover it.
00:26:24.673 [2024-11-26 21:07:15.348005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.673 [2024-11-26 21:07:15.348035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.673 qpair failed and we were unable to recover it.
00:26:24.673 [2024-11-26 21:07:15.348214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.673 [2024-11-26 21:07:15.348244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.673 qpair failed and we were unable to recover it.
00:26:24.673 [2024-11-26 21:07:15.348395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.673 [2024-11-26 21:07:15.348425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.673 qpair failed and we were unable to recover it.
00:26:24.673 [2024-11-26 21:07:15.348545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.673 [2024-11-26 21:07:15.348575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.673 qpair failed and we were unable to recover it.
00:26:24.673 [2024-11-26 21:07:15.348758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.673 [2024-11-26 21:07:15.348786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.673 qpair failed and we were unable to recover it.
00:26:24.673 [2024-11-26 21:07:15.348927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.673 [2024-11-26 21:07:15.348954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.673 qpair failed and we were unable to recover it.
00:26:24.673 [2024-11-26 21:07:15.349116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.673 [2024-11-26 21:07:15.349146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.673 qpair failed and we were unable to recover it.
00:26:24.673 [2024-11-26 21:07:15.349321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.674 [2024-11-26 21:07:15.349351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.674 qpair failed and we were unable to recover it.
00:26:24.674 [2024-11-26 21:07:15.349466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.674 [2024-11-26 21:07:15.349495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.674 qpair failed and we were unable to recover it.
00:26:24.674 [2024-11-26 21:07:15.349637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.674 [2024-11-26 21:07:15.349664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.674 qpair failed and we were unable to recover it.
00:26:24.674 [2024-11-26 21:07:15.349832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.674 [2024-11-26 21:07:15.349859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.674 qpair failed and we were unable to recover it.
00:26:24.674 [2024-11-26 21:07:15.350012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.674 [2024-11-26 21:07:15.350048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.674 qpair failed and we were unable to recover it.
00:26:24.674 [2024-11-26 21:07:15.350198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.674 [2024-11-26 21:07:15.350228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.674 qpair failed and we were unable to recover it.
00:26:24.674 [2024-11-26 21:07:15.350439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.674 [2024-11-26 21:07:15.350469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.674 qpair failed and we were unable to recover it.
00:26:24.674 [2024-11-26 21:07:15.350639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.674 [2024-11-26 21:07:15.350668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.674 qpair failed and we were unable to recover it.
00:26:24.674 [2024-11-26 21:07:15.350827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.674 [2024-11-26 21:07:15.350854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.674 qpair failed and we were unable to recover it.
00:26:24.674 [2024-11-26 21:07:15.350985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.674 [2024-11-26 21:07:15.351013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.674 qpair failed and we were unable to recover it.
00:26:24.674 [2024-11-26 21:07:15.351171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.674 [2024-11-26 21:07:15.351201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.674 qpair failed and we were unable to recover it.
00:26:24.674 [2024-11-26 21:07:15.351326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.674 [2024-11-26 21:07:15.351353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.674 qpair failed and we were unable to recover it.
00:26:24.674 [2024-11-26 21:07:15.351577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.674 [2024-11-26 21:07:15.351607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.674 qpair failed and we were unable to recover it.
00:26:24.674 [2024-11-26 21:07:15.351767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.674 [2024-11-26 21:07:15.351794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.674 qpair failed and we were unable to recover it.
00:26:24.674 [2024-11-26 21:07:15.351958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.674 [2024-11-26 21:07:15.352003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.674 qpair failed and we were unable to recover it.
00:26:24.674 [2024-11-26 21:07:15.352146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.674 [2024-11-26 21:07:15.352177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.674 qpair failed and we were unable to recover it.
00:26:24.674 [2024-11-26 21:07:15.352333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.674 [2024-11-26 21:07:15.352363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.674 qpair failed and we were unable to recover it.
00:26:24.674 [2024-11-26 21:07:15.352478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.674 [2024-11-26 21:07:15.352508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.674 qpair failed and we were unable to recover it.
00:26:24.674 [2024-11-26 21:07:15.352700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.674 [2024-11-26 21:07:15.352741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.674 qpair failed and we were unable to recover it.
00:26:24.674 [2024-11-26 21:07:15.352909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.674 [2024-11-26 21:07:15.352938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.674 qpair failed and we were unable to recover it.
00:26:24.674 [2024-11-26 21:07:15.353081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.674 [2024-11-26 21:07:15.353109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.674 qpair failed and we were unable to recover it.
00:26:24.674 [2024-11-26 21:07:15.353295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.674 [2024-11-26 21:07:15.353339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.674 qpair failed and we were unable to recover it.
00:26:24.674 [2024-11-26 21:07:15.353511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.674 [2024-11-26 21:07:15.353539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.674 qpair failed and we were unable to recover it.
00:26:24.674 [2024-11-26 21:07:15.353702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.674 [2024-11-26 21:07:15.353730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.674 qpair failed and we were unable to recover it.
00:26:24.674 [2024-11-26 21:07:15.353877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.674 [2024-11-26 21:07:15.353908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.674 qpair failed and we were unable to recover it.
00:26:24.674 [2024-11-26 21:07:15.354081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.674 [2024-11-26 21:07:15.354108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.674 qpair failed and we were unable to recover it.
00:26:24.674 [2024-11-26 21:07:15.354261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.674 [2024-11-26 21:07:15.354306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.674 qpair failed and we were unable to recover it.
00:26:24.674 [2024-11-26 21:07:15.354455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.674 [2024-11-26 21:07:15.354484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.674 qpair failed and we were unable to recover it.
00:26:24.674 [2024-11-26 21:07:15.354596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.674 [2024-11-26 21:07:15.354623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.674 qpair failed and we were unable to recover it.
00:26:24.674 [2024-11-26 21:07:15.354763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.674 [2024-11-26 21:07:15.354791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.674 qpair failed and we were unable to recover it.
00:26:24.674 [2024-11-26 21:07:15.354994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.674 [2024-11-26 21:07:15.355057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.674 qpair failed and we were unable to recover it.
00:26:24.674 [2024-11-26 21:07:15.355231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.674 [2024-11-26 21:07:15.355266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.674 qpair failed and we were unable to recover it.
00:26:24.674 [2024-11-26 21:07:15.355394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.674 [2024-11-26 21:07:15.355438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.674 qpair failed and we were unable to recover it.
00:26:24.674 [2024-11-26 21:07:15.355595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.674 [2024-11-26 21:07:15.355623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.674 qpair failed and we were unable to recover it.
00:26:24.674 [2024-11-26 21:07:15.355767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.674 [2024-11-26 21:07:15.355795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.674 qpair failed and we were unable to recover it.
00:26:24.674 [2024-11-26 21:07:15.355930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.674 [2024-11-26 21:07:15.355973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.674 qpair failed and we were unable to recover it.
00:26:24.674 [2024-11-26 21:07:15.356137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.674 [2024-11-26 21:07:15.356167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.674 qpair failed and we were unable to recover it.
00:26:24.674 [2024-11-26 21:07:15.356297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.674 [2024-11-26 21:07:15.356324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.674 qpair failed and we were unable to recover it.
00:26:24.674 [2024-11-26 21:07:15.356515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.674 [2024-11-26 21:07:15.356544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.674 qpair failed and we were unable to recover it.
00:26:24.675 [2024-11-26 21:07:15.356705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.675 [2024-11-26 21:07:15.356733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.675 qpair failed and we were unable to recover it.
00:26:24.675 [2024-11-26 21:07:15.356874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.675 [2024-11-26 21:07:15.356902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.675 qpair failed and we were unable to recover it.
00:26:24.675 [2024-11-26 21:07:15.357031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.675 [2024-11-26 21:07:15.357061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.675 qpair failed and we were unable to recover it.
00:26:24.675 [2024-11-26 21:07:15.357232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.675 [2024-11-26 21:07:15.357263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.675 qpair failed and we were unable to recover it.
00:26:24.675 [2024-11-26 21:07:15.357408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.675 [2024-11-26 21:07:15.357438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.675 qpair failed and we were unable to recover it.
00:26:24.675 [2024-11-26 21:07:15.357590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.675 [2024-11-26 21:07:15.357619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.675 qpair failed and we were unable to recover it.
00:26:24.675 [2024-11-26 21:07:15.357782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.675 [2024-11-26 21:07:15.357823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.675 qpair failed and we were unable to recover it.
00:26:24.675 [2024-11-26 21:07:15.357991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.675 [2024-11-26 21:07:15.358020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.675 qpair failed and we were unable to recover it.
00:26:24.675 [2024-11-26 21:07:15.358199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.675 [2024-11-26 21:07:15.358228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.675 qpair failed and we were unable to recover it.
00:26:24.675 [2024-11-26 21:07:15.358384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.675 [2024-11-26 21:07:15.358433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.675 qpair failed and we were unable to recover it.
00:26:24.675 [2024-11-26 21:07:15.358593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.675 [2024-11-26 21:07:15.358621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.675 qpair failed and we were unable to recover it.
00:26:24.675 [2024-11-26 21:07:15.358802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.675 [2024-11-26 21:07:15.358848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.675 qpair failed and we were unable to recover it.
00:26:24.675 [2024-11-26 21:07:15.359042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.675 [2024-11-26 21:07:15.359072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.675 qpair failed and we were unable to recover it.
00:26:24.675 [2024-11-26 21:07:15.359213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.675 [2024-11-26 21:07:15.359259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.675 qpair failed and we were unable to recover it.
00:26:24.675 [2024-11-26 21:07:15.359444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.675 [2024-11-26 21:07:15.359491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.675 qpair failed and we were unable to recover it.
00:26:24.675 [2024-11-26 21:07:15.359625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.675 [2024-11-26 21:07:15.359653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.675 qpair failed and we were unable to recover it.
00:26:24.675 [2024-11-26 21:07:15.359796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.675 [2024-11-26 21:07:15.359824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.675 qpair failed and we were unable to recover it.
00:26:24.675 [2024-11-26 21:07:15.360007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.675 [2024-11-26 21:07:15.360037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.675 qpair failed and we were unable to recover it.
00:26:24.675 [2024-11-26 21:07:15.360299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.675 [2024-11-26 21:07:15.360352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.675 qpair failed and we were unable to recover it.
00:26:24.675 [2024-11-26 21:07:15.360461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.675 [2024-11-26 21:07:15.360499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.675 qpair failed and we were unable to recover it.
00:26:24.675 [2024-11-26 21:07:15.360647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.675 [2024-11-26 21:07:15.360677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.675 qpair failed and we were unable to recover it.
00:26:24.675 [2024-11-26 21:07:15.360840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.675 [2024-11-26 21:07:15.360871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.675 qpair failed and we were unable to recover it.
00:26:24.675 [2024-11-26 21:07:15.361043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.675 [2024-11-26 21:07:15.361073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.675 qpair failed and we were unable to recover it.
00:26:24.675 [2024-11-26 21:07:15.361221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.675 [2024-11-26 21:07:15.361251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.675 qpair failed and we were unable to recover it.
00:26:24.675 [2024-11-26 21:07:15.361370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.675 [2024-11-26 21:07:15.361400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.675 qpair failed and we were unable to recover it.
00:26:24.675 [2024-11-26 21:07:15.361550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.675 [2024-11-26 21:07:15.361581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.675 qpair failed and we were unable to recover it.
00:26:24.675 [2024-11-26 21:07:15.361700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.675 [2024-11-26 21:07:15.361745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.675 qpair failed and we were unable to recover it.
00:26:24.675 [2024-11-26 21:07:15.361854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.675 [2024-11-26 21:07:15.361881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.675 qpair failed and we were unable to recover it. 00:26:24.675 [2024-11-26 21:07:15.362061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.675 [2024-11-26 21:07:15.362090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.675 qpair failed and we were unable to recover it. 00:26:24.675 [2024-11-26 21:07:15.362266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.675 [2024-11-26 21:07:15.362296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.675 qpair failed and we were unable to recover it. 00:26:24.675 [2024-11-26 21:07:15.362460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.675 [2024-11-26 21:07:15.362490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.675 qpair failed and we were unable to recover it. 00:26:24.675 [2024-11-26 21:07:15.362636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.675 [2024-11-26 21:07:15.362666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.675 qpair failed and we were unable to recover it. 
00:26:24.675 [2024-11-26 21:07:15.362833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.675 [2024-11-26 21:07:15.362872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.675 qpair failed and we were unable to recover it. 00:26:24.675 [2024-11-26 21:07:15.363021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.675 [2024-11-26 21:07:15.363049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.675 qpair failed and we were unable to recover it. 00:26:24.675 [2024-11-26 21:07:15.363200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.676 [2024-11-26 21:07:15.363229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.676 qpair failed and we were unable to recover it. 00:26:24.676 [2024-11-26 21:07:15.363373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.676 [2024-11-26 21:07:15.363405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.676 qpair failed and we were unable to recover it. 00:26:24.676 [2024-11-26 21:07:15.363585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.676 [2024-11-26 21:07:15.363615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.676 qpair failed and we were unable to recover it. 
00:26:24.676 [2024-11-26 21:07:15.363768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.676 [2024-11-26 21:07:15.363795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.676 qpair failed and we were unable to recover it. 00:26:24.676 [2024-11-26 21:07:15.363908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.676 [2024-11-26 21:07:15.363935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.676 qpair failed and we were unable to recover it. 00:26:24.676 [2024-11-26 21:07:15.364156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.676 [2024-11-26 21:07:15.364183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.676 qpair failed and we were unable to recover it. 00:26:24.676 [2024-11-26 21:07:15.364328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.676 [2024-11-26 21:07:15.364357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.676 qpair failed and we were unable to recover it. 00:26:24.676 [2024-11-26 21:07:15.364493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.676 [2024-11-26 21:07:15.364521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.676 qpair failed and we were unable to recover it. 
00:26:24.676 [2024-11-26 21:07:15.364636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.676 [2024-11-26 21:07:15.364663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.676 qpair failed and we were unable to recover it. 00:26:24.676 [2024-11-26 21:07:15.364813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.676 [2024-11-26 21:07:15.364839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.676 qpair failed and we were unable to recover it. 00:26:24.676 [2024-11-26 21:07:15.364948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.676 [2024-11-26 21:07:15.364973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.676 qpair failed and we were unable to recover it. 00:26:24.676 [2024-11-26 21:07:15.365145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.676 [2024-11-26 21:07:15.365172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.676 qpair failed and we were unable to recover it. 00:26:24.676 [2024-11-26 21:07:15.365336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.676 [2024-11-26 21:07:15.365371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.676 qpair failed and we were unable to recover it. 
00:26:24.676 [2024-11-26 21:07:15.365513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.676 [2024-11-26 21:07:15.365543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.676 qpair failed and we were unable to recover it. 00:26:24.676 [2024-11-26 21:07:15.365720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.676 [2024-11-26 21:07:15.365748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.676 qpair failed and we were unable to recover it. 00:26:24.676 [2024-11-26 21:07:15.365878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.676 [2024-11-26 21:07:15.365904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.676 qpair failed and we were unable to recover it. 00:26:24.676 [2024-11-26 21:07:15.366059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.676 [2024-11-26 21:07:15.366089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.676 qpair failed and we were unable to recover it. 00:26:24.676 [2024-11-26 21:07:15.366218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.676 [2024-11-26 21:07:15.366264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.676 qpair failed and we were unable to recover it. 
00:26:24.676 [2024-11-26 21:07:15.366388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.676 [2024-11-26 21:07:15.366417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.676 qpair failed and we were unable to recover it. 00:26:24.676 [2024-11-26 21:07:15.366532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.676 [2024-11-26 21:07:15.366562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.676 qpair failed and we were unable to recover it. 00:26:24.676 [2024-11-26 21:07:15.366753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.676 [2024-11-26 21:07:15.366794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.676 qpair failed and we were unable to recover it. 00:26:24.676 [2024-11-26 21:07:15.366910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.676 [2024-11-26 21:07:15.366940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.676 qpair failed and we were unable to recover it. 00:26:24.676 [2024-11-26 21:07:15.367106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.676 [2024-11-26 21:07:15.367152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.676 qpair failed and we were unable to recover it. 
00:26:24.676 [2024-11-26 21:07:15.367277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.676 [2024-11-26 21:07:15.367307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.676 qpair failed and we were unable to recover it. 00:26:24.676 [2024-11-26 21:07:15.367483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.676 [2024-11-26 21:07:15.367528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.676 qpair failed and we were unable to recover it. 00:26:24.676 [2024-11-26 21:07:15.367640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.676 [2024-11-26 21:07:15.367667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.676 qpair failed and we were unable to recover it. 00:26:24.676 [2024-11-26 21:07:15.367797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.676 [2024-11-26 21:07:15.367825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.676 qpair failed and we were unable to recover it. 00:26:24.676 [2024-11-26 21:07:15.367964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.676 [2024-11-26 21:07:15.368009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.676 qpair failed and we were unable to recover it. 
00:26:24.676 [2024-11-26 21:07:15.368157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.676 [2024-11-26 21:07:15.368187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.676 qpair failed and we were unable to recover it. 00:26:24.676 [2024-11-26 21:07:15.368316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.676 [2024-11-26 21:07:15.368360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.676 qpair failed and we were unable to recover it. 00:26:24.676 [2024-11-26 21:07:15.368537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.676 [2024-11-26 21:07:15.368566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.676 qpair failed and we were unable to recover it. 00:26:24.676 [2024-11-26 21:07:15.368701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.676 [2024-11-26 21:07:15.368744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.676 qpair failed and we were unable to recover it. 00:26:24.676 [2024-11-26 21:07:15.368886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.676 [2024-11-26 21:07:15.368913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.676 qpair failed and we were unable to recover it. 
00:26:24.676 [2024-11-26 21:07:15.369060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.676 [2024-11-26 21:07:15.369090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.676 qpair failed and we were unable to recover it. 00:26:24.676 [2024-11-26 21:07:15.369237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.676 [2024-11-26 21:07:15.369267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.676 qpair failed and we were unable to recover it. 00:26:24.676 [2024-11-26 21:07:15.369396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.676 [2024-11-26 21:07:15.369441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.676 qpair failed and we were unable to recover it. 00:26:24.676 [2024-11-26 21:07:15.369556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.676 [2024-11-26 21:07:15.369587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.676 qpair failed and we were unable to recover it. 00:26:24.676 [2024-11-26 21:07:15.369753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.676 [2024-11-26 21:07:15.369780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.676 qpair failed and we were unable to recover it. 
00:26:24.676 [2024-11-26 21:07:15.369895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.676 [2024-11-26 21:07:15.369921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.677 qpair failed and we were unable to recover it. 00:26:24.677 [2024-11-26 21:07:15.370069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.677 [2024-11-26 21:07:15.370114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.677 qpair failed and we were unable to recover it. 00:26:24.677 [2024-11-26 21:07:15.370237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.677 [2024-11-26 21:07:15.370267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.677 qpair failed and we were unable to recover it. 00:26:24.677 [2024-11-26 21:07:15.370399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.677 [2024-11-26 21:07:15.370442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.677 qpair failed and we were unable to recover it. 00:26:24.677 [2024-11-26 21:07:15.370623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.677 [2024-11-26 21:07:15.370652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.677 qpair failed and we were unable to recover it. 
00:26:24.677 [2024-11-26 21:07:15.370792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.677 [2024-11-26 21:07:15.370821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.677 qpair failed and we were unable to recover it. 00:26:24.677 [2024-11-26 21:07:15.370992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.677 [2024-11-26 21:07:15.371022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.677 qpair failed and we were unable to recover it. 00:26:24.677 [2024-11-26 21:07:15.371166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.677 [2024-11-26 21:07:15.371196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.677 qpair failed and we were unable to recover it. 00:26:24.677 [2024-11-26 21:07:15.371318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.677 [2024-11-26 21:07:15.371349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.677 qpair failed and we were unable to recover it. 00:26:24.677 [2024-11-26 21:07:15.371522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.677 [2024-11-26 21:07:15.371551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.677 qpair failed and we were unable to recover it. 
00:26:24.677 [2024-11-26 21:07:15.371700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.677 [2024-11-26 21:07:15.371730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.677 qpair failed and we were unable to recover it. 00:26:24.677 [2024-11-26 21:07:15.371875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.677 [2024-11-26 21:07:15.371902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.677 qpair failed and we were unable to recover it. 00:26:24.677 [2024-11-26 21:07:15.372047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.677 [2024-11-26 21:07:15.372076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.677 qpair failed and we were unable to recover it. 00:26:24.677 [2024-11-26 21:07:15.372251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.677 [2024-11-26 21:07:15.372281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.677 qpair failed and we were unable to recover it. 00:26:24.677 [2024-11-26 21:07:15.372418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.677 [2024-11-26 21:07:15.372448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.677 qpair failed and we were unable to recover it. 
00:26:24.677 [2024-11-26 21:07:15.372590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.677 [2024-11-26 21:07:15.372635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.677 qpair failed and we were unable to recover it. 00:26:24.677 [2024-11-26 21:07:15.372772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.677 [2024-11-26 21:07:15.372804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.677 qpair failed and we were unable to recover it. 00:26:24.677 [2024-11-26 21:07:15.372964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.677 [2024-11-26 21:07:15.373010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.677 qpair failed and we were unable to recover it. 00:26:24.677 [2024-11-26 21:07:15.373161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.677 [2024-11-26 21:07:15.373207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.677 qpair failed and we were unable to recover it. 00:26:24.677 [2024-11-26 21:07:15.373361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.677 [2024-11-26 21:07:15.373406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.677 qpair failed and we were unable to recover it. 
00:26:24.677 [2024-11-26 21:07:15.373556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.677 [2024-11-26 21:07:15.373583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.677 qpair failed and we were unable to recover it. 00:26:24.677 [2024-11-26 21:07:15.373704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.677 [2024-11-26 21:07:15.373752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.677 qpair failed and we were unable to recover it. 00:26:24.677 [2024-11-26 21:07:15.373903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.677 [2024-11-26 21:07:15.373932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.677 qpair failed and we were unable to recover it. 00:26:24.677 [2024-11-26 21:07:15.374054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.677 [2024-11-26 21:07:15.374085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.677 qpair failed and we were unable to recover it. 00:26:24.677 [2024-11-26 21:07:15.374256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.677 [2024-11-26 21:07:15.374287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.677 qpair failed and we were unable to recover it. 
00:26:24.677 [2024-11-26 21:07:15.374406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.677 [2024-11-26 21:07:15.374435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.677 qpair failed and we were unable to recover it. 00:26:24.677 [2024-11-26 21:07:15.374581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.677 [2024-11-26 21:07:15.374611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.677 qpair failed and we were unable to recover it. 00:26:24.677 [2024-11-26 21:07:15.374756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.677 [2024-11-26 21:07:15.374783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.677 qpair failed and we were unable to recover it. 00:26:24.677 [2024-11-26 21:07:15.374936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.677 [2024-11-26 21:07:15.374966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.677 qpair failed and we were unable to recover it. 00:26:24.677 [2024-11-26 21:07:15.375111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.677 [2024-11-26 21:07:15.375142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.677 qpair failed and we were unable to recover it. 
00:26:24.677 [2024-11-26 21:07:15.375265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.677 [2024-11-26 21:07:15.375294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.677 qpair failed and we were unable to recover it.
00:26:24.677 [... the three-line record above repeats continuously from 21:07:15.375468 through 21:07:15.395524, with connect() failing with errno = 111 on every attempt; the tqpair pointer alternates between 0x637fa0 and 0x7feef0000b90, and every attempt targets addr=10.0.0.2, port=4420 ...]
00:26:24.680 [2024-11-26 21:07:15.395644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.680 [2024-11-26 21:07:15.395672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.680 qpair failed and we were unable to recover it. 00:26:24.680 [2024-11-26 21:07:15.395816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.680 [2024-11-26 21:07:15.395860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.680 qpair failed and we were unable to recover it. 00:26:24.680 [2024-11-26 21:07:15.395978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.680 [2024-11-26 21:07:15.396021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.680 qpair failed and we were unable to recover it. 00:26:24.680 [2024-11-26 21:07:15.396181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.680 [2024-11-26 21:07:15.396208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.680 qpair failed and we were unable to recover it. 00:26:24.681 [2024-11-26 21:07:15.396320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.681 [2024-11-26 21:07:15.396347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.681 qpair failed and we were unable to recover it. 
00:26:24.681 [2024-11-26 21:07:15.396461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.681 [2024-11-26 21:07:15.396488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.681 qpair failed and we were unable to recover it. 00:26:24.681 [2024-11-26 21:07:15.396597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.681 [2024-11-26 21:07:15.396624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.681 qpair failed and we were unable to recover it. 00:26:24.681 [2024-11-26 21:07:15.396742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.681 [2024-11-26 21:07:15.396769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.681 qpair failed and we were unable to recover it. 00:26:24.681 [2024-11-26 21:07:15.396910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.681 [2024-11-26 21:07:15.396937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.681 qpair failed and we were unable to recover it. 00:26:24.681 [2024-11-26 21:07:15.397070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.681 [2024-11-26 21:07:15.397098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.681 qpair failed and we were unable to recover it. 
00:26:24.681 [2024-11-26 21:07:15.397215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.681 [2024-11-26 21:07:15.397242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.681 qpair failed and we were unable to recover it. 00:26:24.681 [2024-11-26 21:07:15.397377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.681 [2024-11-26 21:07:15.397405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.681 qpair failed and we were unable to recover it. 00:26:24.681 [2024-11-26 21:07:15.397534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.681 [2024-11-26 21:07:15.397560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.681 qpair failed and we were unable to recover it. 00:26:24.681 [2024-11-26 21:07:15.397673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.681 [2024-11-26 21:07:15.397726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.681 qpair failed and we were unable to recover it. 00:26:24.681 [2024-11-26 21:07:15.397869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.681 [2024-11-26 21:07:15.397899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.681 qpair failed and we were unable to recover it. 
00:26:24.681 [2024-11-26 21:07:15.398019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.681 [2024-11-26 21:07:15.398048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.681 qpair failed and we were unable to recover it. 00:26:24.681 [2024-11-26 21:07:15.398173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.681 [2024-11-26 21:07:15.398216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.681 qpair failed and we were unable to recover it. 00:26:24.681 [2024-11-26 21:07:15.398373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.681 [2024-11-26 21:07:15.398400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.681 qpair failed and we were unable to recover it. 00:26:24.681 [2024-11-26 21:07:15.398528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.681 [2024-11-26 21:07:15.398559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.681 qpair failed and we were unable to recover it. 00:26:24.681 [2024-11-26 21:07:15.398708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.681 [2024-11-26 21:07:15.398752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.681 qpair failed and we were unable to recover it. 
00:26:24.681 [2024-11-26 21:07:15.398884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.681 [2024-11-26 21:07:15.398914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.681 qpair failed and we were unable to recover it. 00:26:24.681 [2024-11-26 21:07:15.399062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.681 [2024-11-26 21:07:15.399092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.681 qpair failed and we were unable to recover it. 00:26:24.681 [2024-11-26 21:07:15.399217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.681 [2024-11-26 21:07:15.399260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.681 qpair failed and we were unable to recover it. 00:26:24.681 [2024-11-26 21:07:15.399398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.681 [2024-11-26 21:07:15.399428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.681 qpair failed and we were unable to recover it. 00:26:24.681 [2024-11-26 21:07:15.399540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.681 [2024-11-26 21:07:15.399569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.681 qpair failed and we were unable to recover it. 
00:26:24.681 [2024-11-26 21:07:15.399737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.681 [2024-11-26 21:07:15.399764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.681 qpair failed and we were unable to recover it. 00:26:24.681 [2024-11-26 21:07:15.399943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.681 [2024-11-26 21:07:15.399973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.681 qpair failed and we were unable to recover it. 00:26:24.681 [2024-11-26 21:07:15.400099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.681 [2024-11-26 21:07:15.400129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.681 qpair failed and we were unable to recover it. 00:26:24.681 [2024-11-26 21:07:15.400278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.681 [2024-11-26 21:07:15.400307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.681 qpair failed and we were unable to recover it. 00:26:24.681 [2024-11-26 21:07:15.400440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.681 [2024-11-26 21:07:15.400488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.681 qpair failed and we were unable to recover it. 
00:26:24.681 [2024-11-26 21:07:15.400640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.681 [2024-11-26 21:07:15.400667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.681 qpair failed and we were unable to recover it. 00:26:24.681 [2024-11-26 21:07:15.400812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.681 [2024-11-26 21:07:15.400839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.681 qpair failed and we were unable to recover it. 00:26:24.681 [2024-11-26 21:07:15.401005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.681 [2024-11-26 21:07:15.401049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.681 qpair failed and we were unable to recover it. 00:26:24.681 [2024-11-26 21:07:15.401207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.681 [2024-11-26 21:07:15.401252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.681 qpair failed and we were unable to recover it. 00:26:24.681 [2024-11-26 21:07:15.401435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.681 [2024-11-26 21:07:15.401486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.681 qpair failed and we were unable to recover it. 
00:26:24.681 [2024-11-26 21:07:15.401590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.681 [2024-11-26 21:07:15.401618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.681 qpair failed and we were unable to recover it. 00:26:24.681 [2024-11-26 21:07:15.401731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.681 [2024-11-26 21:07:15.401758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.681 qpair failed and we were unable to recover it. 00:26:24.681 [2024-11-26 21:07:15.401866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.681 [2024-11-26 21:07:15.401892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.681 qpair failed and we were unable to recover it. 00:26:24.681 [2024-11-26 21:07:15.402047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.681 [2024-11-26 21:07:15.402076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.681 qpair failed and we were unable to recover it. 00:26:24.681 [2024-11-26 21:07:15.402191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.681 [2024-11-26 21:07:15.402220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.681 qpair failed and we were unable to recover it. 
00:26:24.681 [2024-11-26 21:07:15.402361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.681 [2024-11-26 21:07:15.402390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.681 qpair failed and we were unable to recover it. 00:26:24.681 [2024-11-26 21:07:15.402538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.681 [2024-11-26 21:07:15.402568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.681 qpair failed and we were unable to recover it. 00:26:24.681 [2024-11-26 21:07:15.402709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.682 [2024-11-26 21:07:15.402754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.682 qpair failed and we were unable to recover it. 00:26:24.682 [2024-11-26 21:07:15.402892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.682 [2024-11-26 21:07:15.402919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.682 qpair failed and we were unable to recover it. 00:26:24.682 [2024-11-26 21:07:15.403045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.682 [2024-11-26 21:07:15.403074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.682 qpair failed and we were unable to recover it. 
00:26:24.682 [2024-11-26 21:07:15.403196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.682 [2024-11-26 21:07:15.403231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.682 qpair failed and we were unable to recover it. 00:26:24.682 [2024-11-26 21:07:15.403407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.682 [2024-11-26 21:07:15.403437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.682 qpair failed and we were unable to recover it. 00:26:24.682 [2024-11-26 21:07:15.403568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.682 [2024-11-26 21:07:15.403595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.682 qpair failed and we were unable to recover it. 00:26:24.682 [2024-11-26 21:07:15.403735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.682 [2024-11-26 21:07:15.403762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.682 qpair failed and we were unable to recover it. 00:26:24.682 [2024-11-26 21:07:15.403868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.682 [2024-11-26 21:07:15.403895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.682 qpair failed and we were unable to recover it. 
00:26:24.682 [2024-11-26 21:07:15.404058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.682 [2024-11-26 21:07:15.404105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.682 qpair failed and we were unable to recover it. 00:26:24.682 [2024-11-26 21:07:15.404260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.682 [2024-11-26 21:07:15.404304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.682 qpair failed and we were unable to recover it. 00:26:24.682 [2024-11-26 21:07:15.404455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.682 [2024-11-26 21:07:15.404499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.682 qpair failed and we were unable to recover it. 00:26:24.682 [2024-11-26 21:07:15.404633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.682 [2024-11-26 21:07:15.404660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.682 qpair failed and we were unable to recover it. 00:26:24.682 [2024-11-26 21:07:15.404802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.682 [2024-11-26 21:07:15.404830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.682 qpair failed and we were unable to recover it. 
00:26:24.682 [2024-11-26 21:07:15.404954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.682 [2024-11-26 21:07:15.404998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.682 qpair failed and we were unable to recover it. 00:26:24.682 [2024-11-26 21:07:15.405112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.682 [2024-11-26 21:07:15.405139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.682 qpair failed and we were unable to recover it. 00:26:24.682 [2024-11-26 21:07:15.405263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.682 [2024-11-26 21:07:15.405293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.682 qpair failed and we were unable to recover it. 00:26:24.682 [2024-11-26 21:07:15.405468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.682 [2024-11-26 21:07:15.405495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.682 qpair failed and we were unable to recover it. 00:26:24.682 [2024-11-26 21:07:15.405643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.682 [2024-11-26 21:07:15.405671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.682 qpair failed and we were unable to recover it. 
00:26:24.682 [2024-11-26 21:07:15.405818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.682 [2024-11-26 21:07:15.405845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.682 qpair failed and we were unable to recover it. 00:26:24.682 [2024-11-26 21:07:15.405978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.682 [2024-11-26 21:07:15.406023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.682 qpair failed and we were unable to recover it. 00:26:24.682 [2024-11-26 21:07:15.406140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.682 [2024-11-26 21:07:15.406185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.682 qpair failed and we were unable to recover it. 00:26:24.682 [2024-11-26 21:07:15.406341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.682 [2024-11-26 21:07:15.406387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.682 qpair failed and we were unable to recover it. 00:26:24.682 [2024-11-26 21:07:15.406520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.682 [2024-11-26 21:07:15.406547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.682 qpair failed and we were unable to recover it. 
00:26:24.682 [2024-11-26 21:07:15.406653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.682 [2024-11-26 21:07:15.406680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.682 qpair failed and we were unable to recover it. 00:26:24.682 [2024-11-26 21:07:15.406854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.682 [2024-11-26 21:07:15.406881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.682 qpair failed and we were unable to recover it. 00:26:24.682 [2024-11-26 21:07:15.407042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.682 [2024-11-26 21:07:15.407087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.682 qpair failed and we were unable to recover it. 00:26:24.682 [2024-11-26 21:07:15.407209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.682 [2024-11-26 21:07:15.407236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.682 qpair failed and we were unable to recover it. 00:26:24.682 [2024-11-26 21:07:15.407405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.682 [2024-11-26 21:07:15.407432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.682 qpair failed and we were unable to recover it. 
00:26:24.682 [2024-11-26 21:07:15.407546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.682 [2024-11-26 21:07:15.407574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.682 qpair failed and we were unable to recover it. 00:26:24.682 [2024-11-26 21:07:15.407700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.682 [2024-11-26 21:07:15.407728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.682 qpair failed and we were unable to recover it. 00:26:24.682 [2024-11-26 21:07:15.407865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.682 [2024-11-26 21:07:15.407892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.682 qpair failed and we were unable to recover it. 00:26:24.682 [2024-11-26 21:07:15.408031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.682 [2024-11-26 21:07:15.408058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.682 qpair failed and we were unable to recover it. 00:26:24.682 [2024-11-26 21:07:15.408195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.682 [2024-11-26 21:07:15.408222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.682 qpair failed and we were unable to recover it. 
00:26:24.682 [2024-11-26 21:07:15.408325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.682 [2024-11-26 21:07:15.408352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.682 qpair failed and we were unable to recover it.
[log condensed: the three-line sequence above (posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock connection error; "qpair failed and we were unable to recover it.") repeats verbatim ~115 more times between 21:07:15.408 and 21:07:15.428, varying only in timestamp and tqpair pointer (0x7feef0000b90, 0x637fa0, 0x7feef8000b90); every attempt targets addr=10.0.0.2, port=4420]
00:26:24.685 [2024-11-26 21:07:15.428853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.685 [2024-11-26 21:07:15.428882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.686 qpair failed and we were unable to recover it. 00:26:24.686 [2024-11-26 21:07:15.429008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.686 [2024-11-26 21:07:15.429038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.686 qpair failed and we were unable to recover it. 00:26:24.686 [2024-11-26 21:07:15.429187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.686 [2024-11-26 21:07:15.429224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.686 qpair failed and we were unable to recover it. 00:26:24.686 [2024-11-26 21:07:15.429342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.686 [2024-11-26 21:07:15.429372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.686 qpair failed and we were unable to recover it. 00:26:24.686 [2024-11-26 21:07:15.429493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.686 [2024-11-26 21:07:15.429533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.686 qpair failed and we were unable to recover it. 
00:26:24.686 [2024-11-26 21:07:15.429682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.686 [2024-11-26 21:07:15.429739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.686 qpair failed and we were unable to recover it. 00:26:24.686 [2024-11-26 21:07:15.429848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.686 [2024-11-26 21:07:15.429876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.686 qpair failed and we were unable to recover it. 00:26:24.686 [2024-11-26 21:07:15.430016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.686 [2024-11-26 21:07:15.430044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.686 qpair failed and we were unable to recover it. 00:26:24.686 [2024-11-26 21:07:15.430179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.686 [2024-11-26 21:07:15.430225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.686 qpair failed and we were unable to recover it. 00:26:24.686 [2024-11-26 21:07:15.430359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.686 [2024-11-26 21:07:15.430405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.686 qpair failed and we were unable to recover it. 
00:26:24.686 [2024-11-26 21:07:15.430542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.686 [2024-11-26 21:07:15.430569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.686 qpair failed and we were unable to recover it. 00:26:24.686 [2024-11-26 21:07:15.430707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.686 [2024-11-26 21:07:15.430739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.686 qpair failed and we were unable to recover it. 00:26:24.686 [2024-11-26 21:07:15.430875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.686 [2024-11-26 21:07:15.430906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.686 qpair failed and we were unable to recover it. 00:26:24.686 [2024-11-26 21:07:15.431036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.686 [2024-11-26 21:07:15.431069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.686 qpair failed and we were unable to recover it. 00:26:24.686 [2024-11-26 21:07:15.431266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.686 [2024-11-26 21:07:15.431297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.686 qpair failed and we were unable to recover it. 
00:26:24.686 [2024-11-26 21:07:15.431475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.686 [2024-11-26 21:07:15.431507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.686 qpair failed and we were unable to recover it. 00:26:24.686 [2024-11-26 21:07:15.431631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.686 [2024-11-26 21:07:15.431667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.686 qpair failed and we were unable to recover it. 00:26:24.686 [2024-11-26 21:07:15.431813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.686 [2024-11-26 21:07:15.431841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.686 qpair failed and we were unable to recover it. 00:26:24.686 [2024-11-26 21:07:15.431948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.686 [2024-11-26 21:07:15.431999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.686 qpair failed and we were unable to recover it. 00:26:24.686 [2024-11-26 21:07:15.432145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.686 [2024-11-26 21:07:15.432177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.686 qpair failed and we were unable to recover it. 
00:26:24.686 [2024-11-26 21:07:15.432317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.686 [2024-11-26 21:07:15.432348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.686 qpair failed and we were unable to recover it. 00:26:24.686 [2024-11-26 21:07:15.432487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.686 [2024-11-26 21:07:15.432533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.686 qpair failed and we were unable to recover it. 00:26:24.686 [2024-11-26 21:07:15.432729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.686 [2024-11-26 21:07:15.432758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.686 qpair failed and we were unable to recover it. 00:26:24.686 [2024-11-26 21:07:15.432882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.686 [2024-11-26 21:07:15.432912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.686 qpair failed and we were unable to recover it. 00:26:24.686 [2024-11-26 21:07:15.433090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.686 [2024-11-26 21:07:15.433120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.686 qpair failed and we were unable to recover it. 
00:26:24.686 [2024-11-26 21:07:15.433281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.686 [2024-11-26 21:07:15.433336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.686 qpair failed and we were unable to recover it. 00:26:24.686 [2024-11-26 21:07:15.433481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.686 [2024-11-26 21:07:15.433512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.686 qpair failed and we were unable to recover it. 00:26:24.686 [2024-11-26 21:07:15.433650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.686 [2024-11-26 21:07:15.433678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.686 qpair failed and we were unable to recover it. 00:26:24.686 [2024-11-26 21:07:15.433824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.686 [2024-11-26 21:07:15.433853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.686 qpair failed and we were unable to recover it. 00:26:24.686 [2024-11-26 21:07:15.433994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.686 [2024-11-26 21:07:15.434025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.686 qpair failed and we were unable to recover it. 
00:26:24.686 [2024-11-26 21:07:15.434178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.686 [2024-11-26 21:07:15.434208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.686 qpair failed and we were unable to recover it. 00:26:24.686 [2024-11-26 21:07:15.434326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.686 [2024-11-26 21:07:15.434361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.686 qpair failed and we were unable to recover it. 00:26:24.686 [2024-11-26 21:07:15.434522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.686 [2024-11-26 21:07:15.434550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.686 qpair failed and we were unable to recover it. 00:26:24.686 [2024-11-26 21:07:15.434701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.686 [2024-11-26 21:07:15.434729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.686 qpair failed and we were unable to recover it. 00:26:24.686 [2024-11-26 21:07:15.434849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.686 [2024-11-26 21:07:15.434880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.686 qpair failed and we were unable to recover it. 
00:26:24.686 [2024-11-26 21:07:15.435090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.686 [2024-11-26 21:07:15.435150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.686 qpair failed and we were unable to recover it. 00:26:24.686 [2024-11-26 21:07:15.435310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.686 [2024-11-26 21:07:15.435341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.686 qpair failed and we were unable to recover it. 00:26:24.686 [2024-11-26 21:07:15.435491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.686 [2024-11-26 21:07:15.435519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.686 qpair failed and we were unable to recover it. 00:26:24.686 [2024-11-26 21:07:15.435630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.686 [2024-11-26 21:07:15.435658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.686 qpair failed and we were unable to recover it. 00:26:24.687 [2024-11-26 21:07:15.435792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.687 [2024-11-26 21:07:15.435838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.687 qpair failed and we were unable to recover it. 
00:26:24.687 [2024-11-26 21:07:15.435993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.687 [2024-11-26 21:07:15.436037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.687 qpair failed and we were unable to recover it. 00:26:24.687 [2024-11-26 21:07:15.436183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.687 [2024-11-26 21:07:15.436236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.687 qpair failed and we were unable to recover it. 00:26:24.687 [2024-11-26 21:07:15.436380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.687 [2024-11-26 21:07:15.436408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.687 qpair failed and we were unable to recover it. 00:26:24.687 [2024-11-26 21:07:15.436528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.687 [2024-11-26 21:07:15.436562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.687 qpair failed and we were unable to recover it. 00:26:24.687 [2024-11-26 21:07:15.436682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.687 [2024-11-26 21:07:15.436740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.687 qpair failed and we were unable to recover it. 
00:26:24.687 [2024-11-26 21:07:15.436866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.687 [2024-11-26 21:07:15.436896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.687 qpair failed and we were unable to recover it. 00:26:24.687 [2024-11-26 21:07:15.437080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.687 [2024-11-26 21:07:15.437111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.687 qpair failed and we were unable to recover it. 00:26:24.687 [2024-11-26 21:07:15.437340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.687 [2024-11-26 21:07:15.437401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.687 qpair failed and we were unable to recover it. 00:26:24.687 [2024-11-26 21:07:15.437590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.687 [2024-11-26 21:07:15.437620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.687 qpair failed and we were unable to recover it. 00:26:24.687 [2024-11-26 21:07:15.437790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.687 [2024-11-26 21:07:15.437820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.687 qpair failed and we were unable to recover it. 
00:26:24.687 [2024-11-26 21:07:15.437931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.687 [2024-11-26 21:07:15.437965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.687 qpair failed and we were unable to recover it. 00:26:24.687 [2024-11-26 21:07:15.438118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.687 [2024-11-26 21:07:15.438148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.687 qpair failed and we were unable to recover it. 00:26:24.687 [2024-11-26 21:07:15.438266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.687 [2024-11-26 21:07:15.438295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.687 qpair failed and we were unable to recover it. 00:26:24.687 [2024-11-26 21:07:15.438418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.687 [2024-11-26 21:07:15.438448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.687 qpair failed and we were unable to recover it. 00:26:24.687 [2024-11-26 21:07:15.438579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.687 [2024-11-26 21:07:15.438606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.687 qpair failed and we were unable to recover it. 
00:26:24.687 [2024-11-26 21:07:15.438726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.687 [2024-11-26 21:07:15.438753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.687 qpair failed and we were unable to recover it. 00:26:24.687 [2024-11-26 21:07:15.438870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.687 [2024-11-26 21:07:15.438898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.687 qpair failed and we were unable to recover it. 00:26:24.687 [2024-11-26 21:07:15.439061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.687 [2024-11-26 21:07:15.439091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.687 qpair failed and we were unable to recover it. 00:26:24.687 [2024-11-26 21:07:15.439239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.687 [2024-11-26 21:07:15.439269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.687 qpair failed and we were unable to recover it. 00:26:24.687 [2024-11-26 21:07:15.439422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.687 [2024-11-26 21:07:15.439452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.687 qpair failed and we were unable to recover it. 
00:26:24.687 [2024-11-26 21:07:15.439575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.687 [2024-11-26 21:07:15.439605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.687 qpair failed and we were unable to recover it. 00:26:24.687 [2024-11-26 21:07:15.439762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.687 [2024-11-26 21:07:15.439791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.687 qpair failed and we were unable to recover it. 00:26:24.687 [2024-11-26 21:07:15.439927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.687 [2024-11-26 21:07:15.439961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.687 qpair failed and we were unable to recover it. 00:26:24.687 [2024-11-26 21:07:15.440107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.687 [2024-11-26 21:07:15.440138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.687 qpair failed and we were unable to recover it. 00:26:24.687 [2024-11-26 21:07:15.440289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.687 [2024-11-26 21:07:15.440319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.687 qpair failed and we were unable to recover it. 
00:26:24.687 [2024-11-26 21:07:15.440437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.687 [2024-11-26 21:07:15.440482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.687 qpair failed and we were unable to recover it. 00:26:24.687 [2024-11-26 21:07:15.440598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.687 [2024-11-26 21:07:15.440628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.687 qpair failed and we were unable to recover it. 00:26:24.687 [2024-11-26 21:07:15.440770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.687 [2024-11-26 21:07:15.440797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.687 qpair failed and we were unable to recover it. 00:26:24.687 [2024-11-26 21:07:15.440933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.687 [2024-11-26 21:07:15.440960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.687 qpair failed and we were unable to recover it. 00:26:24.687 [2024-11-26 21:07:15.441092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.687 [2024-11-26 21:07:15.441122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.687 qpair failed and we were unable to recover it. 
00:26:24.687 [2024-11-26 21:07:15.441255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.687 [2024-11-26 21:07:15.441307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.687 qpair failed and we were unable to recover it. 00:26:24.687 [2024-11-26 21:07:15.441432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.687 [2024-11-26 21:07:15.441462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.687 qpair failed and we were unable to recover it. 00:26:24.687 [2024-11-26 21:07:15.441605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.687 [2024-11-26 21:07:15.441636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.687 qpair failed and we were unable to recover it. 00:26:24.687 [2024-11-26 21:07:15.441785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.687 [2024-11-26 21:07:15.441812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.687 qpair failed and we were unable to recover it. 00:26:24.687 [2024-11-26 21:07:15.441926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.687 [2024-11-26 21:07:15.441952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.687 qpair failed and we were unable to recover it. 
00:26:24.687 [2024-11-26 21:07:15.442093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.687 [2024-11-26 21:07:15.442123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.687 qpair failed and we were unable to recover it. 00:26:24.687 [2024-11-26 21:07:15.442266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.687 [2024-11-26 21:07:15.442295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.687 qpair failed and we were unable to recover it. 00:26:24.687 [2024-11-26 21:07:15.442472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.688 [2024-11-26 21:07:15.442502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.688 qpair failed and we were unable to recover it. 00:26:24.688 [2024-11-26 21:07:15.442621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.688 [2024-11-26 21:07:15.442651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.688 qpair failed and we were unable to recover it. 00:26:24.688 [2024-11-26 21:07:15.442793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.688 [2024-11-26 21:07:15.442820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.688 qpair failed and we were unable to recover it. 
00:26:24.688 [2024-11-26 21:07:15.442941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.688 [2024-11-26 21:07:15.443001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.688 qpair failed and we were unable to recover it. 00:26:24.688 [2024-11-26 21:07:15.443135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.688 [2024-11-26 21:07:15.443168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.688 qpair failed and we were unable to recover it. 00:26:24.688 [2024-11-26 21:07:15.443343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.688 [2024-11-26 21:07:15.443373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.688 qpair failed and we were unable to recover it. 00:26:24.688 [2024-11-26 21:07:15.443519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.688 [2024-11-26 21:07:15.443549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.688 qpair failed and we were unable to recover it. 00:26:24.688 [2024-11-26 21:07:15.443710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.688 [2024-11-26 21:07:15.443754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.688 qpair failed and we were unable to recover it. 
00:26:24.688 [2024-11-26 21:07:15.443857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.688 [2024-11-26 21:07:15.443884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.688 qpair failed and we were unable to recover it.
00:26:24.688 [2024-11-26 21:07:15.443990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.688 [2024-11-26 21:07:15.444017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.688 qpair failed and we were unable to recover it.
00:26:24.688 [2024-11-26 21:07:15.444158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.688 [2024-11-26 21:07:15.444188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.688 qpair failed and we were unable to recover it.
00:26:24.688 [2024-11-26 21:07:15.444335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.688 [2024-11-26 21:07:15.444365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.688 qpair failed and we were unable to recover it.
00:26:24.688 [2024-11-26 21:07:15.444482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.688 [2024-11-26 21:07:15.444512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.688 qpair failed and we were unable to recover it.
00:26:24.688 [2024-11-26 21:07:15.444665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.688 [2024-11-26 21:07:15.444717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.688 qpair failed and we were unable to recover it.
00:26:24.688 [2024-11-26 21:07:15.444856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.688 [2024-11-26 21:07:15.444883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.688 qpair failed and we were unable to recover it.
00:26:24.688 [2024-11-26 21:07:15.445031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.688 [2024-11-26 21:07:15.445061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.688 qpair failed and we were unable to recover it.
00:26:24.688 [2024-11-26 21:07:15.445212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.688 [2024-11-26 21:07:15.445254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.688 qpair failed and we were unable to recover it.
00:26:24.688 [2024-11-26 21:07:15.445473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.688 [2024-11-26 21:07:15.445504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.688 qpair failed and we were unable to recover it.
00:26:24.688 [2024-11-26 21:07:15.445649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.688 [2024-11-26 21:07:15.445679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.688 qpair failed and we were unable to recover it.
00:26:24.688 [2024-11-26 21:07:15.445844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.688 [2024-11-26 21:07:15.445872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.688 qpair failed and we were unable to recover it.
00:26:24.688 [2024-11-26 21:07:15.445988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.688 [2024-11-26 21:07:15.446020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.688 qpair failed and we were unable to recover it.
00:26:24.688 [2024-11-26 21:07:15.446158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.688 [2024-11-26 21:07:15.446202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.688 qpair failed and we were unable to recover it.
00:26:24.688 [2024-11-26 21:07:15.446343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.688 [2024-11-26 21:07:15.446373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.688 qpair failed and we were unable to recover it.
00:26:24.688 [2024-11-26 21:07:15.446541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.688 [2024-11-26 21:07:15.446572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.688 qpair failed and we were unable to recover it.
00:26:24.688 [2024-11-26 21:07:15.446703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.688 [2024-11-26 21:07:15.446750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.688 qpair failed and we were unable to recover it.
00:26:24.688 [2024-11-26 21:07:15.446888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.688 [2024-11-26 21:07:15.446915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.688 qpair failed and we were unable to recover it.
00:26:24.688 [2024-11-26 21:07:15.447073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.688 [2024-11-26 21:07:15.447103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.688 qpair failed and we were unable to recover it.
00:26:24.688 [2024-11-26 21:07:15.447273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.688 [2024-11-26 21:07:15.447303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.688 qpair failed and we were unable to recover it.
00:26:24.688 [2024-11-26 21:07:15.447450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.688 [2024-11-26 21:07:15.447480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.688 qpair failed and we were unable to recover it.
00:26:24.688 [2024-11-26 21:07:15.447639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.688 [2024-11-26 21:07:15.447666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.688 qpair failed and we were unable to recover it.
00:26:24.688 [2024-11-26 21:07:15.447833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.688 [2024-11-26 21:07:15.447860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.688 qpair failed and we were unable to recover it.
00:26:24.688 [2024-11-26 21:07:15.447968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.688 [2024-11-26 21:07:15.447994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.688 qpair failed and we were unable to recover it.
00:26:24.688 [2024-11-26 21:07:15.448139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.688 [2024-11-26 21:07:15.448166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.688 qpair failed and we were unable to recover it.
00:26:24.688 [2024-11-26 21:07:15.448274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.688 [2024-11-26 21:07:15.448316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.688 qpair failed and we were unable to recover it.
00:26:24.688 [2024-11-26 21:07:15.448469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.688 [2024-11-26 21:07:15.448498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.688 qpair failed and we were unable to recover it.
00:26:24.688 [2024-11-26 21:07:15.448625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.688 [2024-11-26 21:07:15.448652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.688 qpair failed and we were unable to recover it.
00:26:24.688 [2024-11-26 21:07:15.448784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.688 [2024-11-26 21:07:15.448811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.688 qpair failed and we were unable to recover it.
00:26:24.688 [2024-11-26 21:07:15.448950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.688 [2024-11-26 21:07:15.448993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.688 qpair failed and we were unable to recover it.
00:26:24.688 [2024-11-26 21:07:15.449133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.689 [2024-11-26 21:07:15.449159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.689 qpair failed and we were unable to recover it.
00:26:24.689 [2024-11-26 21:07:15.449324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.689 [2024-11-26 21:07:15.449369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.689 qpair failed and we were unable to recover it.
00:26:24.689 [2024-11-26 21:07:15.449540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.689 [2024-11-26 21:07:15.449570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.689 qpair failed and we were unable to recover it.
00:26:24.689 [2024-11-26 21:07:15.449767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.689 [2024-11-26 21:07:15.449794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.689 qpair failed and we were unable to recover it.
00:26:24.689 [2024-11-26 21:07:15.449905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.689 [2024-11-26 21:07:15.449932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.689 qpair failed and we were unable to recover it.
00:26:24.689 [2024-11-26 21:07:15.450070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.689 [2024-11-26 21:07:15.450100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.689 qpair failed and we were unable to recover it.
00:26:24.689 [2024-11-26 21:07:15.450234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.689 [2024-11-26 21:07:15.450262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.689 qpair failed and we were unable to recover it.
00:26:24.689 [2024-11-26 21:07:15.450413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.689 [2024-11-26 21:07:15.450440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.689 qpair failed and we were unable to recover it.
00:26:24.689 [2024-11-26 21:07:15.450583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.689 [2024-11-26 21:07:15.450613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.689 qpair failed and we were unable to recover it.
00:26:24.689 [2024-11-26 21:07:15.450783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.689 [2024-11-26 21:07:15.450816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.689 qpair failed and we were unable to recover it.
00:26:24.689 [2024-11-26 21:07:15.450928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.689 [2024-11-26 21:07:15.450961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.689 qpair failed and we were unable to recover it.
00:26:24.689 [2024-11-26 21:07:15.451148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.689 [2024-11-26 21:07:15.451176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.689 qpair failed and we were unable to recover it.
00:26:24.689 [2024-11-26 21:07:15.451336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.689 [2024-11-26 21:07:15.451364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.689 qpair failed and we were unable to recover it.
00:26:24.689 [2024-11-26 21:07:15.451515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.689 [2024-11-26 21:07:15.451548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.689 qpair failed and we were unable to recover it.
00:26:24.689 [2024-11-26 21:07:15.451717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.689 [2024-11-26 21:07:15.451744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.689 qpair failed and we were unable to recover it.
00:26:24.689 [2024-11-26 21:07:15.451871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.689 [2024-11-26 21:07:15.451898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.689 qpair failed and we were unable to recover it.
00:26:24.689 [2024-11-26 21:07:15.452045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.689 [2024-11-26 21:07:15.452088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.689 qpair failed and we were unable to recover it.
00:26:24.689 [2024-11-26 21:07:15.452262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.689 [2024-11-26 21:07:15.452292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.689 qpair failed and we were unable to recover it.
00:26:24.689 [2024-11-26 21:07:15.452420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.689 [2024-11-26 21:07:15.452447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.689 qpair failed and we were unable to recover it.
00:26:24.689 [2024-11-26 21:07:15.452578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.689 [2024-11-26 21:07:15.452605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.689 qpair failed and we were unable to recover it.
00:26:24.689 [2024-11-26 21:07:15.452789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.689 [2024-11-26 21:07:15.452816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.689 qpair failed and we were unable to recover it.
00:26:24.689 [2024-11-26 21:07:15.452917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.689 [2024-11-26 21:07:15.452949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.689 qpair failed and we were unable to recover it.
00:26:24.689 [2024-11-26 21:07:15.453074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.689 [2024-11-26 21:07:15.453101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.689 qpair failed and we were unable to recover it.
00:26:24.689 [2024-11-26 21:07:15.453244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.689 [2024-11-26 21:07:15.453273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.689 qpair failed and we were unable to recover it.
00:26:24.689 [2024-11-26 21:07:15.453395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.689 [2024-11-26 21:07:15.453421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.689 qpair failed and we were unable to recover it.
00:26:24.689 [2024-11-26 21:07:15.453531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.689 [2024-11-26 21:07:15.453558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.689 qpair failed and we were unable to recover it.
00:26:24.689 [2024-11-26 21:07:15.453669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.689 [2024-11-26 21:07:15.453703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.689 qpair failed and we were unable to recover it.
00:26:24.689 [2024-11-26 21:07:15.453848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.689 [2024-11-26 21:07:15.453874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.689 qpair failed and we were unable to recover it.
00:26:24.689 [2024-11-26 21:07:15.453991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.689 [2024-11-26 21:07:15.454018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.689 qpair failed and we were unable to recover it.
00:26:24.689 [2024-11-26 21:07:15.454123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.689 [2024-11-26 21:07:15.454150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.689 qpair failed and we were unable to recover it.
00:26:24.689 [2024-11-26 21:07:15.454281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.689 [2024-11-26 21:07:15.454308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.689 qpair failed and we were unable to recover it.
00:26:24.689 [2024-11-26 21:07:15.454459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.689 [2024-11-26 21:07:15.454489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.689 qpair failed and we were unable to recover it.
00:26:24.689 [2024-11-26 21:07:15.454661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.689 [2024-11-26 21:07:15.454699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.689 qpair failed and we were unable to recover it.
00:26:24.689 [2024-11-26 21:07:15.454841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.689 [2024-11-26 21:07:15.454867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.689 qpair failed and we were unable to recover it.
00:26:24.689 [2024-11-26 21:07:15.455005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.689 [2024-11-26 21:07:15.455032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.689 qpair failed and we were unable to recover it.
00:26:24.689 [2024-11-26 21:07:15.455146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.690 [2024-11-26 21:07:15.455173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.690 qpair failed and we were unable to recover it.
00:26:24.690 [2024-11-26 21:07:15.455310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.690 [2024-11-26 21:07:15.455336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.690 qpair failed and we were unable to recover it.
00:26:24.690 [2024-11-26 21:07:15.455499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.690 [2024-11-26 21:07:15.455529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.690 qpair failed and we were unable to recover it.
00:26:24.690 [2024-11-26 21:07:15.455729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.690 [2024-11-26 21:07:15.455763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.690 qpair failed and we were unable to recover it.
00:26:24.690 [2024-11-26 21:07:15.455897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.690 [2024-11-26 21:07:15.455925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.690 qpair failed and we were unable to recover it.
00:26:24.690 [2024-11-26 21:07:15.456066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.690 [2024-11-26 21:07:15.456110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.690 qpair failed and we were unable to recover it.
00:26:24.690 [2024-11-26 21:07:15.456229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.690 [2024-11-26 21:07:15.456260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.690 qpair failed and we were unable to recover it.
00:26:24.690 [2024-11-26 21:07:15.456440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.690 [2024-11-26 21:07:15.456467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.690 qpair failed and we were unable to recover it.
00:26:24.690 [2024-11-26 21:07:15.456582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.690 [2024-11-26 21:07:15.456609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.690 qpair failed and we were unable to recover it.
00:26:24.690 [2024-11-26 21:07:15.456731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.690 [2024-11-26 21:07:15.456758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.690 qpair failed and we were unable to recover it.
00:26:24.690 [2024-11-26 21:07:15.456867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.690 [2024-11-26 21:07:15.456894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.690 qpair failed and we were unable to recover it.
00:26:24.690 [2024-11-26 21:07:15.457004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.690 [2024-11-26 21:07:15.457031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.690 qpair failed and we were unable to recover it.
00:26:24.690 [2024-11-26 21:07:15.457168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.690 [2024-11-26 21:07:15.457197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.690 qpair failed and we were unable to recover it.
00:26:24.690 [2024-11-26 21:07:15.457321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.690 [2024-11-26 21:07:15.457348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.690 qpair failed and we were unable to recover it.
00:26:24.690 [2024-11-26 21:07:15.457452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.690 [2024-11-26 21:07:15.457478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.690 qpair failed and we were unable to recover it.
00:26:24.690 [2024-11-26 21:07:15.457630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.690 [2024-11-26 21:07:15.457664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.690 qpair failed and we were unable to recover it.
00:26:24.690 [2024-11-26 21:07:15.457832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.690 [2024-11-26 21:07:15.457859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.690 qpair failed and we were unable to recover it.
00:26:24.690 [2024-11-26 21:07:15.457984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.690 [2024-11-26 21:07:15.458024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.690 qpair failed and we were unable to recover it.
00:26:24.690 [2024-11-26 21:07:15.458195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.690 [2024-11-26 21:07:15.458228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.690 qpair failed and we were unable to recover it.
00:26:24.690 [2024-11-26 21:07:15.458360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.690 [2024-11-26 21:07:15.458387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.690 qpair failed and we were unable to recover it.
00:26:24.690 [2024-11-26 21:07:15.458497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.690 [2024-11-26 21:07:15.458524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.690 qpair failed and we were unable to recover it.
00:26:24.690 [2024-11-26 21:07:15.458649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.690 [2024-11-26 21:07:15.458678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.690 qpair failed and we were unable to recover it.
00:26:24.690 [2024-11-26 21:07:15.458876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.690 [2024-11-26 21:07:15.458903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.690 qpair failed and we were unable to recover it.
00:26:24.690 [2024-11-26 21:07:15.459049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.690 [2024-11-26 21:07:15.459079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.690 qpair failed and we were unable to recover it.
00:26:24.690 [2024-11-26 21:07:15.459231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.690 [2024-11-26 21:07:15.459260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.690 qpair failed and we were unable to recover it.
00:26:24.690 [2024-11-26 21:07:15.459421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.690 [2024-11-26 21:07:15.459448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.690 qpair failed and we were unable to recover it.
00:26:24.690 [2024-11-26 21:07:15.459560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.690 [2024-11-26 21:07:15.459588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.690 qpair failed and we were unable to recover it.
00:26:24.690 [2024-11-26 21:07:15.459731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.690 [2024-11-26 21:07:15.459774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.690 qpair failed and we were unable to recover it.
00:26:24.690 [2024-11-26 21:07:15.459913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.690 [2024-11-26 21:07:15.459950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.690 qpair failed and we were unable to recover it.
00:26:24.690 [2024-11-26 21:07:15.460069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.690 [2024-11-26 21:07:15.460096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.690 qpair failed and we were unable to recover it.
00:26:24.690 [2024-11-26 21:07:15.460254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.690 [2024-11-26 21:07:15.460283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.690 qpair failed and we were unable to recover it. 00:26:24.690 [2024-11-26 21:07:15.460429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.690 [2024-11-26 21:07:15.460456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.690 qpair failed and we were unable to recover it. 00:26:24.690 [2024-11-26 21:07:15.460567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.690 [2024-11-26 21:07:15.460593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.690 qpair failed and we were unable to recover it. 00:26:24.690 [2024-11-26 21:07:15.460729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.690 [2024-11-26 21:07:15.460773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.690 qpair failed and we were unable to recover it. 00:26:24.690 [2024-11-26 21:07:15.460881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.690 [2024-11-26 21:07:15.460910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.690 qpair failed and we were unable to recover it. 
00:26:24.690 [2024-11-26 21:07:15.461054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.690 [2024-11-26 21:07:15.461083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.690 qpair failed and we were unable to recover it. 00:26:24.690 [2024-11-26 21:07:15.461191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.690 [2024-11-26 21:07:15.461219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.690 qpair failed and we were unable to recover it. 00:26:24.690 [2024-11-26 21:07:15.461351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.690 [2024-11-26 21:07:15.461378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.690 qpair failed and we were unable to recover it. 00:26:24.690 [2024-11-26 21:07:15.461482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.691 [2024-11-26 21:07:15.461509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.691 qpair failed and we were unable to recover it. 00:26:24.691 [2024-11-26 21:07:15.461652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.691 [2024-11-26 21:07:15.461683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.691 qpair failed and we were unable to recover it. 
00:26:24.691 [2024-11-26 21:07:15.461837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.691 [2024-11-26 21:07:15.461864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.691 qpair failed and we were unable to recover it. 00:26:24.691 [2024-11-26 21:07:15.462000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.691 [2024-11-26 21:07:15.462044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.691 qpair failed and we were unable to recover it. 00:26:24.691 [2024-11-26 21:07:15.462198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.691 [2024-11-26 21:07:15.462228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.691 qpair failed and we were unable to recover it. 00:26:24.691 [2024-11-26 21:07:15.462377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.691 [2024-11-26 21:07:15.462405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.691 qpair failed and we were unable to recover it. 00:26:24.691 [2024-11-26 21:07:15.462545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.691 [2024-11-26 21:07:15.462590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.691 qpair failed and we were unable to recover it. 
00:26:24.691 [2024-11-26 21:07:15.462730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.691 [2024-11-26 21:07:15.462764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.691 qpair failed and we were unable to recover it. 00:26:24.691 [2024-11-26 21:07:15.462901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.691 [2024-11-26 21:07:15.462927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.691 qpair failed and we were unable to recover it. 00:26:24.691 [2024-11-26 21:07:15.463077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.691 [2024-11-26 21:07:15.463107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.691 qpair failed and we were unable to recover it. 00:26:24.691 [2024-11-26 21:07:15.463251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.691 [2024-11-26 21:07:15.463280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.691 qpair failed and we were unable to recover it. 00:26:24.691 [2024-11-26 21:07:15.463416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.691 [2024-11-26 21:07:15.463444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.691 qpair failed and we were unable to recover it. 
00:26:24.691 [2024-11-26 21:07:15.463581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.691 [2024-11-26 21:07:15.463607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.691 qpair failed and we were unable to recover it. 00:26:24.691 [2024-11-26 21:07:15.463789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.691 [2024-11-26 21:07:15.463816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.691 qpair failed and we were unable to recover it. 00:26:24.691 [2024-11-26 21:07:15.463921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.691 [2024-11-26 21:07:15.463950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.691 qpair failed and we were unable to recover it. 00:26:24.691 [2024-11-26 21:07:15.464061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.691 [2024-11-26 21:07:15.464087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.691 qpair failed and we were unable to recover it. 00:26:24.691 [2024-11-26 21:07:15.464224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.691 [2024-11-26 21:07:15.464254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.691 qpair failed and we were unable to recover it. 
00:26:24.691 [2024-11-26 21:07:15.464434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.691 [2024-11-26 21:07:15.464461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.691 qpair failed and we were unable to recover it. 00:26:24.691 [2024-11-26 21:07:15.464594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.691 [2024-11-26 21:07:15.464621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.691 qpair failed and we were unable to recover it. 00:26:24.691 [2024-11-26 21:07:15.464785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.691 [2024-11-26 21:07:15.464812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.691 qpair failed and we were unable to recover it. 00:26:24.691 [2024-11-26 21:07:15.464954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.691 [2024-11-26 21:07:15.464981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.691 qpair failed and we were unable to recover it. 00:26:24.691 [2024-11-26 21:07:15.465143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.691 [2024-11-26 21:07:15.465187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.691 qpair failed and we were unable to recover it. 
00:26:24.691 [2024-11-26 21:07:15.465329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.691 [2024-11-26 21:07:15.465358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.691 qpair failed and we were unable to recover it. 00:26:24.691 [2024-11-26 21:07:15.465506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.691 [2024-11-26 21:07:15.465532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.691 qpair failed and we were unable to recover it. 00:26:24.691 [2024-11-26 21:07:15.465698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.691 [2024-11-26 21:07:15.465737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.691 qpair failed and we were unable to recover it. 00:26:24.691 [2024-11-26 21:07:15.465849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.691 [2024-11-26 21:07:15.465876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.691 qpair failed and we were unable to recover it. 00:26:24.691 [2024-11-26 21:07:15.466008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.691 [2024-11-26 21:07:15.466035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.691 qpair failed and we were unable to recover it. 
00:26:24.691 [2024-11-26 21:07:15.466147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.691 [2024-11-26 21:07:15.466173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.691 qpair failed and we were unable to recover it. 00:26:24.691 [2024-11-26 21:07:15.466285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.691 [2024-11-26 21:07:15.466312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.691 qpair failed and we were unable to recover it. 00:26:24.691 [2024-11-26 21:07:15.466427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.691 [2024-11-26 21:07:15.466455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.691 qpair failed and we were unable to recover it. 00:26:24.691 [2024-11-26 21:07:15.466590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.691 [2024-11-26 21:07:15.466633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.691 qpair failed and we were unable to recover it. 00:26:24.691 [2024-11-26 21:07:15.466788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.691 [2024-11-26 21:07:15.466821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.691 qpair failed and we were unable to recover it. 
00:26:24.691 [2024-11-26 21:07:15.466959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.691 [2024-11-26 21:07:15.466987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.691 qpair failed and we were unable to recover it. 00:26:24.691 [2024-11-26 21:07:15.467113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.691 [2024-11-26 21:07:15.467157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.691 qpair failed and we were unable to recover it. 00:26:24.691 [2024-11-26 21:07:15.467279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.691 [2024-11-26 21:07:15.467308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.691 qpair failed and we were unable to recover it. 00:26:24.691 [2024-11-26 21:07:15.467462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.691 [2024-11-26 21:07:15.467489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.691 qpair failed and we were unable to recover it. 00:26:24.691 [2024-11-26 21:07:15.467596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.691 [2024-11-26 21:07:15.467622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.691 qpair failed and we were unable to recover it. 
00:26:24.691 [2024-11-26 21:07:15.467810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.691 [2024-11-26 21:07:15.467838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.691 qpair failed and we were unable to recover it. 00:26:24.691 [2024-11-26 21:07:15.467954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.691 [2024-11-26 21:07:15.467980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.691 qpair failed and we were unable to recover it. 00:26:24.692 [2024-11-26 21:07:15.468085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.692 [2024-11-26 21:07:15.468111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.692 qpair failed and we were unable to recover it. 00:26:24.692 [2024-11-26 21:07:15.468256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.692 [2024-11-26 21:07:15.468283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.692 qpair failed and we were unable to recover it. 00:26:24.692 [2024-11-26 21:07:15.468416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.692 [2024-11-26 21:07:15.468442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.692 qpair failed and we were unable to recover it. 
00:26:24.692 [2024-11-26 21:07:15.468550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.692 [2024-11-26 21:07:15.468578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.692 qpair failed and we were unable to recover it. 00:26:24.692 [2024-11-26 21:07:15.468712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.692 [2024-11-26 21:07:15.468739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.692 qpair failed and we were unable to recover it. 00:26:24.692 [2024-11-26 21:07:15.468846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.692 [2024-11-26 21:07:15.468872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.692 qpair failed and we were unable to recover it. 00:26:24.692 [2024-11-26 21:07:15.469064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.692 [2024-11-26 21:07:15.469094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.692 qpair failed and we were unable to recover it. 00:26:24.692 [2024-11-26 21:07:15.469235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.692 [2024-11-26 21:07:15.469265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.692 qpair failed and we were unable to recover it. 
00:26:24.692 [2024-11-26 21:07:15.469401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.692 [2024-11-26 21:07:15.469428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.692 qpair failed and we were unable to recover it. 00:26:24.692 [2024-11-26 21:07:15.469564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.692 [2024-11-26 21:07:15.469591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.692 qpair failed and we were unable to recover it. 00:26:24.692 [2024-11-26 21:07:15.469790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.692 [2024-11-26 21:07:15.469818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.692 qpair failed and we were unable to recover it. 00:26:24.692 [2024-11-26 21:07:15.469931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.692 [2024-11-26 21:07:15.469957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.692 qpair failed and we were unable to recover it. 00:26:24.692 [2024-11-26 21:07:15.470062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.692 [2024-11-26 21:07:15.470088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.692 qpair failed and we were unable to recover it. 
00:26:24.692 [2024-11-26 21:07:15.470188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.692 [2024-11-26 21:07:15.470233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.692 qpair failed and we were unable to recover it. 00:26:24.692 [2024-11-26 21:07:15.470369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.692 [2024-11-26 21:07:15.470396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.692 qpair failed and we were unable to recover it. 00:26:24.692 [2024-11-26 21:07:15.470529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.692 [2024-11-26 21:07:15.470556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.692 qpair failed and we were unable to recover it. 00:26:24.692 [2024-11-26 21:07:15.470684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.692 [2024-11-26 21:07:15.470722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.692 qpair failed and we were unable to recover it. 00:26:24.692 [2024-11-26 21:07:15.470856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.692 [2024-11-26 21:07:15.470884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.692 qpair failed and we were unable to recover it. 
00:26:24.692 [2024-11-26 21:07:15.471020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.692 [2024-11-26 21:07:15.471065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.692 qpair failed and we were unable to recover it. 00:26:24.692 [2024-11-26 21:07:15.471181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.692 [2024-11-26 21:07:15.471211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.692 qpair failed and we were unable to recover it. 00:26:24.692 [2024-11-26 21:07:15.471383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.692 [2024-11-26 21:07:15.471410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.692 qpair failed and we were unable to recover it. 00:26:24.692 [2024-11-26 21:07:15.471540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.692 [2024-11-26 21:07:15.471567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.692 qpair failed and we were unable to recover it. 00:26:24.692 [2024-11-26 21:07:15.471730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.692 [2024-11-26 21:07:15.471757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.692 qpair failed and we were unable to recover it. 
00:26:24.692 [2024-11-26 21:07:15.471870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.692 [2024-11-26 21:07:15.471897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.692 qpair failed and we were unable to recover it. 00:26:24.692 [2024-11-26 21:07:15.472007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.692 [2024-11-26 21:07:15.472033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.692 qpair failed and we were unable to recover it. 00:26:24.692 [2024-11-26 21:07:15.472226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.692 [2024-11-26 21:07:15.472253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.692 qpair failed and we were unable to recover it. 00:26:24.692 [2024-11-26 21:07:15.472382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.692 [2024-11-26 21:07:15.472409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.692 qpair failed and we were unable to recover it. 00:26:24.692 [2024-11-26 21:07:15.472520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.692 [2024-11-26 21:07:15.472546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.692 qpair failed and we were unable to recover it. 
00:26:24.692 [2024-11-26 21:07:15.472709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.692 [2024-11-26 21:07:15.472754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.692 qpair failed and we were unable to recover it. 00:26:24.692 [2024-11-26 21:07:15.472870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.692 [2024-11-26 21:07:15.472898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.692 qpair failed and we were unable to recover it. 00:26:24.692 [2024-11-26 21:07:15.473040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.692 [2024-11-26 21:07:15.473066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.692 qpair failed and we were unable to recover it. 00:26:24.692 [2024-11-26 21:07:15.473219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.692 [2024-11-26 21:07:15.473249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.692 qpair failed and we were unable to recover it. 00:26:24.692 [2024-11-26 21:07:15.473369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.692 [2024-11-26 21:07:15.473396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.692 qpair failed and we were unable to recover it. 
00:26:24.692 [2024-11-26 21:07:15.473569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.692 [2024-11-26 21:07:15.473596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.692 qpair failed and we were unable to recover it. 00:26:24.692 [2024-11-26 21:07:15.473772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.692 [2024-11-26 21:07:15.473799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.692 qpair failed and we were unable to recover it. 00:26:24.692 [2024-11-26 21:07:15.473909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.692 [2024-11-26 21:07:15.473936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.692 qpair failed and we were unable to recover it. 00:26:24.692 [2024-11-26 21:07:15.474078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.692 [2024-11-26 21:07:15.474120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.692 qpair failed and we were unable to recover it. 00:26:24.692 [2024-11-26 21:07:15.474308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.692 [2024-11-26 21:07:15.474334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.693 qpair failed and we were unable to recover it. 
00:26:24.693 [2024-11-26 21:07:15.474443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.693 [2024-11-26 21:07:15.474470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.693 qpair failed and we were unable to recover it. 00:26:24.693 [2024-11-26 21:07:15.474611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.693 [2024-11-26 21:07:15.474638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.693 qpair failed and we were unable to recover it. 00:26:24.693 [2024-11-26 21:07:15.474786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.693 [2024-11-26 21:07:15.474813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.693 qpair failed and we were unable to recover it. 00:26:24.693 [2024-11-26 21:07:15.474916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.693 [2024-11-26 21:07:15.474943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.693 qpair failed and we were unable to recover it. 00:26:24.693 [2024-11-26 21:07:15.475050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.693 [2024-11-26 21:07:15.475076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.693 qpair failed and we were unable to recover it. 
00:26:24.693 [2024-11-26 21:07:15.475260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.693 [2024-11-26 21:07:15.475289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.693 qpair failed and we were unable to recover it. 00:26:24.693 [2024-11-26 21:07:15.475436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.693 [2024-11-26 21:07:15.475463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.693 qpair failed and we were unable to recover it. 00:26:24.693 [2024-11-26 21:07:15.475581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.693 [2024-11-26 21:07:15.475608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.693 qpair failed and we were unable to recover it. 00:26:24.693 [2024-11-26 21:07:15.475719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.693 [2024-11-26 21:07:15.475746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.693 qpair failed and we were unable to recover it. 00:26:24.693 [2024-11-26 21:07:15.475911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.693 [2024-11-26 21:07:15.475938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.693 qpair failed and we were unable to recover it. 
00:26:24.693 [2024-11-26 21:07:15.476038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.693 [2024-11-26 21:07:15.476082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.693 qpair failed and we were unable to recover it.
00:26:24.693 [2024-11-26 21:07:15.476195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.693 [2024-11-26 21:07:15.476225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.693 qpair failed and we were unable to recover it.
00:26:24.693 [2024-11-26 21:07:15.476360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.693 [2024-11-26 21:07:15.476387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.693 qpair failed and we were unable to recover it.
00:26:24.693 [2024-11-26 21:07:15.476502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.693 [2024-11-26 21:07:15.476529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.693 qpair failed and we were unable to recover it.
00:26:24.693 [2024-11-26 21:07:15.476696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.693 [2024-11-26 21:07:15.476724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.693 qpair failed and we were unable to recover it.
00:26:24.693 [2024-11-26 21:07:15.476851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.693 [2024-11-26 21:07:15.476877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.693 qpair failed and we were unable to recover it.
00:26:24.693 [2024-11-26 21:07:15.476984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.693 [2024-11-26 21:07:15.477011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.693 qpair failed and we were unable to recover it.
00:26:24.693 [2024-11-26 21:07:15.477107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.693 [2024-11-26 21:07:15.477134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.693 qpair failed and we were unable to recover it.
00:26:24.693 [2024-11-26 21:07:15.477269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.693 [2024-11-26 21:07:15.477296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.693 qpair failed and we were unable to recover it.
00:26:24.693 [2024-11-26 21:07:15.477449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.693 [2024-11-26 21:07:15.477478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.693 qpair failed and we were unable to recover it.
00:26:24.693 [2024-11-26 21:07:15.477593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.693 [2024-11-26 21:07:15.477622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.693 qpair failed and we were unable to recover it.
00:26:24.693 [2024-11-26 21:07:15.477781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.693 [2024-11-26 21:07:15.477808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.693 qpair failed and we were unable to recover it.
00:26:24.693 [2024-11-26 21:07:15.477949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.693 [2024-11-26 21:07:15.477980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.693 qpair failed and we were unable to recover it.
00:26:24.693 [2024-11-26 21:07:15.478110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.693 [2024-11-26 21:07:15.478140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.693 qpair failed and we were unable to recover it.
00:26:24.693 [2024-11-26 21:07:15.478297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.693 [2024-11-26 21:07:15.478324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.693 qpair failed and we were unable to recover it.
00:26:24.693 [2024-11-26 21:07:15.478425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.693 [2024-11-26 21:07:15.478451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.693 qpair failed and we were unable to recover it.
00:26:24.693 [2024-11-26 21:07:15.478561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.693 [2024-11-26 21:07:15.478588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.693 qpair failed and we were unable to recover it.
00:26:24.693 [2024-11-26 21:07:15.478730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.693 [2024-11-26 21:07:15.478756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.693 qpair failed and we were unable to recover it.
00:26:24.693 [2024-11-26 21:07:15.478893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.693 [2024-11-26 21:07:15.478920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.693 qpair failed and we were unable to recover it.
00:26:24.693 [2024-11-26 21:07:15.479018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.693 [2024-11-26 21:07:15.479045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.693 qpair failed and we were unable to recover it.
00:26:24.693 [2024-11-26 21:07:15.479158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.693 [2024-11-26 21:07:15.479185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.693 qpair failed and we were unable to recover it.
00:26:24.693 [2024-11-26 21:07:15.479286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.693 [2024-11-26 21:07:15.479312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.693 qpair failed and we were unable to recover it.
00:26:24.693 [2024-11-26 21:07:15.479446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.693 [2024-11-26 21:07:15.479473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.693 qpair failed and we were unable to recover it.
00:26:24.693 [2024-11-26 21:07:15.479581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.693 [2024-11-26 21:07:15.479608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.694 qpair failed and we were unable to recover it.
00:26:24.694 [2024-11-26 21:07:15.479716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.694 [2024-11-26 21:07:15.479743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.694 qpair failed and we were unable to recover it.
00:26:24.694 [2024-11-26 21:07:15.479908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.694 [2024-11-26 21:07:15.479938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.694 qpair failed and we were unable to recover it.
00:26:24.694 [2024-11-26 21:07:15.480108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.694 [2024-11-26 21:07:15.480135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.694 qpair failed and we were unable to recover it.
00:26:24.694 [2024-11-26 21:07:15.480235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.694 [2024-11-26 21:07:15.480262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.694 qpair failed and we were unable to recover it.
00:26:24.694 [2024-11-26 21:07:15.480426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.694 [2024-11-26 21:07:15.480456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.694 qpair failed and we were unable to recover it.
00:26:24.694 [2024-11-26 21:07:15.480588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.694 [2024-11-26 21:07:15.480631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.694 qpair failed and we were unable to recover it.
00:26:24.694 [2024-11-26 21:07:15.480792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.694 [2024-11-26 21:07:15.480819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.694 qpair failed and we were unable to recover it.
00:26:24.694 [2024-11-26 21:07:15.480932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.694 [2024-11-26 21:07:15.480958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.694 qpair failed and we were unable to recover it.
00:26:24.694 [2024-11-26 21:07:15.481154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.694 [2024-11-26 21:07:15.481180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.694 qpair failed and we were unable to recover it.
00:26:24.694 [2024-11-26 21:07:15.481290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.694 [2024-11-26 21:07:15.481336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.694 qpair failed and we were unable to recover it.
00:26:24.694 [2024-11-26 21:07:15.481469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.694 [2024-11-26 21:07:15.481498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.694 qpair failed and we were unable to recover it.
00:26:24.694 [2024-11-26 21:07:15.481650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.694 [2024-11-26 21:07:15.481677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.694 qpair failed and we were unable to recover it.
00:26:24.694 [2024-11-26 21:07:15.481806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.694 [2024-11-26 21:07:15.481833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.694 qpair failed and we were unable to recover it.
00:26:24.694 [2024-11-26 21:07:15.482004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.694 [2024-11-26 21:07:15.482030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.694 qpair failed and we were unable to recover it.
00:26:24.694 [2024-11-26 21:07:15.482167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.694 [2024-11-26 21:07:15.482195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.694 qpair failed and we were unable to recover it.
00:26:24.694 [2024-11-26 21:07:15.482369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.694 [2024-11-26 21:07:15.482399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.694 qpair failed and we were unable to recover it.
00:26:24.694 [2024-11-26 21:07:15.482537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.694 [2024-11-26 21:07:15.482566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.694 qpair failed and we were unable to recover it.
00:26:24.694 [2024-11-26 21:07:15.482702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.694 [2024-11-26 21:07:15.482729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.694 qpair failed and we were unable to recover it.
00:26:24.694 [2024-11-26 21:07:15.482873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.694 [2024-11-26 21:07:15.482917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.694 qpair failed and we were unable to recover it.
00:26:24.694 [2024-11-26 21:07:15.483041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.694 [2024-11-26 21:07:15.483071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.694 qpair failed and we were unable to recover it.
00:26:24.694 [2024-11-26 21:07:15.483230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.694 [2024-11-26 21:07:15.483257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.694 qpair failed and we were unable to recover it.
00:26:24.694 [2024-11-26 21:07:15.483391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.694 [2024-11-26 21:07:15.483436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.694 qpair failed and we were unable to recover it.
00:26:24.694 [2024-11-26 21:07:15.483597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.694 [2024-11-26 21:07:15.483624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.694 qpair failed and we were unable to recover it.
00:26:24.694 [2024-11-26 21:07:15.483734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.694 [2024-11-26 21:07:15.483762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.694 qpair failed and we were unable to recover it.
00:26:24.694 [2024-11-26 21:07:15.483879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.694 [2024-11-26 21:07:15.483906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.694 qpair failed and we were unable to recover it.
00:26:24.694 [2024-11-26 21:07:15.484040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.694 [2024-11-26 21:07:15.484067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.694 qpair failed and we were unable to recover it.
00:26:24.694 [2024-11-26 21:07:15.484204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.694 [2024-11-26 21:07:15.484231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.694 qpair failed and we were unable to recover it.
00:26:24.694 [2024-11-26 21:07:15.484347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.694 [2024-11-26 21:07:15.484374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.694 qpair failed and we were unable to recover it.
00:26:24.694 [2024-11-26 21:07:15.484505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.694 [2024-11-26 21:07:15.484535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.694 qpair failed and we were unable to recover it.
00:26:24.694 [2024-11-26 21:07:15.484700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.694 [2024-11-26 21:07:15.484728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.694 qpair failed and we were unable to recover it.
00:26:24.694 [2024-11-26 21:07:15.484869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.694 [2024-11-26 21:07:15.484895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.694 qpair failed and we were unable to recover it.
00:26:24.694 [2024-11-26 21:07:15.485005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.694 [2024-11-26 21:07:15.485033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.694 qpair failed and we were unable to recover it.
00:26:24.694 [2024-11-26 21:07:15.485167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.694 [2024-11-26 21:07:15.485194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.694 qpair failed and we were unable to recover it.
00:26:24.694 [2024-11-26 21:07:15.485331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.694 [2024-11-26 21:07:15.485357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.694 qpair failed and we were unable to recover it.
00:26:24.694 [2024-11-26 21:07:15.485484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.694 [2024-11-26 21:07:15.485511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.694 qpair failed and we were unable to recover it.
00:26:24.694 [2024-11-26 21:07:15.485637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.694 [2024-11-26 21:07:15.485666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.694 qpair failed and we were unable to recover it.
00:26:24.694 [2024-11-26 21:07:15.485835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.694 [2024-11-26 21:07:15.485861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.694 qpair failed and we were unable to recover it.
00:26:24.694 [2024-11-26 21:07:15.485963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.694 [2024-11-26 21:07:15.485990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.694 qpair failed and we were unable to recover it.
00:26:24.694 [2024-11-26 21:07:15.486132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.695 [2024-11-26 21:07:15.486159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.695 qpair failed and we were unable to recover it.
00:26:24.695 [2024-11-26 21:07:15.486272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.695 [2024-11-26 21:07:15.486316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.695 qpair failed and we were unable to recover it.
00:26:24.695 [2024-11-26 21:07:15.486432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.695 [2024-11-26 21:07:15.486461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.695 qpair failed and we were unable to recover it.
00:26:24.695 [2024-11-26 21:07:15.486594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.695 [2024-11-26 21:07:15.486621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.695 qpair failed and we were unable to recover it.
00:26:24.695 [2024-11-26 21:07:15.486753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.695 [2024-11-26 21:07:15.486780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.695 qpair failed and we were unable to recover it.
00:26:24.695 [2024-11-26 21:07:15.486972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.695 [2024-11-26 21:07:15.487002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.695 qpair failed and we were unable to recover it.
00:26:24.695 [2024-11-26 21:07:15.487155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.695 [2024-11-26 21:07:15.487182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.695 qpair failed and we were unable to recover it.
00:26:24.695 [2024-11-26 21:07:15.487280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.695 [2024-11-26 21:07:15.487307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.695 qpair failed and we were unable to recover it.
00:26:24.695 [2024-11-26 21:07:15.487461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.695 [2024-11-26 21:07:15.487490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.695 qpair failed and we were unable to recover it.
00:26:24.695 [2024-11-26 21:07:15.487671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.695 [2024-11-26 21:07:15.487711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.695 qpair failed and we were unable to recover it.
00:26:24.695 [2024-11-26 21:07:15.487864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.695 [2024-11-26 21:07:15.487894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.695 qpair failed and we were unable to recover it.
00:26:24.695 [2024-11-26 21:07:15.488038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.695 [2024-11-26 21:07:15.488067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.695 qpair failed and we were unable to recover it.
00:26:24.695 [2024-11-26 21:07:15.488207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.695 [2024-11-26 21:07:15.488234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.695 qpair failed and we were unable to recover it.
00:26:24.695 [2024-11-26 21:07:15.488378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.695 [2024-11-26 21:07:15.488405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.695 qpair failed and we were unable to recover it.
00:26:24.695 [2024-11-26 21:07:15.488566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.695 [2024-11-26 21:07:15.488595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.695 qpair failed and we were unable to recover it.
00:26:24.695 [2024-11-26 21:07:15.488747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.695 [2024-11-26 21:07:15.488774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.695 qpair failed and we were unable to recover it.
00:26:24.695 [2024-11-26 21:07:15.488910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.695 [2024-11-26 21:07:15.488936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.695 qpair failed and we were unable to recover it.
00:26:24.695 [2024-11-26 21:07:15.489111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.695 [2024-11-26 21:07:15.489137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.695 qpair failed and we were unable to recover it.
00:26:24.695 [2024-11-26 21:07:15.489269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.695 [2024-11-26 21:07:15.489300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.695 qpair failed and we were unable to recover it.
00:26:24.695 [2024-11-26 21:07:15.489438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.695 [2024-11-26 21:07:15.489482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.695 qpair failed and we were unable to recover it.
00:26:24.695 [2024-11-26 21:07:15.489632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.695 [2024-11-26 21:07:15.489662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.695 qpair failed and we were unable to recover it.
00:26:24.695 [2024-11-26 21:07:15.489828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.695 [2024-11-26 21:07:15.489854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.695 qpair failed and we were unable to recover it.
00:26:24.695 [2024-11-26 21:07:15.489967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.695 [2024-11-26 21:07:15.490009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.695 qpair failed and we were unable to recover it.
00:26:24.695 [2024-11-26 21:07:15.490148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.695 [2024-11-26 21:07:15.490176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.695 qpair failed and we were unable to recover it.
00:26:24.695 [2024-11-26 21:07:15.490361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.695 [2024-11-26 21:07:15.490388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.695 qpair failed and we were unable to recover it.
00:26:24.695 [2024-11-26 21:07:15.490494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.695 [2024-11-26 21:07:15.490540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.695 qpair failed and we were unable to recover it.
00:26:24.695 [2024-11-26 21:07:15.490704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.695 [2024-11-26 21:07:15.490752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.695 qpair failed and we were unable to recover it.
00:26:24.695 [2024-11-26 21:07:15.490867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.695 [2024-11-26 21:07:15.490895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.695 qpair failed and we were unable to recover it.
00:26:24.695 [2024-11-26 21:07:15.491052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.695 [2024-11-26 21:07:15.491111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.695 qpair failed and we were unable to recover it.
00:26:24.695 [2024-11-26 21:07:15.491268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.695 [2024-11-26 21:07:15.491301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.695 qpair failed and we were unable to recover it.
00:26:24.695 [2024-11-26 21:07:15.491456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.695 [2024-11-26 21:07:15.491484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.695 qpair failed and we were unable to recover it.
00:26:24.695 [2024-11-26 21:07:15.491619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.695 [2024-11-26 21:07:15.491663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.695 qpair failed and we were unable to recover it.
00:26:24.695 [2024-11-26 21:07:15.491815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.695 [2024-11-26 21:07:15.491843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.695 qpair failed and we were unable to recover it.
00:26:24.695 [2024-11-26 21:07:15.491980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.695 [2024-11-26 21:07:15.492007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.695 qpair failed and we were unable to recover it. 00:26:24.695 [2024-11-26 21:07:15.492125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.695 [2024-11-26 21:07:15.492152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.696 qpair failed and we were unable to recover it. 00:26:24.696 [2024-11-26 21:07:15.492267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.696 [2024-11-26 21:07:15.492295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.696 qpair failed and we were unable to recover it. 00:26:24.696 [2024-11-26 21:07:15.492430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.696 [2024-11-26 21:07:15.492457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.696 qpair failed and we were unable to recover it. 00:26:24.696 [2024-11-26 21:07:15.492597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.696 [2024-11-26 21:07:15.492626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.696 qpair failed and we were unable to recover it. 
00:26:24.696 [2024-11-26 21:07:15.492802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.696 [2024-11-26 21:07:15.492830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.696 qpair failed and we were unable to recover it. 00:26:24.696 [2024-11-26 21:07:15.492943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.696 [2024-11-26 21:07:15.492970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.696 qpair failed and we were unable to recover it. 00:26:24.696 [2024-11-26 21:07:15.493104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.696 [2024-11-26 21:07:15.493153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.696 qpair failed and we were unable to recover it. 00:26:24.696 [2024-11-26 21:07:15.493269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.696 [2024-11-26 21:07:15.493299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.696 qpair failed and we were unable to recover it. 00:26:24.696 [2024-11-26 21:07:15.493464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.696 [2024-11-26 21:07:15.493491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.696 qpair failed and we were unable to recover it. 
00:26:24.696 [2024-11-26 21:07:15.493597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.696 [2024-11-26 21:07:15.493625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.696 qpair failed and we were unable to recover it. 00:26:24.696 [2024-11-26 21:07:15.493822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.696 [2024-11-26 21:07:15.493849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.696 qpair failed and we were unable to recover it. 00:26:24.696 [2024-11-26 21:07:15.493986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.696 [2024-11-26 21:07:15.494018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.696 qpair failed and we were unable to recover it. 00:26:24.696 [2024-11-26 21:07:15.494165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.696 [2024-11-26 21:07:15.494195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.696 qpair failed and we were unable to recover it. 00:26:24.696 [2024-11-26 21:07:15.494339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.696 [2024-11-26 21:07:15.494386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.696 qpair failed and we were unable to recover it. 
00:26:24.696 [2024-11-26 21:07:15.494542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.696 [2024-11-26 21:07:15.494579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.696 qpair failed and we were unable to recover it. 00:26:24.696 [2024-11-26 21:07:15.494715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.696 [2024-11-26 21:07:15.494748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.696 qpair failed and we were unable to recover it. 00:26:24.696 [2024-11-26 21:07:15.494905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.696 [2024-11-26 21:07:15.494948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.696 qpair failed and we were unable to recover it. 00:26:24.696 [2024-11-26 21:07:15.495110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.696 [2024-11-26 21:07:15.495143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.696 qpair failed and we were unable to recover it. 00:26:24.696 [2024-11-26 21:07:15.495265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.696 [2024-11-26 21:07:15.495292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.696 qpair failed and we were unable to recover it. 
00:26:24.696 [2024-11-26 21:07:15.495420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.696 [2024-11-26 21:07:15.495447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.696 qpair failed and we were unable to recover it. 00:26:24.696 [2024-11-26 21:07:15.495584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.696 [2024-11-26 21:07:15.495612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.696 qpair failed and we were unable to recover it. 00:26:24.696 [2024-11-26 21:07:15.495718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.696 [2024-11-26 21:07:15.495748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.696 qpair failed and we were unable to recover it. 00:26:24.696 [2024-11-26 21:07:15.495884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.696 [2024-11-26 21:07:15.495919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.696 qpair failed and we were unable to recover it. 00:26:24.696 [2024-11-26 21:07:15.496085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.696 [2024-11-26 21:07:15.496112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.696 qpair failed and we were unable to recover it. 
00:26:24.696 [2024-11-26 21:07:15.496226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.696 [2024-11-26 21:07:15.496254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.696 qpair failed and we were unable to recover it. 00:26:24.696 [2024-11-26 21:07:15.496423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.696 [2024-11-26 21:07:15.496454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.696 qpair failed and we were unable to recover it. 00:26:24.696 [2024-11-26 21:07:15.496620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.696 [2024-11-26 21:07:15.496654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.696 qpair failed and we were unable to recover it. 00:26:24.696 [2024-11-26 21:07:15.496792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.696 [2024-11-26 21:07:15.496820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.696 qpair failed and we were unable to recover it. 00:26:24.696 [2024-11-26 21:07:15.496997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.696 [2024-11-26 21:07:15.497027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.696 qpair failed and we were unable to recover it. 
00:26:24.696 [2024-11-26 21:07:15.497188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.696 [2024-11-26 21:07:15.497216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.696 qpair failed and we were unable to recover it. 00:26:24.696 [2024-11-26 21:07:15.497337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.696 [2024-11-26 21:07:15.497365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.696 qpair failed and we were unable to recover it. 00:26:24.696 [2024-11-26 21:07:15.497520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.696 [2024-11-26 21:07:15.497553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.696 qpair failed and we were unable to recover it. 00:26:24.696 [2024-11-26 21:07:15.497712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.696 [2024-11-26 21:07:15.497749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.696 qpair failed and we were unable to recover it. 00:26:24.696 [2024-11-26 21:07:15.497857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.697 [2024-11-26 21:07:15.497884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.697 qpair failed and we were unable to recover it. 
00:26:24.697 [2024-11-26 21:07:15.498038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.697 [2024-11-26 21:07:15.498082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.697 qpair failed and we were unable to recover it. 00:26:24.697 [2024-11-26 21:07:15.498250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.697 [2024-11-26 21:07:15.498282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.697 qpair failed and we were unable to recover it. 00:26:24.697 [2024-11-26 21:07:15.498434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.697 [2024-11-26 21:07:15.498463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.697 qpair failed and we were unable to recover it. 00:26:24.697 [2024-11-26 21:07:15.498581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.697 [2024-11-26 21:07:15.498612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.697 qpair failed and we were unable to recover it. 00:26:24.697 [2024-11-26 21:07:15.498792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.697 [2024-11-26 21:07:15.498832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.697 qpair failed and we were unable to recover it. 
00:26:24.697 [2024-11-26 21:07:15.498955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.697 [2024-11-26 21:07:15.498988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.697 qpair failed and we were unable to recover it. 00:26:24.697 [2024-11-26 21:07:15.499108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.697 [2024-11-26 21:07:15.499153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.697 qpair failed and we were unable to recover it. 00:26:24.697 [2024-11-26 21:07:15.499305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.697 [2024-11-26 21:07:15.499353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.697 qpair failed and we were unable to recover it. 00:26:24.697 [2024-11-26 21:07:15.499459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.697 [2024-11-26 21:07:15.499486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.697 qpair failed and we were unable to recover it. 00:26:24.697 [2024-11-26 21:07:15.499620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.697 [2024-11-26 21:07:15.499647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.697 qpair failed and we were unable to recover it. 
00:26:24.697 [2024-11-26 21:07:15.499762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.697 [2024-11-26 21:07:15.499789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.697 qpair failed and we were unable to recover it. 00:26:24.697 [2024-11-26 21:07:15.499920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.697 [2024-11-26 21:07:15.499957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.697 qpair failed and we were unable to recover it. 00:26:24.697 [2024-11-26 21:07:15.500090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.697 [2024-11-26 21:07:15.500117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.697 qpair failed and we were unable to recover it. 00:26:24.697 [2024-11-26 21:07:15.500261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.697 [2024-11-26 21:07:15.500288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.697 qpair failed and we were unable to recover it. 00:26:24.697 [2024-11-26 21:07:15.500432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.697 [2024-11-26 21:07:15.500459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.697 qpair failed and we were unable to recover it. 
00:26:24.697 [2024-11-26 21:07:15.500628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.697 [2024-11-26 21:07:15.500657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.697 qpair failed and we were unable to recover it. 00:26:24.697 [2024-11-26 21:07:15.500815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.697 [2024-11-26 21:07:15.500844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.697 qpair failed and we were unable to recover it. 00:26:24.697 [2024-11-26 21:07:15.500982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.697 [2024-11-26 21:07:15.501010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.697 qpair failed and we were unable to recover it. 00:26:24.697 [2024-11-26 21:07:15.501120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.697 [2024-11-26 21:07:15.501159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.697 qpair failed and we were unable to recover it. 00:26:24.697 [2024-11-26 21:07:15.501329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.697 [2024-11-26 21:07:15.501357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.697 qpair failed and we were unable to recover it. 
00:26:24.697 [2024-11-26 21:07:15.501498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.697 [2024-11-26 21:07:15.501526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.697 qpair failed and we were unable to recover it. 00:26:24.697 [2024-11-26 21:07:15.501647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.697 [2024-11-26 21:07:15.501676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.697 qpair failed and we were unable to recover it. 00:26:24.697 [2024-11-26 21:07:15.501821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.697 [2024-11-26 21:07:15.501849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.697 qpair failed and we were unable to recover it. 00:26:24.697 [2024-11-26 21:07:15.502010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.697 [2024-11-26 21:07:15.502054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.697 qpair failed and we were unable to recover it. 00:26:24.697 [2024-11-26 21:07:15.502239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.697 [2024-11-26 21:07:15.502283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.697 qpair failed and we were unable to recover it. 
00:26:24.697 [2024-11-26 21:07:15.502438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.697 [2024-11-26 21:07:15.502483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.697 qpair failed and we were unable to recover it. 00:26:24.697 [2024-11-26 21:07:15.502623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.697 [2024-11-26 21:07:15.502649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.697 qpair failed and we were unable to recover it. 00:26:24.697 [2024-11-26 21:07:15.502814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.697 [2024-11-26 21:07:15.502845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.697 qpair failed and we were unable to recover it. 00:26:24.697 [2024-11-26 21:07:15.502956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.697 [2024-11-26 21:07:15.503000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.697 qpair failed and we were unable to recover it. 00:26:24.697 [2024-11-26 21:07:15.503180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.697 [2024-11-26 21:07:15.503211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.697 qpair failed and we were unable to recover it. 
00:26:24.697 [2024-11-26 21:07:15.503381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.697 [2024-11-26 21:07:15.503416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.697 qpair failed and we were unable to recover it. 00:26:24.697 [2024-11-26 21:07:15.503572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.697 [2024-11-26 21:07:15.503623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.697 qpair failed and we were unable to recover it. 00:26:24.697 [2024-11-26 21:07:15.503799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.697 [2024-11-26 21:07:15.503829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.697 qpair failed and we were unable to recover it. 00:26:24.697 [2024-11-26 21:07:15.503961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.697 [2024-11-26 21:07:15.503989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.697 qpair failed and we were unable to recover it. 00:26:24.697 [2024-11-26 21:07:15.504123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.697 [2024-11-26 21:07:15.504155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.697 qpair failed and we were unable to recover it. 
00:26:24.697 [2024-11-26 21:07:15.504335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.697 [2024-11-26 21:07:15.504365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.697 qpair failed and we were unable to recover it. 00:26:24.697 [2024-11-26 21:07:15.504518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.697 [2024-11-26 21:07:15.504549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.697 qpair failed and we were unable to recover it. 00:26:24.698 [2024-11-26 21:07:15.504709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.698 [2024-11-26 21:07:15.504756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.698 qpair failed and we were unable to recover it. 00:26:24.698 [2024-11-26 21:07:15.504898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.698 [2024-11-26 21:07:15.504928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.698 qpair failed and we were unable to recover it. 00:26:24.698 [2024-11-26 21:07:15.505116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.698 [2024-11-26 21:07:15.505180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.698 qpair failed and we were unable to recover it. 
00:26:24.698 [2024-11-26 21:07:15.505308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.698 [2024-11-26 21:07:15.505344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.698 qpair failed and we were unable to recover it. 00:26:24.698 [2024-11-26 21:07:15.505472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.698 [2024-11-26 21:07:15.505502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.698 qpair failed and we were unable to recover it. 00:26:24.698 [2024-11-26 21:07:15.505697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.698 [2024-11-26 21:07:15.505725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.698 qpair failed and we were unable to recover it. 00:26:24.698 [2024-11-26 21:07:15.505828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.698 [2024-11-26 21:07:15.505855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.698 qpair failed and we were unable to recover it. 00:26:24.698 [2024-11-26 21:07:15.505989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.698 [2024-11-26 21:07:15.506020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.698 qpair failed and we were unable to recover it. 
00:26:24.698 [2024-11-26 21:07:15.506164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.698 [2024-11-26 21:07:15.506209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.698 qpair failed and we were unable to recover it. 00:26:24.698 [2024-11-26 21:07:15.506363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.698 [2024-11-26 21:07:15.506393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.698 qpair failed and we were unable to recover it. 00:26:24.698 [2024-11-26 21:07:15.506569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.698 [2024-11-26 21:07:15.506599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.698 qpair failed and we were unable to recover it. 00:26:24.698 [2024-11-26 21:07:15.506733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.698 [2024-11-26 21:07:15.506761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.698 qpair failed and we were unable to recover it. 00:26:24.698 [2024-11-26 21:07:15.506865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.698 [2024-11-26 21:07:15.506892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.698 qpair failed and we were unable to recover it. 
00:26:24.698 [2024-11-26 21:07:15.507072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.698 [2024-11-26 21:07:15.507130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.698 qpair failed and we were unable to recover it. 00:26:24.698 [2024-11-26 21:07:15.507352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.698 [2024-11-26 21:07:15.507384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.698 qpair failed and we were unable to recover it. 00:26:24.698 [2024-11-26 21:07:15.507515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.698 [2024-11-26 21:07:15.507559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.698 qpair failed and we were unable to recover it. 00:26:24.698 [2024-11-26 21:07:15.507732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.698 [2024-11-26 21:07:15.507762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.698 qpair failed and we were unable to recover it. 00:26:24.698 [2024-11-26 21:07:15.507924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.698 [2024-11-26 21:07:15.507952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.698 qpair failed and we were unable to recover it. 
00:26:24.698 [2024-11-26 21:07:15.508172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.698 [2024-11-26 21:07:15.508202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.698 qpair failed and we were unable to recover it. 00:26:24.698 [2024-11-26 21:07:15.508450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.698 [2024-11-26 21:07:15.508501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.698 qpair failed and we were unable to recover it. 00:26:24.698 [2024-11-26 21:07:15.508683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.698 [2024-11-26 21:07:15.508720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.698 qpair failed and we were unable to recover it. 00:26:24.698 [2024-11-26 21:07:15.508833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.698 [2024-11-26 21:07:15.508866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.698 qpair failed and we were unable to recover it. 00:26:24.698 [2024-11-26 21:07:15.509000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.698 [2024-11-26 21:07:15.509027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.698 qpair failed and we were unable to recover it. 
00:26:24.698 [2024-11-26 21:07:15.509166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.698 [2024-11-26 21:07:15.509209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.698 qpair failed and we were unable to recover it. 00:26:24.698 [2024-11-26 21:07:15.509501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.698 [2024-11-26 21:07:15.509550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.698 qpair failed and we were unable to recover it. 00:26:24.698 [2024-11-26 21:07:15.509719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.698 [2024-11-26 21:07:15.509749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.698 qpair failed and we were unable to recover it. 00:26:24.698 [2024-11-26 21:07:15.509892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.698 [2024-11-26 21:07:15.509923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.698 qpair failed and we were unable to recover it. 00:26:24.698 [2024-11-26 21:07:15.510105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.699 [2024-11-26 21:07:15.510140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.699 qpair failed and we were unable to recover it. 
00:26:24.699 [2024-11-26 21:07:15.510308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.699 [2024-11-26 21:07:15.510360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.699 qpair failed and we were unable to recover it. 00:26:24.699 [2024-11-26 21:07:15.510505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.699 [2024-11-26 21:07:15.510536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.699 qpair failed and we were unable to recover it. 00:26:24.699 [2024-11-26 21:07:15.510662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.699 [2024-11-26 21:07:15.510702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.699 qpair failed and we were unable to recover it. 00:26:24.699 [2024-11-26 21:07:15.510845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.699 [2024-11-26 21:07:15.510872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.699 qpair failed and we were unable to recover it. 00:26:24.699 [2024-11-26 21:07:15.511015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.699 [2024-11-26 21:07:15.511044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.699 qpair failed and we were unable to recover it. 
00:26:24.699 [2024-11-26 21:07:15.511219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.699 [2024-11-26 21:07:15.511278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.699 qpair failed and we were unable to recover it. 00:26:24.699 [2024-11-26 21:07:15.511433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.699 [2024-11-26 21:07:15.511464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.699 qpair failed and we were unable to recover it. 00:26:24.699 [2024-11-26 21:07:15.511602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.699 [2024-11-26 21:07:15.511631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.699 qpair failed and we were unable to recover it. 00:26:24.699 [2024-11-26 21:07:15.511793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.699 [2024-11-26 21:07:15.511821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.699 qpair failed and we were unable to recover it. 00:26:24.699 [2024-11-26 21:07:15.511956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.699 [2024-11-26 21:07:15.512001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.699 qpair failed and we were unable to recover it. 
00:26:24.699 [2024-11-26 21:07:15.512122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.699 [2024-11-26 21:07:15.512150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.699 qpair failed and we were unable to recover it. 00:26:24.699 [2024-11-26 21:07:15.512284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.699 [2024-11-26 21:07:15.512312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.699 qpair failed and we were unable to recover it. 00:26:24.699 [2024-11-26 21:07:15.512502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.699 [2024-11-26 21:07:15.512537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.699 qpair failed and we were unable to recover it. 00:26:24.699 [2024-11-26 21:07:15.512765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.699 [2024-11-26 21:07:15.512794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.699 qpair failed and we were unable to recover it. 00:26:24.699 [2024-11-26 21:07:15.512907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.699 [2024-11-26 21:07:15.512936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.699 qpair failed and we were unable to recover it. 
00:26:24.699 [2024-11-26 21:07:15.513063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.699 [2024-11-26 21:07:15.513093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.699 qpair failed and we were unable to recover it. 00:26:24.699 [2024-11-26 21:07:15.513251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.699 [2024-11-26 21:07:15.513279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.699 qpair failed and we were unable to recover it. 00:26:24.699 [2024-11-26 21:07:15.513458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.699 [2024-11-26 21:07:15.513489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.699 qpair failed and we were unable to recover it. 00:26:24.699 [2024-11-26 21:07:15.513600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.699 [2024-11-26 21:07:15.513642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.699 qpair failed and we were unable to recover it. 00:26:24.699 [2024-11-26 21:07:15.513845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.699 [2024-11-26 21:07:15.513872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.699 qpair failed and we were unable to recover it. 
00:26:24.699 [2024-11-26 21:07:15.513982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.699 [2024-11-26 21:07:15.514032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.699 qpair failed and we were unable to recover it. 00:26:24.699 [2024-11-26 21:07:15.514179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.699 [2024-11-26 21:07:15.514210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.699 qpair failed and we were unable to recover it. 00:26:24.699 [2024-11-26 21:07:15.514359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.699 [2024-11-26 21:07:15.514386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.699 qpair failed and we were unable to recover it. 00:26:24.699 [2024-11-26 21:07:15.514502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.699 [2024-11-26 21:07:15.514530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.699 qpair failed and we were unable to recover it. 00:26:24.699 [2024-11-26 21:07:15.514698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.699 [2024-11-26 21:07:15.514729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.699 qpair failed and we were unable to recover it. 
00:26:24.699 [2024-11-26 21:07:15.514864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.699 [2024-11-26 21:07:15.514891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.699 qpair failed and we were unable to recover it. 00:26:24.699 [2024-11-26 21:07:15.515025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.699 [2024-11-26 21:07:15.515075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.699 qpair failed and we were unable to recover it. 00:26:24.699 [2024-11-26 21:07:15.515248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.699 [2024-11-26 21:07:15.515278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.699 qpair failed and we were unable to recover it. 00:26:24.699 [2024-11-26 21:07:15.515396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.699 [2024-11-26 21:07:15.515432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.699 qpair failed and we were unable to recover it. 00:26:24.699 [2024-11-26 21:07:15.515558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.699 [2024-11-26 21:07:15.515604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.699 qpair failed and we were unable to recover it. 
00:26:24.699 [2024-11-26 21:07:15.515760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.699 [2024-11-26 21:07:15.515790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.699 qpair failed and we were unable to recover it. 00:26:24.699 [2024-11-26 21:07:15.515929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.699 [2024-11-26 21:07:15.515965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.699 qpair failed and we were unable to recover it. 00:26:24.699 [2024-11-26 21:07:15.516110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.699 [2024-11-26 21:07:15.516137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.699 qpair failed and we were unable to recover it. 00:26:24.699 [2024-11-26 21:07:15.516272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.699 [2024-11-26 21:07:15.516300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.699 qpair failed and we were unable to recover it. 00:26:24.700 [2024-11-26 21:07:15.516443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.700 [2024-11-26 21:07:15.516471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.700 qpair failed and we were unable to recover it. 
00:26:24.700 [2024-11-26 21:07:15.516636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.700 [2024-11-26 21:07:15.516663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.700 qpair failed and we were unable to recover it. 00:26:24.700 [2024-11-26 21:07:15.516806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.700 [2024-11-26 21:07:15.516833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.700 qpair failed and we were unable to recover it. 00:26:24.700 [2024-11-26 21:07:15.516966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.700 [2024-11-26 21:07:15.516994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.700 qpair failed and we were unable to recover it. 00:26:24.700 [2024-11-26 21:07:15.517157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.700 [2024-11-26 21:07:15.517185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.700 qpair failed and we were unable to recover it. 00:26:24.700 [2024-11-26 21:07:15.517319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.700 [2024-11-26 21:07:15.517347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.700 qpair failed and we were unable to recover it. 
00:26:24.700 [2024-11-26 21:07:15.517480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.700 [2024-11-26 21:07:15.517508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.700 qpair failed and we were unable to recover it. 00:26:24.700 [2024-11-26 21:07:15.517658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.700 [2024-11-26 21:07:15.517711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.700 qpair failed and we were unable to recover it. 00:26:24.700 [2024-11-26 21:07:15.517893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.700 [2024-11-26 21:07:15.517921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.700 qpair failed and we were unable to recover it. 00:26:24.700 [2024-11-26 21:07:15.518023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.700 [2024-11-26 21:07:15.518052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.700 qpair failed and we were unable to recover it. 00:26:24.700 [2024-11-26 21:07:15.518186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.700 [2024-11-26 21:07:15.518213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.700 qpair failed and we were unable to recover it. 
00:26:24.700 [2024-11-26 21:07:15.518478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.700 [2024-11-26 21:07:15.518530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.700 qpair failed and we were unable to recover it. 00:26:24.700 [2024-11-26 21:07:15.518768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.700 [2024-11-26 21:07:15.518795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.700 qpair failed and we were unable to recover it. 00:26:24.700 [2024-11-26 21:07:15.518935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.700 [2024-11-26 21:07:15.518973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.700 qpair failed and we were unable to recover it. 00:26:24.700 [2024-11-26 21:07:15.519144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.700 [2024-11-26 21:07:15.519175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.700 qpair failed and we were unable to recover it. 00:26:24.700 [2024-11-26 21:07:15.519330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.700 [2024-11-26 21:07:15.519360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.700 qpair failed and we were unable to recover it. 
00:26:24.700 [2024-11-26 21:07:15.519509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.700 [2024-11-26 21:07:15.519538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.700 qpair failed and we were unable to recover it. 00:26:24.700 [2024-11-26 21:07:15.519737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.700 [2024-11-26 21:07:15.519778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.700 qpair failed and we were unable to recover it. 00:26:24.700 [2024-11-26 21:07:15.519914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.700 [2024-11-26 21:07:15.519939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.700 qpair failed and we were unable to recover it. 00:26:24.700 [2024-11-26 21:07:15.520052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.700 [2024-11-26 21:07:15.520094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.700 qpair failed and we were unable to recover it. 00:26:24.700 [2024-11-26 21:07:15.520269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.700 [2024-11-26 21:07:15.520299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.700 qpair failed and we were unable to recover it. 
00:26:24.700 [2024-11-26 21:07:15.520431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.700 [2024-11-26 21:07:15.520476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.700 qpair failed and we were unable to recover it. 00:26:24.700 [2024-11-26 21:07:15.520619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.700 [2024-11-26 21:07:15.520649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.700 qpair failed and we were unable to recover it. 00:26:24.700 [2024-11-26 21:07:15.520835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.700 [2024-11-26 21:07:15.520875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.700 qpair failed and we were unable to recover it. 00:26:24.700 [2024-11-26 21:07:15.521020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.700 [2024-11-26 21:07:15.521049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.700 qpair failed and we were unable to recover it. 00:26:24.700 [2024-11-26 21:07:15.521155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.700 [2024-11-26 21:07:15.521199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.700 qpair failed and we were unable to recover it. 
00:26:24.700 [2024-11-26 21:07:15.521394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.700 [2024-11-26 21:07:15.521452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.700 qpair failed and we were unable to recover it. 00:26:24.700 [2024-11-26 21:07:15.521621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.700 [2024-11-26 21:07:15.521649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.700 qpair failed and we were unable to recover it. 00:26:24.700 [2024-11-26 21:07:15.521810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.700 [2024-11-26 21:07:15.521851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.700 qpair failed and we were unable to recover it. 00:26:24.700 [2024-11-26 21:07:15.521968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.700 [2024-11-26 21:07:15.521997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.700 qpair failed and we were unable to recover it. 00:26:24.700 [2024-11-26 21:07:15.522127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.700 [2024-11-26 21:07:15.522154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.700 qpair failed and we were unable to recover it. 
00:26:24.700 [2024-11-26 21:07:15.522282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.700 [2024-11-26 21:07:15.522309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.700 qpair failed and we were unable to recover it. 00:26:24.700 [2024-11-26 21:07:15.522505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.700 [2024-11-26 21:07:15.522533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.700 qpair failed and we were unable to recover it. 00:26:24.700 [2024-11-26 21:07:15.522647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.700 [2024-11-26 21:07:15.522674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.700 qpair failed and we were unable to recover it. 00:26:24.700 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 4086109 Killed "${NVMF_APP[@]}" "$@" 00:26:24.700 [2024-11-26 21:07:15.522829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.701 [2024-11-26 21:07:15.522856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.701 qpair failed and we were unable to recover it. 00:26:24.701 [2024-11-26 21:07:15.523041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.701 [2024-11-26 21:07:15.523070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.701 qpair failed and we were unable to recover it. 
00:26:24.701 21:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:26:24.701 [2024-11-26 21:07:15.523196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.701 [2024-11-26 21:07:15.523223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.701 qpair failed and we were unable to recover it. 00:26:24.701 21:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:26:24.701 [2024-11-26 21:07:15.523384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.701 [2024-11-26 21:07:15.523428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.701 qpair failed and we were unable to recover it. 00:26:24.701 21:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:24.701 [2024-11-26 21:07:15.523565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.701 [2024-11-26 21:07:15.523595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.701 qpair failed and we were unable to recover it. 00:26:24.701 21:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:24.701 [2024-11-26 21:07:15.523746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.701 [2024-11-26 21:07:15.523775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.701 qpair failed and we were unable to recover it. 
00:26:24.701 21:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:24.701 [2024-11-26 21:07:15.523883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.701 [2024-11-26 21:07:15.523909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.701 qpair failed and we were unable to recover it. 00:26:24.701 [2024-11-26 21:07:15.524079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.701 [2024-11-26 21:07:15.524108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.701 qpair failed and we were unable to recover it. 00:26:24.701 [2024-11-26 21:07:15.524237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.701 [2024-11-26 21:07:15.524264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.701 qpair failed and we were unable to recover it. 00:26:24.701 [2024-11-26 21:07:15.524396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.701 [2024-11-26 21:07:15.524423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.701 qpair failed and we were unable to recover it. 00:26:24.701 [2024-11-26 21:07:15.524549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.701 [2024-11-26 21:07:15.524576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.701 qpair failed and we were unable to recover it. 
00:26:24.701 [2024-11-26 21:07:15.524692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.701 [2024-11-26 21:07:15.524720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.701 qpair failed and we were unable to recover it. 00:26:24.701 [2024-11-26 21:07:15.524837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.701 [2024-11-26 21:07:15.524864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.701 qpair failed and we were unable to recover it. 00:26:24.701 [2024-11-26 21:07:15.524998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.701 [2024-11-26 21:07:15.525026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.701 qpair failed and we were unable to recover it. 00:26:24.701 [2024-11-26 21:07:15.525139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.701 [2024-11-26 21:07:15.525167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.701 qpair failed and we were unable to recover it. 00:26:24.701 [2024-11-26 21:07:15.525333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.701 [2024-11-26 21:07:15.525359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.701 qpair failed and we were unable to recover it. 
00:26:24.701 [2024-11-26 21:07:15.525575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.701 [2024-11-26 21:07:15.525602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.701 qpair failed and we were unable to recover it. 00:26:24.701 [2024-11-26 21:07:15.525748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.701 [2024-11-26 21:07:15.525781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.701 qpair failed and we were unable to recover it. 00:26:24.701 [2024-11-26 21:07:15.525918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.701 [2024-11-26 21:07:15.525946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.701 qpair failed and we were unable to recover it. 00:26:24.701 [2024-11-26 21:07:15.526110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.701 [2024-11-26 21:07:15.526137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.701 qpair failed and we were unable to recover it. 00:26:24.701 [2024-11-26 21:07:15.526269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.701 [2024-11-26 21:07:15.526296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.701 qpair failed and we were unable to recover it. 
00:26:24.701 [2024-11-26 21:07:15.526405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.701 [2024-11-26 21:07:15.526449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.701 qpair failed and we were unable to recover it. 00:26:24.701 [2024-11-26 21:07:15.526609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.701 [2024-11-26 21:07:15.526636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.701 qpair failed and we were unable to recover it. 00:26:24.701 [2024-11-26 21:07:15.526771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.701 [2024-11-26 21:07:15.526798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.701 qpair failed and we were unable to recover it. 00:26:24.701 [2024-11-26 21:07:15.526959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.701 [2024-11-26 21:07:15.527003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.701 qpair failed and we were unable to recover it. 00:26:24.701 [2024-11-26 21:07:15.527181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.701 [2024-11-26 21:07:15.527209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.701 qpair failed and we were unable to recover it. 
00:26:24.701 [2024-11-26 21:07:15.527340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.701 [2024-11-26 21:07:15.527367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.701 qpair failed and we were unable to recover it. 00:26:24.701 [2024-11-26 21:07:15.527480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.701 [2024-11-26 21:07:15.527507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.701 qpair failed and we were unable to recover it. 00:26:24.701 [2024-11-26 21:07:15.527700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.701 [2024-11-26 21:07:15.527738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.701 qpair failed and we were unable to recover it. 00:26:24.701 [2024-11-26 21:07:15.527926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.701 [2024-11-26 21:07:15.527953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.701 qpair failed and we were unable to recover it. 00:26:24.702 [2024-11-26 21:07:15.528103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.702 [2024-11-26 21:07:15.528130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.702 qpair failed and we were unable to recover it. 
00:26:24.702 [2024-11-26 21:07:15.528264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.702 [2024-11-26 21:07:15.528291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.702 qpair failed and we were unable to recover it. 00:26:24.702 [2024-11-26 21:07:15.528461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.702 [2024-11-26 21:07:15.528487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.702 qpair failed and we were unable to recover it. 00:26:24.702 [2024-11-26 21:07:15.528593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.702 [2024-11-26 21:07:15.528621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.702 qpair failed and we were unable to recover it. 00:26:24.702 21:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:26:24.702 21:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=4086670 00:26:24.702 [2024-11-26 21:07:15.528805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.702 [2024-11-26 21:07:15.528834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.702 qpair failed and we were unable to recover it. 
00:26:24.702 21:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 4086670 00:26:24.702 [2024-11-26 21:07:15.528955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.702 [2024-11-26 21:07:15.528983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.702 qpair failed and we were unable to recover it. 00:26:24.702 [2024-11-26 21:07:15.529122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.702 [2024-11-26 21:07:15.529166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.702 qpair failed and we were unable to recover it. 00:26:24.702 21:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 4086670 ']' 00:26:24.702 21:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:24.702 [2024-11-26 21:07:15.529346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.702 [2024-11-26 21:07:15.529377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.702 qpair failed and we were unable to recover it. 00:26:24.702 21:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:24.702 [2024-11-26 21:07:15.529508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.702 [2024-11-26 21:07:15.529536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.702 qpair failed and we were unable to recover it. 
00:26:24.702 21:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:24.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:24.702 [2024-11-26 21:07:15.529671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.702 [2024-11-26 21:07:15.529707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.702 qpair failed and we were unable to recover it. 00:26:24.702 21:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:24.702 [2024-11-26 21:07:15.529819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.702 [2024-11-26 21:07:15.529851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.702 qpair failed and we were unable to recover it. 00:26:24.702 21:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:24.702 [2024-11-26 21:07:15.529989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.702 [2024-11-26 21:07:15.530016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.702 qpair failed and we were unable to recover it. 00:26:24.702 [2024-11-26 21:07:15.530119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.702 [2024-11-26 21:07:15.530146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.702 qpair failed and we were unable to recover it. 
00:26:24.702 [2024-11-26 21:07:15.530321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.702 [2024-11-26 21:07:15.530351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.702 qpair failed and we were unable to recover it. 00:26:24.702 [2024-11-26 21:07:15.530492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.702 [2024-11-26 21:07:15.530518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.702 qpair failed and we were unable to recover it. 00:26:24.702 [2024-11-26 21:07:15.530681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.702 [2024-11-26 21:07:15.530754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.702 qpair failed and we were unable to recover it. 00:26:24.702 [2024-11-26 21:07:15.530912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.702 [2024-11-26 21:07:15.530938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.702 qpair failed and we were unable to recover it. 00:26:24.702 [2024-11-26 21:07:15.531106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.702 [2024-11-26 21:07:15.531132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.702 qpair failed and we were unable to recover it. 
00:26:24.702 [2024-11-26 21:07:15.531281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.702 [2024-11-26 21:07:15.531314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.702 qpair failed and we were unable to recover it. 00:26:24.702 [2024-11-26 21:07:15.531446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.702 [2024-11-26 21:07:15.531476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.702 qpair failed and we were unable to recover it. 00:26:24.702 [2024-11-26 21:07:15.531609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.702 [2024-11-26 21:07:15.531636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.702 qpair failed and we were unable to recover it. 00:26:24.702 [2024-11-26 21:07:15.531772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.702 [2024-11-26 21:07:15.531799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.702 qpair failed and we were unable to recover it. 00:26:24.702 [2024-11-26 21:07:15.531898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.702 [2024-11-26 21:07:15.531925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.702 qpair failed and we were unable to recover it. 
00:26:24.702 [2024-11-26 21:07:15.532026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.702 [2024-11-26 21:07:15.532060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.702 qpair failed and we were unable to recover it. 00:26:24.702 [2024-11-26 21:07:15.532175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.702 [2024-11-26 21:07:15.532202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.702 qpair failed and we were unable to recover it. 00:26:24.702 [2024-11-26 21:07:15.532365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.702 [2024-11-26 21:07:15.532394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.702 qpair failed and we were unable to recover it. 00:26:24.702 [2024-11-26 21:07:15.532554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.702 [2024-11-26 21:07:15.532581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.702 qpair failed and we were unable to recover it. 00:26:24.702 [2024-11-26 21:07:15.532710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.702 [2024-11-26 21:07:15.532740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.702 qpair failed and we were unable to recover it. 
00:26:24.702 [2024-11-26 21:07:15.532850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.702 [2024-11-26 21:07:15.532880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.702 qpair failed and we were unable to recover it. 00:26:24.702 [2024-11-26 21:07:15.533005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.702 [2024-11-26 21:07:15.533032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.702 qpair failed and we were unable to recover it. 00:26:24.702 [2024-11-26 21:07:15.533191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.702 [2024-11-26 21:07:15.533234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.702 qpair failed and we were unable to recover it. 00:26:24.702 [2024-11-26 21:07:15.533378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.702 [2024-11-26 21:07:15.533408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.702 qpair failed and we were unable to recover it. 00:26:24.702 [2024-11-26 21:07:15.533568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.702 [2024-11-26 21:07:15.533595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.702 qpair failed and we were unable to recover it. 
00:26:24.702 [2024-11-26 21:07:15.533730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.702 [2024-11-26 21:07:15.533775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.702 qpair failed and we were unable to recover it. 00:26:24.702 [2024-11-26 21:07:15.533896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.702 [2024-11-26 21:07:15.533926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.702 qpair failed and we were unable to recover it. 00:26:24.702 [2024-11-26 21:07:15.534057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.702 [2024-11-26 21:07:15.534084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.702 qpair failed and we were unable to recover it. 00:26:24.702 [2024-11-26 21:07:15.534210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.702 [2024-11-26 21:07:15.534236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.702 qpair failed and we were unable to recover it. 00:26:24.703 [2024-11-26 21:07:15.534356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.703 [2024-11-26 21:07:15.534396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.703 qpair failed and we were unable to recover it. 
00:26:24.703 [2024-11-26 21:07:15.534565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.703 [2024-11-26 21:07:15.534594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.703 qpair failed and we were unable to recover it. 00:26:24.703 [2024-11-26 21:07:15.534730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.703 [2024-11-26 21:07:15.534759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.703 qpair failed and we were unable to recover it. 00:26:24.703 [2024-11-26 21:07:15.534862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.703 [2024-11-26 21:07:15.534890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.703 qpair failed and we were unable to recover it. 00:26:24.703 [2024-11-26 21:07:15.535036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.703 [2024-11-26 21:07:15.535063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.703 qpair failed and we were unable to recover it. 00:26:24.703 [2024-11-26 21:07:15.535171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.703 [2024-11-26 21:07:15.535199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.703 qpair failed and we were unable to recover it. 
00:26:24.703 [2024-11-26 21:07:15.535308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.703 [2024-11-26 21:07:15.535336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.703 qpair failed and we were unable to recover it. 00:26:24.703 [2024-11-26 21:07:15.535474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.703 [2024-11-26 21:07:15.535501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.703 qpair failed and we were unable to recover it. 00:26:24.703 [2024-11-26 21:07:15.535648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.703 [2024-11-26 21:07:15.535678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.703 qpair failed and we were unable to recover it. 00:26:24.703 [2024-11-26 21:07:15.535811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.703 [2024-11-26 21:07:15.535841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.703 qpair failed and we were unable to recover it. 00:26:24.703 [2024-11-26 21:07:15.535999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.703 [2024-11-26 21:07:15.536025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.703 qpair failed and we were unable to recover it. 
00:26:24.703 [2024-11-26 21:07:15.536187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.703 [2024-11-26 21:07:15.536229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.703 qpair failed and we were unable to recover it. 00:26:24.703 [2024-11-26 21:07:15.536355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.703 [2024-11-26 21:07:15.536385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.703 qpair failed and we were unable to recover it. 00:26:24.703 [2024-11-26 21:07:15.536544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.703 [2024-11-26 21:07:15.536572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.703 qpair failed and we were unable to recover it. 00:26:24.703 [2024-11-26 21:07:15.536715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.703 [2024-11-26 21:07:15.536761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.703 qpair failed and we were unable to recover it. 00:26:24.703 [2024-11-26 21:07:15.536904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.703 [2024-11-26 21:07:15.536934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.703 qpair failed and we were unable to recover it. 
00:26:24.703 [2024-11-26 21:07:15.537061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.703 [2024-11-26 21:07:15.537087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.703 qpair failed and we were unable to recover it. 00:26:24.703 [2024-11-26 21:07:15.537219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.703 [2024-11-26 21:07:15.537246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.703 qpair failed and we were unable to recover it. 00:26:24.703 [2024-11-26 21:07:15.537384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.703 [2024-11-26 21:07:15.537413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.703 qpair failed and we were unable to recover it. 00:26:24.703 [2024-11-26 21:07:15.537569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.703 [2024-11-26 21:07:15.537595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.703 qpair failed and we were unable to recover it. 00:26:24.703 [2024-11-26 21:07:15.537741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.703 [2024-11-26 21:07:15.537768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.703 qpair failed and we were unable to recover it. 
00:26:24.703 [2024-11-26 21:07:15.537928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.703 [2024-11-26 21:07:15.537955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.703 qpair failed and we were unable to recover it. 00:26:24.703 [2024-11-26 21:07:15.538065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.703 [2024-11-26 21:07:15.538091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.703 qpair failed and we were unable to recover it. 00:26:24.703 [2024-11-26 21:07:15.538197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.703 [2024-11-26 21:07:15.538224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.703 qpair failed and we were unable to recover it. 00:26:24.703 [2024-11-26 21:07:15.538371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.703 [2024-11-26 21:07:15.538401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.703 qpair failed and we were unable to recover it. 00:26:24.703 [2024-11-26 21:07:15.538530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.703 [2024-11-26 21:07:15.538556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.703 qpair failed and we were unable to recover it. 
00:26:24.703 [2024-11-26 21:07:15.538696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.703 [2024-11-26 21:07:15.538724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.703 qpair failed and we were unable to recover it. 00:26:24.703 [2024-11-26 21:07:15.538932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.703 [2024-11-26 21:07:15.538976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.703 qpair failed and we were unable to recover it. 00:26:24.703 [2024-11-26 21:07:15.539137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.704 [2024-11-26 21:07:15.539166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.704 qpair failed and we were unable to recover it. 00:26:24.704 [2024-11-26 21:07:15.539289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.704 [2024-11-26 21:07:15.539317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.704 qpair failed and we were unable to recover it. 00:26:24.704 [2024-11-26 21:07:15.539423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.704 [2024-11-26 21:07:15.539450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.704 qpair failed and we were unable to recover it. 
00:26:24.704 [2024-11-26 21:07:15.539562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.704 [2024-11-26 21:07:15.539590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.704 qpair failed and we were unable to recover it. 00:26:24.704 [2024-11-26 21:07:15.539707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.704 [2024-11-26 21:07:15.539736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.704 qpair failed and we were unable to recover it. 00:26:24.704 [2024-11-26 21:07:15.539896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.704 [2024-11-26 21:07:15.539942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.704 qpair failed and we were unable to recover it. 00:26:24.704 [2024-11-26 21:07:15.540125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.704 [2024-11-26 21:07:15.540151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.704 qpair failed and we were unable to recover it. 00:26:24.704 [2024-11-26 21:07:15.540306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.704 [2024-11-26 21:07:15.540335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.704 qpair failed and we were unable to recover it. 
00:26:24.704 [2024-11-26 21:07:15.540457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.704 [2024-11-26 21:07:15.540488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.704 qpair failed and we were unable to recover it.
00:26:24.704-00:26:24.707 [... the same three-line failure (posix.c:1054 connect() failed, errno = 111 → nvme_tcp.c:2288 sock connection error → "qpair failed and we were unable to recover it.") repeats continuously from 21:07:15.540 through 21:07:15.559, alternating between tqpair=0x637fa0 and tqpair=0x7feeec000b90, always with addr=10.0.0.2, port=4420 ...]
00:26:24.707 [2024-11-26 21:07:15.559498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.707 [2024-11-26 21:07:15.559525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.707 qpair failed and we were unable to recover it. 00:26:24.707 [2024-11-26 21:07:15.559638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.707 [2024-11-26 21:07:15.559666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.707 qpair failed and we were unable to recover it. 00:26:24.707 [2024-11-26 21:07:15.559830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.707 [2024-11-26 21:07:15.559860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.707 qpair failed and we were unable to recover it. 00:26:24.707 [2024-11-26 21:07:15.559997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.707 [2024-11-26 21:07:15.560025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.707 qpair failed and we were unable to recover it. 00:26:24.707 [2024-11-26 21:07:15.560166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.707 [2024-11-26 21:07:15.560193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.707 qpair failed and we were unable to recover it. 
00:26:24.707 [2024-11-26 21:07:15.560328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.707 [2024-11-26 21:07:15.560355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.707 qpair failed and we were unable to recover it. 00:26:24.707 [2024-11-26 21:07:15.560493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.707 [2024-11-26 21:07:15.560520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.707 qpair failed and we were unable to recover it. 00:26:24.707 [2024-11-26 21:07:15.560634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.707 [2024-11-26 21:07:15.560661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.707 qpair failed and we were unable to recover it. 00:26:24.707 [2024-11-26 21:07:15.560802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.707 [2024-11-26 21:07:15.560831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.707 qpair failed and we were unable to recover it. 00:26:24.707 [2024-11-26 21:07:15.560934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.707 [2024-11-26 21:07:15.560962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.707 qpair failed and we were unable to recover it. 
00:26:24.707 [2024-11-26 21:07:15.561100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.707 [2024-11-26 21:07:15.561126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.707 qpair failed and we were unable to recover it. 00:26:24.707 [2024-11-26 21:07:15.561234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.707 [2024-11-26 21:07:15.561262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.707 qpair failed and we were unable to recover it. 00:26:24.707 [2024-11-26 21:07:15.561400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.707 [2024-11-26 21:07:15.561427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.707 qpair failed and we were unable to recover it. 00:26:24.707 [2024-11-26 21:07:15.561586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.707 [2024-11-26 21:07:15.561614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.707 qpair failed and we were unable to recover it. 00:26:24.707 [2024-11-26 21:07:15.561721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.707 [2024-11-26 21:07:15.561750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.707 qpair failed and we were unable to recover it. 
00:26:24.707 [2024-11-26 21:07:15.561886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.707 [2024-11-26 21:07:15.561914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.707 qpair failed and we were unable to recover it. 00:26:24.707 [2024-11-26 21:07:15.562047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.707 [2024-11-26 21:07:15.562075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.708 qpair failed and we were unable to recover it. 00:26:24.708 [2024-11-26 21:07:15.562210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.708 [2024-11-26 21:07:15.562238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.708 qpair failed and we were unable to recover it. 00:26:24.708 [2024-11-26 21:07:15.562376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.708 [2024-11-26 21:07:15.562403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.708 qpair failed and we were unable to recover it. 00:26:24.708 [2024-11-26 21:07:15.562514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.708 [2024-11-26 21:07:15.562542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.708 qpair failed and we were unable to recover it. 
00:26:24.708 [2024-11-26 21:07:15.562699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.708 [2024-11-26 21:07:15.562740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.708 qpair failed and we were unable to recover it. 00:26:24.708 [2024-11-26 21:07:15.562882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.708 [2024-11-26 21:07:15.562911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.708 qpair failed and we were unable to recover it. 00:26:24.708 [2024-11-26 21:07:15.563021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.708 [2024-11-26 21:07:15.563048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.708 qpair failed and we were unable to recover it. 00:26:24.708 [2024-11-26 21:07:15.563210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.708 [2024-11-26 21:07:15.563236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.708 qpair failed and we were unable to recover it. 00:26:24.708 [2024-11-26 21:07:15.563374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.708 [2024-11-26 21:07:15.563401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.708 qpair failed and we were unable to recover it. 
00:26:24.708 [2024-11-26 21:07:15.563555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.708 [2024-11-26 21:07:15.563582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.708 qpair failed and we were unable to recover it. 00:26:24.708 [2024-11-26 21:07:15.563707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.708 [2024-11-26 21:07:15.563735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.708 qpair failed and we were unable to recover it. 00:26:24.708 [2024-11-26 21:07:15.563842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.708 [2024-11-26 21:07:15.563868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.708 qpair failed and we were unable to recover it. 00:26:24.708 [2024-11-26 21:07:15.564029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.708 [2024-11-26 21:07:15.564056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.708 qpair failed and we were unable to recover it. 00:26:24.708 [2024-11-26 21:07:15.564187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.708 [2024-11-26 21:07:15.564213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.708 qpair failed and we were unable to recover it. 
00:26:24.708 [2024-11-26 21:07:15.564375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.708 [2024-11-26 21:07:15.564401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.708 qpair failed and we were unable to recover it. 00:26:24.708 [2024-11-26 21:07:15.564581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.708 [2024-11-26 21:07:15.564622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.708 qpair failed and we were unable to recover it. 00:26:24.708 [2024-11-26 21:07:15.564768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.708 [2024-11-26 21:07:15.564798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.708 qpair failed and we were unable to recover it. 00:26:24.708 [2024-11-26 21:07:15.564948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.708 [2024-11-26 21:07:15.564976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.708 qpair failed and we were unable to recover it. 00:26:24.708 [2024-11-26 21:07:15.565108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.708 [2024-11-26 21:07:15.565136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.708 qpair failed and we were unable to recover it. 
00:26:24.708 [2024-11-26 21:07:15.565297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.708 [2024-11-26 21:07:15.565325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.708 qpair failed and we were unable to recover it. 00:26:24.708 [2024-11-26 21:07:15.565465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.708 [2024-11-26 21:07:15.565493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.708 qpair failed and we were unable to recover it. 00:26:24.708 [2024-11-26 21:07:15.565623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.708 [2024-11-26 21:07:15.565651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.708 qpair failed and we were unable to recover it. 00:26:24.708 [2024-11-26 21:07:15.565794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.708 [2024-11-26 21:07:15.565833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.708 qpair failed and we were unable to recover it. 00:26:24.708 [2024-11-26 21:07:15.565958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.708 [2024-11-26 21:07:15.565988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.708 qpair failed and we were unable to recover it. 
00:26:24.708 [2024-11-26 21:07:15.566100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.708 [2024-11-26 21:07:15.566131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.708 qpair failed and we were unable to recover it. 00:26:24.708 [2024-11-26 21:07:15.566274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.708 [2024-11-26 21:07:15.566302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.708 qpair failed and we were unable to recover it. 00:26:24.708 [2024-11-26 21:07:15.566415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.708 [2024-11-26 21:07:15.566442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.708 qpair failed and we were unable to recover it. 00:26:24.708 [2024-11-26 21:07:15.566591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.708 [2024-11-26 21:07:15.566631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.708 qpair failed and we were unable to recover it. 00:26:24.708 [2024-11-26 21:07:15.566757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.708 [2024-11-26 21:07:15.566787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.708 qpair failed and we were unable to recover it. 
00:26:24.708 [2024-11-26 21:07:15.566948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.708 [2024-11-26 21:07:15.566975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.708 qpair failed and we were unable to recover it. 00:26:24.708 [2024-11-26 21:07:15.567080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.708 [2024-11-26 21:07:15.567107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.708 qpair failed and we were unable to recover it. 00:26:24.708 [2024-11-26 21:07:15.567249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.708 [2024-11-26 21:07:15.567276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.708 qpair failed and we were unable to recover it. 00:26:24.708 [2024-11-26 21:07:15.567384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.708 [2024-11-26 21:07:15.567411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.709 qpair failed and we were unable to recover it. 00:26:24.709 [2024-11-26 21:07:15.567518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.709 [2024-11-26 21:07:15.567547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.709 qpair failed and we were unable to recover it. 
00:26:24.709 [2024-11-26 21:07:15.567694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.709 [2024-11-26 21:07:15.567721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.709 qpair failed and we were unable to recover it. 00:26:24.709 [2024-11-26 21:07:15.567831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.709 [2024-11-26 21:07:15.567868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.709 qpair failed and we were unable to recover it. 00:26:24.709 [2024-11-26 21:07:15.568014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.709 [2024-11-26 21:07:15.568042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.709 qpair failed and we were unable to recover it. 00:26:24.709 [2024-11-26 21:07:15.568179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.709 [2024-11-26 21:07:15.568205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.709 qpair failed and we were unable to recover it. 00:26:24.709 [2024-11-26 21:07:15.568373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.709 [2024-11-26 21:07:15.568401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.709 qpair failed and we were unable to recover it. 
00:26:24.709 [2024-11-26 21:07:15.568504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.709 [2024-11-26 21:07:15.568532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.709 qpair failed and we were unable to recover it. 00:26:24.709 [2024-11-26 21:07:15.568680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.709 [2024-11-26 21:07:15.568720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.709 qpair failed and we were unable to recover it. 00:26:24.709 [2024-11-26 21:07:15.568843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.709 [2024-11-26 21:07:15.568870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.709 qpair failed and we were unable to recover it. 00:26:24.709 [2024-11-26 21:07:15.569004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.709 [2024-11-26 21:07:15.569031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.709 qpair failed and we were unable to recover it. 00:26:24.709 [2024-11-26 21:07:15.569165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.709 [2024-11-26 21:07:15.569194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.709 qpair failed and we were unable to recover it. 
00:26:24.709 [2024-11-26 21:07:15.569331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.709 [2024-11-26 21:07:15.569359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.709 qpair failed and we were unable to recover it. 00:26:24.982 [2024-11-26 21:07:15.569493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.982 [2024-11-26 21:07:15.569521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.982 qpair failed and we were unable to recover it. 00:26:24.982 [2024-11-26 21:07:15.569634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.982 [2024-11-26 21:07:15.569660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.982 qpair failed and we were unable to recover it. 00:26:24.982 [2024-11-26 21:07:15.569776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.982 [2024-11-26 21:07:15.569804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.982 qpair failed and we were unable to recover it. 00:26:24.982 [2024-11-26 21:07:15.569910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.982 [2024-11-26 21:07:15.569937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.982 qpair failed and we were unable to recover it. 
00:26:24.982 [2024-11-26 21:07:15.570046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.982 [2024-11-26 21:07:15.570073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.982 qpair failed and we were unable to recover it. 00:26:24.982 [2024-11-26 21:07:15.570204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.982 [2024-11-26 21:07:15.570231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.983 qpair failed and we were unable to recover it. 00:26:24.983 [2024-11-26 21:07:15.570337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.983 [2024-11-26 21:07:15.570363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.983 qpair failed and we were unable to recover it. 00:26:24.983 [2024-11-26 21:07:15.570477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.983 [2024-11-26 21:07:15.570518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.983 qpair failed and we were unable to recover it. 00:26:24.983 [2024-11-26 21:07:15.570635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.983 [2024-11-26 21:07:15.570664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.983 qpair failed and we were unable to recover it. 
00:26:24.983 [2024-11-26 21:07:15.570810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.983 [2024-11-26 21:07:15.570839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.983 qpair failed and we were unable to recover it. 00:26:24.983 [2024-11-26 21:07:15.570948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.983 [2024-11-26 21:07:15.570975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.983 qpair failed and we were unable to recover it. 00:26:24.983 [2024-11-26 21:07:15.571133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.983 [2024-11-26 21:07:15.571160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.983 qpair failed and we were unable to recover it. 00:26:24.983 [2024-11-26 21:07:15.571291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.983 [2024-11-26 21:07:15.571319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.983 qpair failed and we were unable to recover it. 00:26:24.983 [2024-11-26 21:07:15.571455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.983 [2024-11-26 21:07:15.571482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.983 qpair failed and we were unable to recover it. 
00:26:24.983 [2024-11-26 21:07:15.571596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.983 [2024-11-26 21:07:15.571623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.983 qpair failed and we were unable to recover it. 00:26:24.983 [2024-11-26 21:07:15.571727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.983 [2024-11-26 21:07:15.571755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.983 qpair failed and we were unable to recover it. 00:26:24.983 [2024-11-26 21:07:15.571889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.983 [2024-11-26 21:07:15.571916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.983 qpair failed and we were unable to recover it. 00:26:24.983 [2024-11-26 21:07:15.572062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.983 [2024-11-26 21:07:15.572104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.983 qpair failed and we were unable to recover it. 00:26:24.983 [2024-11-26 21:07:15.572268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.983 [2024-11-26 21:07:15.572297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.983 qpair failed and we were unable to recover it. 
00:26:24.983 [2024-11-26 21:07:15.572430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.983 [2024-11-26 21:07:15.572457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.983 qpair failed and we were unable to recover it. 00:26:24.983 [2024-11-26 21:07:15.572566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.983 [2024-11-26 21:07:15.572594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.983 qpair failed and we were unable to recover it. 00:26:24.983 [2024-11-26 21:07:15.572720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.983 [2024-11-26 21:07:15.572760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.983 qpair failed and we were unable to recover it. 00:26:24.983 [2024-11-26 21:07:15.572984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.983 [2024-11-26 21:07:15.573014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.983 qpair failed and we were unable to recover it. 00:26:24.983 [2024-11-26 21:07:15.573118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.983 [2024-11-26 21:07:15.573145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.983 qpair failed and we were unable to recover it. 
00:26:24.983 [2024-11-26 21:07:15.573288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.983 [2024-11-26 21:07:15.573315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.983 qpair failed and we were unable to recover it. 00:26:24.983 [2024-11-26 21:07:15.573454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.983 [2024-11-26 21:07:15.573483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.983 qpair failed and we were unable to recover it. 00:26:24.983 [2024-11-26 21:07:15.573614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.983 [2024-11-26 21:07:15.573642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.983 qpair failed and we were unable to recover it. 00:26:24.983 [2024-11-26 21:07:15.573786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.983 [2024-11-26 21:07:15.573814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.983 qpair failed and we were unable to recover it. 00:26:24.983 [2024-11-26 21:07:15.573948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.983 [2024-11-26 21:07:15.573975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.983 qpair failed and we were unable to recover it. 
00:26:24.983 [2024-11-26 21:07:15.574081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.983 [2024-11-26 21:07:15.574109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.983 qpair failed and we were unable to recover it. 00:26:24.983 [2024-11-26 21:07:15.574241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.983 [2024-11-26 21:07:15.574268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.983 qpair failed and we were unable to recover it. 00:26:24.983 [2024-11-26 21:07:15.574392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.983 [2024-11-26 21:07:15.574423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.983 qpair failed and we were unable to recover it. 00:26:24.983 [2024-11-26 21:07:15.574533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.983 [2024-11-26 21:07:15.574561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.983 qpair failed and we were unable to recover it. 00:26:24.983 [2024-11-26 21:07:15.574672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.983 [2024-11-26 21:07:15.574708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.983 qpair failed and we were unable to recover it. 
00:26:24.983 [2024-11-26 21:07:15.574823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.983 [2024-11-26 21:07:15.574850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.984 qpair failed and we were unable to recover it. 00:26:24.984 [2024-11-26 21:07:15.574980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.984 [2024-11-26 21:07:15.575008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.984 qpair failed and we were unable to recover it. 00:26:24.984 [2024-11-26 21:07:15.575142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.984 [2024-11-26 21:07:15.575169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.984 qpair failed and we were unable to recover it. 00:26:24.984 [2024-11-26 21:07:15.575272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.984 [2024-11-26 21:07:15.575299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.984 qpair failed and we were unable to recover it. 00:26:24.984 [2024-11-26 21:07:15.575436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.984 [2024-11-26 21:07:15.575463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.984 qpair failed and we were unable to recover it. 
00:26:24.984 [2024-11-26 21:07:15.575624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.984 [2024-11-26 21:07:15.575651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.984 qpair failed and we were unable to recover it. 00:26:24.984 [2024-11-26 21:07:15.575797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.984 [2024-11-26 21:07:15.575827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.984 qpair failed and we were unable to recover it. 00:26:24.984 [2024-11-26 21:07:15.575936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.984 [2024-11-26 21:07:15.575964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.984 qpair failed and we were unable to recover it. 00:26:24.984 [2024-11-26 21:07:15.576106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.984 [2024-11-26 21:07:15.576133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.984 qpair failed and we were unable to recover it. 00:26:24.984 [2024-11-26 21:07:15.576263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.984 [2024-11-26 21:07:15.576290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.984 qpair failed and we were unable to recover it. 
00:26:24.984 [2024-11-26 21:07:15.576435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.984 [2024-11-26 21:07:15.576475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.984 qpair failed and we were unable to recover it. 00:26:24.984 [2024-11-26 21:07:15.576584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.984 [2024-11-26 21:07:15.576613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.984 qpair failed and we were unable to recover it. 00:26:24.984 [2024-11-26 21:07:15.576758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.984 [2024-11-26 21:07:15.576788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.984 qpair failed and we were unable to recover it. 00:26:24.984 [2024-11-26 21:07:15.576923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.984 [2024-11-26 21:07:15.576951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.984 qpair failed and we were unable to recover it. 00:26:24.984 [2024-11-26 21:07:15.577056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.984 [2024-11-26 21:07:15.577083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.984 qpair failed and we were unable to recover it. 
00:26:24.984 [2024-11-26 21:07:15.577223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.984 [2024-11-26 21:07:15.577250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.984 qpair failed and we were unable to recover it. 00:26:24.984 [2024-11-26 21:07:15.577392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.984 [2024-11-26 21:07:15.577419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.984 qpair failed and we were unable to recover it. 00:26:24.984 [2024-11-26 21:07:15.577551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.984 [2024-11-26 21:07:15.577579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.984 qpair failed and we were unable to recover it. 00:26:24.984 [2024-11-26 21:07:15.577709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.984 [2024-11-26 21:07:15.577750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.984 qpair failed and we were unable to recover it. 00:26:24.984 [2024-11-26 21:07:15.577888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.984 [2024-11-26 21:07:15.577916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.984 qpair failed and we were unable to recover it. 
00:26:24.984 [2024-11-26 21:07:15.578022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.984 [2024-11-26 21:07:15.578049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.984 qpair failed and we were unable to recover it. 00:26:24.984 [2024-11-26 21:07:15.578161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.984 [2024-11-26 21:07:15.578188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.984 qpair failed and we were unable to recover it. 00:26:24.984 [2024-11-26 21:07:15.578326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.984 [2024-11-26 21:07:15.578354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.984 qpair failed and we were unable to recover it. 00:26:24.984 [2024-11-26 21:07:15.578473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.984 [2024-11-26 21:07:15.578519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.984 qpair failed and we were unable to recover it. 00:26:24.984 [2024-11-26 21:07:15.578711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.984 [2024-11-26 21:07:15.578741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.984 qpair failed and we were unable to recover it. 
00:26:24.984 [2024-11-26 21:07:15.578865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.984 [2024-11-26 21:07:15.578892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.984 qpair failed and we were unable to recover it. 00:26:24.984 [2024-11-26 21:07:15.579027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.984 [2024-11-26 21:07:15.579055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.984 qpair failed and we were unable to recover it. 00:26:24.984 [2024-11-26 21:07:15.579182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.984 [2024-11-26 21:07:15.579209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.984 qpair failed and we were unable to recover it. 00:26:24.984 [2024-11-26 21:07:15.579363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.984 [2024-11-26 21:07:15.579403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.984 qpair failed and we were unable to recover it. 00:26:24.984 [2024-11-26 21:07:15.579552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.984 [2024-11-26 21:07:15.579580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.984 qpair failed and we were unable to recover it. 
00:26:24.985 [2024-11-26 21:07:15.579684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.985 [2024-11-26 21:07:15.579721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.985 qpair failed and we were unable to recover it. 00:26:24.985 [2024-11-26 21:07:15.579867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.985 [2024-11-26 21:07:15.579895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.985 qpair failed and we were unable to recover it. 00:26:24.985 [2024-11-26 21:07:15.580027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.985 [2024-11-26 21:07:15.580054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.985 qpair failed and we were unable to recover it. 00:26:24.985 [2024-11-26 21:07:15.580188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.985 [2024-11-26 21:07:15.580215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.985 qpair failed and we were unable to recover it. 00:26:24.985 [2024-11-26 21:07:15.580356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.985 [2024-11-26 21:07:15.580385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.985 qpair failed and we were unable to recover it. 
00:26:24.985 [2024-11-26 21:07:15.580554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.985 [2024-11-26 21:07:15.580584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.985 qpair failed and we were unable to recover it. 00:26:24.985 [2024-11-26 21:07:15.580716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.985 [2024-11-26 21:07:15.580745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.985 qpair failed and we were unable to recover it. 00:26:24.985 [2024-11-26 21:07:15.580923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.985 [2024-11-26 21:07:15.580951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.985 qpair failed and we were unable to recover it. 00:26:24.985 [2024-11-26 21:07:15.581084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.985 [2024-11-26 21:07:15.581112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.985 qpair failed and we were unable to recover it. 00:26:24.985 [2024-11-26 21:07:15.581242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.985 [2024-11-26 21:07:15.581269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.985 qpair failed and we were unable to recover it. 
00:26:24.985 [2024-11-26 21:07:15.581380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.985 [2024-11-26 21:07:15.581409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.985 qpair failed and we were unable to recover it. 00:26:24.985 [2024-11-26 21:07:15.581550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.985 [2024-11-26 21:07:15.581579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.985 qpair failed and we were unable to recover it. 00:26:24.985 [2024-11-26 21:07:15.581701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.985 [2024-11-26 21:07:15.581742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.985 qpair failed and we were unable to recover it. 00:26:24.985 [2024-11-26 21:07:15.581915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.985 [2024-11-26 21:07:15.581944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.985 qpair failed and we were unable to recover it. 00:26:24.985 [2024-11-26 21:07:15.582052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.985 [2024-11-26 21:07:15.582080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.985 qpair failed and we were unable to recover it. 
00:26:24.985 [2024-11-26 21:07:15.582196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.985 [2024-11-26 21:07:15.582222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.985 qpair failed and we were unable to recover it. 00:26:24.985 [2024-11-26 21:07:15.582337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.985 [2024-11-26 21:07:15.582365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.985 qpair failed and we were unable to recover it. 00:26:24.985 [2024-11-26 21:07:15.582510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.985 [2024-11-26 21:07:15.582537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.985 qpair failed and we were unable to recover it. 00:26:24.985 [2024-11-26 21:07:15.582674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.985 [2024-11-26 21:07:15.582714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.985 qpair failed and we were unable to recover it. 00:26:24.985 [2024-11-26 21:07:15.582876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.985 [2024-11-26 21:07:15.582903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.985 qpair failed and we were unable to recover it. 00:26:24.985 [2024-11-26 21:07:15.582999] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:26:24.985 [2024-11-26 21:07:15.583047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.985 [2024-11-26 21:07:15.583076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.985 qpair failed and we were unable to recover it. 00:26:24.985 [2024-11-26 21:07:15.583085] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:24.985 [2024-11-26 21:07:15.583213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.985 [2024-11-26 21:07:15.583239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.985 qpair failed and we were unable to recover it. 00:26:24.985 [2024-11-26 21:07:15.583344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.985 [2024-11-26 21:07:15.583371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.985 qpair failed and we were unable to recover it. 00:26:24.985 [2024-11-26 21:07:15.583480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.985 [2024-11-26 21:07:15.583506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.985 qpair failed and we were unable to recover it. 00:26:24.985 [2024-11-26 21:07:15.583656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.985 [2024-11-26 21:07:15.583701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.985 qpair failed and we were unable to recover it. 
00:26:24.985 [2024-11-26 21:07:15.583842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.985 [2024-11-26 21:07:15.583871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.985 qpair failed and we were unable to recover it. 00:26:24.985 [2024-11-26 21:07:15.583996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.985 [2024-11-26 21:07:15.584036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.985 qpair failed and we were unable to recover it. 00:26:24.986 [2024-11-26 21:07:15.584212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.986 [2024-11-26 21:07:15.584241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.986 qpair failed and we were unable to recover it. 00:26:24.986 [2024-11-26 21:07:15.584380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.986 [2024-11-26 21:07:15.584409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.986 qpair failed and we were unable to recover it. 00:26:24.986 [2024-11-26 21:07:15.584544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.986 [2024-11-26 21:07:15.584571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.986 qpair failed and we were unable to recover it. 
00:26:24.986 [2024-11-26 21:07:15.584700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.986 [2024-11-26 21:07:15.584728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.986 qpair failed and we were unable to recover it. 00:26:24.986 [2024-11-26 21:07:15.584868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.986 [2024-11-26 21:07:15.584896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.986 qpair failed and we were unable to recover it. 00:26:24.986 [2024-11-26 21:07:15.585116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.986 [2024-11-26 21:07:15.585162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.986 qpair failed and we were unable to recover it. 00:26:24.986 [2024-11-26 21:07:15.585308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.986 [2024-11-26 21:07:15.585336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.986 qpair failed and we were unable to recover it. 00:26:24.986 [2024-11-26 21:07:15.585474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.986 [2024-11-26 21:07:15.585501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.986 qpair failed and we were unable to recover it. 
00:26:24.986 [2024-11-26 21:07:15.585609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.986 [2024-11-26 21:07:15.585636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.986 qpair failed and we were unable to recover it. 00:26:24.986 [2024-11-26 21:07:15.585775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.986 [2024-11-26 21:07:15.585816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.986 qpair failed and we were unable to recover it. 00:26:24.986 [2024-11-26 21:07:15.585962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.986 [2024-11-26 21:07:15.585992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.986 qpair failed and we were unable to recover it. 00:26:24.986 [2024-11-26 21:07:15.586156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.986 [2024-11-26 21:07:15.586185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.986 qpair failed and we were unable to recover it. 00:26:24.986 [2024-11-26 21:07:15.586347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.986 [2024-11-26 21:07:15.586375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.986 qpair failed and we were unable to recover it. 
00:26:24.986 [2024-11-26 21:07:15.586477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.986 [2024-11-26 21:07:15.586505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.986 qpair failed and we were unable to recover it. 00:26:24.986 [2024-11-26 21:07:15.586618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.986 [2024-11-26 21:07:15.586645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.986 qpair failed and we were unable to recover it. 00:26:24.986 [2024-11-26 21:07:15.586786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.986 [2024-11-26 21:07:15.586814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.986 qpair failed and we were unable to recover it. 00:26:24.986 [2024-11-26 21:07:15.586951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.986 [2024-11-26 21:07:15.586979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.986 qpair failed and we were unable to recover it. 00:26:24.986 [2024-11-26 21:07:15.587116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.986 [2024-11-26 21:07:15.587142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.986 qpair failed and we were unable to recover it. 
00:26:24.986 [2024-11-26 21:07:15.587252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.986 [2024-11-26 21:07:15.587279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.986 qpair failed and we were unable to recover it.
00:26:24.986 [2024-11-26 21:07:15.587439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.986 [2024-11-26 21:07:15.587480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.986 qpair failed and we were unable to recover it.
00:26:24.986 [2024-11-26 21:07:15.587634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.986 [2024-11-26 21:07:15.587663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.986 qpair failed and we were unable to recover it.
00:26:24.986 [2024-11-26 21:07:15.587788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.986 [2024-11-26 21:07:15.587817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420
00:26:24.986 qpair failed and we were unable to recover it.
00:26:24.986 [2024-11-26 21:07:15.587925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.986 [2024-11-26 21:07:15.587953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420
00:26:24.986 qpair failed and we were unable to recover it.
00:26:24.986 [2024-11-26 21:07:15.588090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.986 [2024-11-26 21:07:15.588117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420
00:26:24.986 qpair failed and we were unable to recover it.
00:26:24.986 [2024-11-26 21:07:15.588257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.986 [2024-11-26 21:07:15.588285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420
00:26:24.986 qpair failed and we were unable to recover it.
00:26:24.986 [2024-11-26 21:07:15.588437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.986 [2024-11-26 21:07:15.588464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420
00:26:24.986 qpair failed and we were unable to recover it.
00:26:24.986 [2024-11-26 21:07:15.588597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.986 [2024-11-26 21:07:15.588626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420
00:26:24.986 qpair failed and we were unable to recover it.
00:26:24.986 [2024-11-26 21:07:15.588766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.987 [2024-11-26 21:07:15.588794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420
00:26:24.987 qpair failed and we were unable to recover it.
00:26:24.987 [2024-11-26 21:07:15.588931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.987 [2024-11-26 21:07:15.588959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.987 qpair failed and we were unable to recover it.
00:26:24.987 [2024-11-26 21:07:15.589095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.987 [2024-11-26 21:07:15.589122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.987 qpair failed and we were unable to recover it.
00:26:24.987 [2024-11-26 21:07:15.589258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.987 [2024-11-26 21:07:15.589286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.987 qpair failed and we were unable to recover it.
00:26:24.987 [2024-11-26 21:07:15.589419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.987 [2024-11-26 21:07:15.589445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.987 qpair failed and we were unable to recover it.
00:26:24.987 [2024-11-26 21:07:15.589567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.987 [2024-11-26 21:07:15.589607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.987 qpair failed and we were unable to recover it.
00:26:24.987 [2024-11-26 21:07:15.589753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.987 [2024-11-26 21:07:15.589783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.987 qpair failed and we were unable to recover it.
00:26:24.987 [2024-11-26 21:07:15.589915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.987 [2024-11-26 21:07:15.589942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.987 qpair failed and we were unable to recover it.
00:26:24.987 [2024-11-26 21:07:15.590082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.987 [2024-11-26 21:07:15.590109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.987 qpair failed and we were unable to recover it.
00:26:24.987 [2024-11-26 21:07:15.590249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.987 [2024-11-26 21:07:15.590275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.987 qpair failed and we were unable to recover it.
00:26:24.987 [2024-11-26 21:07:15.590382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.987 [2024-11-26 21:07:15.590410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420
00:26:24.987 qpair failed and we were unable to recover it.
00:26:24.987 [2024-11-26 21:07:15.590546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.987 [2024-11-26 21:07:15.590573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420
00:26:24.987 qpair failed and we were unable to recover it.
00:26:24.987 [2024-11-26 21:07:15.590734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.987 [2024-11-26 21:07:15.590761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420
00:26:24.987 qpair failed and we were unable to recover it.
00:26:24.987 [2024-11-26 21:07:15.590895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.987 [2024-11-26 21:07:15.590922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420
00:26:24.987 qpair failed and we were unable to recover it.
00:26:24.987 [2024-11-26 21:07:15.591060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.987 [2024-11-26 21:07:15.591088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420
00:26:24.987 qpair failed and we were unable to recover it.
00:26:24.987 [2024-11-26 21:07:15.591222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.987 [2024-11-26 21:07:15.591249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420
00:26:24.987 qpair failed and we were unable to recover it.
00:26:24.987 [2024-11-26 21:07:15.591391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.987 [2024-11-26 21:07:15.591420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.987 qpair failed and we were unable to recover it.
00:26:24.987 [2024-11-26 21:07:15.591556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.987 [2024-11-26 21:07:15.591585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.987 qpair failed and we were unable to recover it.
00:26:24.987 [2024-11-26 21:07:15.591735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.987 [2024-11-26 21:07:15.591776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.987 qpair failed and we were unable to recover it.
00:26:24.987 [2024-11-26 21:07:15.591897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.987 [2024-11-26 21:07:15.591926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.987 qpair failed and we were unable to recover it.
00:26:24.987 [2024-11-26 21:07:15.592051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.987 [2024-11-26 21:07:15.592078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.987 qpair failed and we were unable to recover it.
00:26:24.987 [2024-11-26 21:07:15.592214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.987 [2024-11-26 21:07:15.592242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.987 qpair failed and we were unable to recover it.
00:26:24.987 [2024-11-26 21:07:15.592354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.987 [2024-11-26 21:07:15.592382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420
00:26:24.987 qpair failed and we were unable to recover it.
00:26:24.987 [2024-11-26 21:07:15.592518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.987 [2024-11-26 21:07:15.592545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420
00:26:24.987 qpair failed and we were unable to recover it.
00:26:24.987 [2024-11-26 21:07:15.592682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.987 [2024-11-26 21:07:15.592717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420
00:26:24.987 qpair failed and we were unable to recover it.
00:26:24.987 [2024-11-26 21:07:15.592822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.987 [2024-11-26 21:07:15.592850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420
00:26:24.987 qpair failed and we were unable to recover it.
00:26:24.987 [2024-11-26 21:07:15.592999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.987 [2024-11-26 21:07:15.593038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.987 qpair failed and we were unable to recover it.
00:26:24.987 [2024-11-26 21:07:15.593186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.987 [2024-11-26 21:07:15.593216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.987 qpair failed and we were unable to recover it.
00:26:24.987 [2024-11-26 21:07:15.593319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.987 [2024-11-26 21:07:15.593346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.987 qpair failed and we were unable to recover it.
00:26:24.987 [2024-11-26 21:07:15.593451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.987 [2024-11-26 21:07:15.593478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.987 qpair failed and we were unable to recover it.
00:26:24.987 [2024-11-26 21:07:15.593611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.987 [2024-11-26 21:07:15.593651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.987 qpair failed and we were unable to recover it.
00:26:24.987 [2024-11-26 21:07:15.593778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.987 [2024-11-26 21:07:15.593808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.987 qpair failed and we were unable to recover it.
00:26:24.987 [2024-11-26 21:07:15.593979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.988 [2024-11-26 21:07:15.594007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.988 qpair failed and we were unable to recover it.
00:26:24.988 [2024-11-26 21:07:15.594167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.988 [2024-11-26 21:07:15.594194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.988 qpair failed and we were unable to recover it.
00:26:24.988 [2024-11-26 21:07:15.594309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.988 [2024-11-26 21:07:15.594347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.988 qpair failed and we were unable to recover it.
00:26:24.988 [2024-11-26 21:07:15.594456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.988 [2024-11-26 21:07:15.594490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.988 qpair failed and we were unable to recover it.
00:26:24.988 [2024-11-26 21:07:15.594605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.988 [2024-11-26 21:07:15.594631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.988 qpair failed and we were unable to recover it.
00:26:24.988 [2024-11-26 21:07:15.594746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.988 [2024-11-26 21:07:15.594773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.988 qpair failed and we were unable to recover it.
00:26:24.988 [2024-11-26 21:07:15.594914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.988 [2024-11-26 21:07:15.594940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.988 qpair failed and we were unable to recover it.
00:26:24.988 [2024-11-26 21:07:15.595045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.988 [2024-11-26 21:07:15.595072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.988 qpair failed and we were unable to recover it.
00:26:24.988 [2024-11-26 21:07:15.595202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.988 [2024-11-26 21:07:15.595229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.988 qpair failed and we were unable to recover it.
00:26:24.988 [2024-11-26 21:07:15.595358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.988 [2024-11-26 21:07:15.595385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.988 qpair failed and we were unable to recover it.
00:26:24.988 [2024-11-26 21:07:15.595526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.988 [2024-11-26 21:07:15.595553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.988 qpair failed and we were unable to recover it.
00:26:24.988 [2024-11-26 21:07:15.595717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.988 [2024-11-26 21:07:15.595744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.988 qpair failed and we were unable to recover it.
00:26:24.988 [2024-11-26 21:07:15.595879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.988 [2024-11-26 21:07:15.595906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.988 qpair failed and we were unable to recover it.
00:26:24.988 [2024-11-26 21:07:15.596036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.988 [2024-11-26 21:07:15.596063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.988 qpair failed and we were unable to recover it.
00:26:24.988 [2024-11-26 21:07:15.596206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.988 [2024-11-26 21:07:15.596233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.988 qpair failed and we were unable to recover it.
00:26:24.988 [2024-11-26 21:07:15.596376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.988 [2024-11-26 21:07:15.596406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420
00:26:24.988 qpair failed and we were unable to recover it.
00:26:24.988 [2024-11-26 21:07:15.596546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.988 [2024-11-26 21:07:15.596574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420
00:26:24.988 qpair failed and we were unable to recover it.
00:26:24.988 [2024-11-26 21:07:15.596719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.988 [2024-11-26 21:07:15.596748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420
00:26:24.988 qpair failed and we were unable to recover it.
00:26:24.988 [2024-11-26 21:07:15.596886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.988 [2024-11-26 21:07:15.596913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420
00:26:24.988 qpair failed and we were unable to recover it.
00:26:24.988 [2024-11-26 21:07:15.597050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.988 [2024-11-26 21:07:15.597078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420
00:26:24.988 qpair failed and we were unable to recover it.
00:26:24.988 [2024-11-26 21:07:15.597212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.988 [2024-11-26 21:07:15.597239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420
00:26:24.988 qpair failed and we were unable to recover it.
00:26:24.988 [2024-11-26 21:07:15.597375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.988 [2024-11-26 21:07:15.597403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.988 qpair failed and we were unable to recover it.
00:26:24.988 [2024-11-26 21:07:15.597531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.988 [2024-11-26 21:07:15.597558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.988 qpair failed and we were unable to recover it.
00:26:24.988 [2024-11-26 21:07:15.597691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.988 [2024-11-26 21:07:15.597719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.988 qpair failed and we were unable to recover it.
00:26:24.988 [2024-11-26 21:07:15.597825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.988 [2024-11-26 21:07:15.597852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.988 qpair failed and we were unable to recover it.
00:26:24.988 [2024-11-26 21:07:15.597966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.988 [2024-11-26 21:07:15.597993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.988 qpair failed and we were unable to recover it.
00:26:24.988 [2024-11-26 21:07:15.598103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.988 [2024-11-26 21:07:15.598130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.988 qpair failed and we were unable to recover it.
00:26:24.988 [2024-11-26 21:07:15.598269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.988 [2024-11-26 21:07:15.598298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420
00:26:24.988 qpair failed and we were unable to recover it.
00:26:24.988 [2024-11-26 21:07:15.598460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.988 [2024-11-26 21:07:15.598487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420
00:26:24.988 qpair failed and we were unable to recover it.
00:26:24.988 [2024-11-26 21:07:15.598615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.988 [2024-11-26 21:07:15.598642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420
00:26:24.988 qpair failed and we were unable to recover it.
00:26:24.988 [2024-11-26 21:07:15.598788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.988 [2024-11-26 21:07:15.598815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420
00:26:24.988 qpair failed and we were unable to recover it.
00:26:24.988 [2024-11-26 21:07:15.598926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.988 [2024-11-26 21:07:15.598954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420
00:26:24.988 qpair failed and we were unable to recover it.
00:26:24.988 [2024-11-26 21:07:15.599114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.988 [2024-11-26 21:07:15.599142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420
00:26:24.988 qpair failed and we were unable to recover it.
00:26:24.989 [2024-11-26 21:07:15.599276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.989 [2024-11-26 21:07:15.599304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.989 qpair failed and we were unable to recover it.
00:26:24.989 [2024-11-26 21:07:15.599466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.989 [2024-11-26 21:07:15.599492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.989 qpair failed and we were unable to recover it.
00:26:24.989 [2024-11-26 21:07:15.599643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.989 [2024-11-26 21:07:15.599682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.989 qpair failed and we were unable to recover it.
00:26:24.989 [2024-11-26 21:07:15.599809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.989 [2024-11-26 21:07:15.599839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.989 qpair failed and we were unable to recover it.
00:26:24.989 [2024-11-26 21:07:15.600003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.989 [2024-11-26 21:07:15.600030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.989 qpair failed and we were unable to recover it.
00:26:24.989 [2024-11-26 21:07:15.600167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.989 [2024-11-26 21:07:15.600194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.989 qpair failed and we were unable to recover it.
00:26:24.989 [2024-11-26 21:07:15.600328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.989 [2024-11-26 21:07:15.600355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.989 qpair failed and we were unable to recover it.
00:26:24.989 [2024-11-26 21:07:15.600542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.989 [2024-11-26 21:07:15.600588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.989 qpair failed and we were unable to recover it.
00:26:24.989 [2024-11-26 21:07:15.600714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.989 [2024-11-26 21:07:15.600744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.989 qpair failed and we were unable to recover it.
00:26:24.989 [2024-11-26 21:07:15.600961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.989 [2024-11-26 21:07:15.600988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.989 qpair failed and we were unable to recover it.
00:26:24.989 [2024-11-26 21:07:15.601104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.989 [2024-11-26 21:07:15.601131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.989 qpair failed and we were unable to recover it.
00:26:24.989 [2024-11-26 21:07:15.601271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.989 [2024-11-26 21:07:15.601298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.989 qpair failed and we were unable to recover it.
00:26:24.989 [2024-11-26 21:07:15.601435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.989 [2024-11-26 21:07:15.601462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.989 qpair failed and we were unable to recover it.
00:26:24.989 [2024-11-26 21:07:15.601623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.989 [2024-11-26 21:07:15.601650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.989 qpair failed and we were unable to recover it.
00:26:24.989 [2024-11-26 21:07:15.601809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.989 [2024-11-26 21:07:15.601849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.989 qpair failed and we were unable to recover it.
00:26:24.989 [2024-11-26 21:07:15.601992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.989 [2024-11-26 21:07:15.602020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.989 qpair failed and we were unable to recover it.
00:26:24.989 [2024-11-26 21:07:15.602154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.989 [2024-11-26 21:07:15.602181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.989 qpair failed and we were unable to recover it.
00:26:24.989 [2024-11-26 21:07:15.602306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.989 [2024-11-26 21:07:15.602333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.989 qpair failed and we were unable to recover it.
00:26:24.989 [2024-11-26 21:07:15.602467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.989 [2024-11-26 21:07:15.602496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420
00:26:24.989 qpair failed and we were unable to recover it.
00:26:24.989 [2024-11-26 21:07:15.602608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.989 [2024-11-26 21:07:15.602635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420
00:26:24.989 qpair failed and we were unable to recover it.
00:26:24.989 [2024-11-26 21:07:15.602779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.989 [2024-11-26 21:07:15.602819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.989 qpair failed and we were unable to recover it.
00:26:24.989 [2024-11-26 21:07:15.602943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.989 [2024-11-26 21:07:15.602972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.989 qpair failed and we were unable to recover it.
00:26:24.989 [2024-11-26 21:07:15.603114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.989 [2024-11-26 21:07:15.603142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.989 qpair failed and we were unable to recover it.
00:26:24.989 [2024-11-26 21:07:15.603302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.989 [2024-11-26 21:07:15.603329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.989 qpair failed and we were unable to recover it.
00:26:24.989 [2024-11-26 21:07:15.603468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.989 [2024-11-26 21:07:15.603496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.989 qpair failed and we were unable to recover it.
00:26:24.989 [2024-11-26 21:07:15.603609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.989 [2024-11-26 21:07:15.603636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.989 qpair failed and we were unable to recover it.
00:26:24.989 [2024-11-26 21:07:15.603753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.989 [2024-11-26 21:07:15.603783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420
00:26:24.989 qpair failed and we were unable to recover it.
00:26:24.989 [2024-11-26 21:07:15.603922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.989 [2024-11-26 21:07:15.603949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420
00:26:24.989 qpair failed and we were unable to recover it.
00:26:24.989 [2024-11-26 21:07:15.604112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.989 [2024-11-26 21:07:15.604139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420
00:26:24.989 qpair failed and we were unable to recover it.
00:26:24.989 [2024-11-26 21:07:15.604288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.989 [2024-11-26 21:07:15.604315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420
00:26:24.989 qpair failed and we were unable to recover it.
00:26:24.989 [2024-11-26 21:07:15.604459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.989 [2024-11-26 21:07:15.604487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.989 qpair failed and we were unable to recover it.
00:26:24.989 [2024-11-26 21:07:15.604652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.989 [2024-11-26 21:07:15.604679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.989 qpair failed and we were unable to recover it.
00:26:24.989 [2024-11-26 21:07:15.604802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.989 [2024-11-26 21:07:15.604830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.989 qpair failed and we were unable to recover it.
00:26:24.989 [2024-11-26 21:07:15.604941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.989 [2024-11-26 21:07:15.604968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.989 qpair failed and we were unable to recover it.
00:26:24.989 [2024-11-26 21:07:15.605100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.989 [2024-11-26 21:07:15.605132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.989 qpair failed and we were unable to recover it.
00:26:24.989 [2024-11-26 21:07:15.605299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.989 [2024-11-26 21:07:15.605327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.989 qpair failed and we were unable to recover it.
00:26:24.989 [2024-11-26 21:07:15.605465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.989 [2024-11-26 21:07:15.605493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.989 qpair failed and we were unable to recover it.
00:26:24.990 [2024-11-26 21:07:15.605638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.990 [2024-11-26 21:07:15.605665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.990 qpair failed and we were unable to recover it.
00:26:24.990 [2024-11-26 21:07:15.605778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.990 [2024-11-26 21:07:15.605806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.990 qpair failed and we were unable to recover it.
00:26:24.990 [2024-11-26 21:07:15.605945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.990 [2024-11-26 21:07:15.605971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.990 qpair failed and we were unable to recover it.
00:26:24.990 [2024-11-26 21:07:15.606132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.990 [2024-11-26 21:07:15.606159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.990 qpair failed and we were unable to recover it.
00:26:24.990 [2024-11-26 21:07:15.606269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.990 [2024-11-26 21:07:15.606296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.990 qpair failed and we were unable to recover it. 00:26:24.990 [2024-11-26 21:07:15.606414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.990 [2024-11-26 21:07:15.606443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.990 qpair failed and we were unable to recover it. 00:26:24.990 [2024-11-26 21:07:15.606617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.990 [2024-11-26 21:07:15.606645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.990 qpair failed and we were unable to recover it. 00:26:24.990 [2024-11-26 21:07:15.606780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.990 [2024-11-26 21:07:15.606808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.990 qpair failed and we were unable to recover it. 00:26:24.990 [2024-11-26 21:07:15.606919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.990 [2024-11-26 21:07:15.606946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.990 qpair failed and we were unable to recover it. 
00:26:24.990 [2024-11-26 21:07:15.607094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.990 [2024-11-26 21:07:15.607121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.990 qpair failed and we were unable to recover it. 00:26:24.990 [2024-11-26 21:07:15.607256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.990 [2024-11-26 21:07:15.607283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.990 qpair failed and we were unable to recover it. 00:26:24.990 [2024-11-26 21:07:15.607456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.990 [2024-11-26 21:07:15.607483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.990 qpair failed and we were unable to recover it. 00:26:24.990 [2024-11-26 21:07:15.607609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.990 [2024-11-26 21:07:15.607635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.990 qpair failed and we were unable to recover it. 00:26:24.990 [2024-11-26 21:07:15.607772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.990 [2024-11-26 21:07:15.607800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.990 qpair failed and we were unable to recover it. 
00:26:24.990 [2024-11-26 21:07:15.607937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.990 [2024-11-26 21:07:15.607963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.990 qpair failed and we were unable to recover it. 00:26:24.990 [2024-11-26 21:07:15.608126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.990 [2024-11-26 21:07:15.608152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.990 qpair failed and we were unable to recover it. 00:26:24.990 [2024-11-26 21:07:15.608258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.990 [2024-11-26 21:07:15.608285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.990 qpair failed and we were unable to recover it. 00:26:24.990 [2024-11-26 21:07:15.608423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.990 [2024-11-26 21:07:15.608449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.990 qpair failed and we were unable to recover it. 00:26:24.990 [2024-11-26 21:07:15.608560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.990 [2024-11-26 21:07:15.608587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.990 qpair failed and we were unable to recover it. 
00:26:24.990 [2024-11-26 21:07:15.608719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.990 [2024-11-26 21:07:15.608747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.990 qpair failed and we were unable to recover it. 00:26:24.990 [2024-11-26 21:07:15.608878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.990 [2024-11-26 21:07:15.608905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.990 qpair failed and we were unable to recover it. 00:26:24.990 [2024-11-26 21:07:15.609038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.990 [2024-11-26 21:07:15.609065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.990 qpair failed and we were unable to recover it. 00:26:24.990 [2024-11-26 21:07:15.609182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.990 [2024-11-26 21:07:15.609208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.990 qpair failed and we were unable to recover it. 00:26:24.990 [2024-11-26 21:07:15.609372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.990 [2024-11-26 21:07:15.609399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.990 qpair failed and we were unable to recover it. 
00:26:24.990 [2024-11-26 21:07:15.609513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.990 [2024-11-26 21:07:15.609546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.990 qpair failed and we were unable to recover it. 00:26:24.990 [2024-11-26 21:07:15.609696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.990 [2024-11-26 21:07:15.609724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.990 qpair failed and we were unable to recover it. 00:26:24.990 [2024-11-26 21:07:15.609884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.990 [2024-11-26 21:07:15.609910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.990 qpair failed and we were unable to recover it. 00:26:24.990 [2024-11-26 21:07:15.610010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.990 [2024-11-26 21:07:15.610037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.990 qpair failed and we were unable to recover it. 00:26:24.990 [2024-11-26 21:07:15.610162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.990 [2024-11-26 21:07:15.610188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.990 qpair failed and we were unable to recover it. 
00:26:24.990 [2024-11-26 21:07:15.610323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.990 [2024-11-26 21:07:15.610350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.990 qpair failed and we were unable to recover it. 00:26:24.990 [2024-11-26 21:07:15.610483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.991 [2024-11-26 21:07:15.610509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.991 qpair failed and we were unable to recover it. 00:26:24.991 [2024-11-26 21:07:15.610610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.991 [2024-11-26 21:07:15.610636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.991 qpair failed and we were unable to recover it. 00:26:24.991 [2024-11-26 21:07:15.610804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.991 [2024-11-26 21:07:15.610832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.991 qpair failed and we were unable to recover it. 00:26:24.991 [2024-11-26 21:07:15.610934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.991 [2024-11-26 21:07:15.610960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.991 qpair failed and we were unable to recover it. 
00:26:24.991 [2024-11-26 21:07:15.611095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.991 [2024-11-26 21:07:15.611122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.991 qpair failed and we were unable to recover it. 00:26:24.991 [2024-11-26 21:07:15.611229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.991 [2024-11-26 21:07:15.611256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.991 qpair failed and we were unable to recover it. 00:26:24.991 [2024-11-26 21:07:15.611403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.991 [2024-11-26 21:07:15.611430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.991 qpair failed and we were unable to recover it. 00:26:24.991 [2024-11-26 21:07:15.611565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.991 [2024-11-26 21:07:15.611591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.991 qpair failed and we were unable to recover it. 00:26:24.991 [2024-11-26 21:07:15.611724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.991 [2024-11-26 21:07:15.611766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.991 qpair failed and we were unable to recover it. 
00:26:24.991 [2024-11-26 21:07:15.611913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.991 [2024-11-26 21:07:15.611952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.991 qpair failed and we were unable to recover it. 00:26:24.991 [2024-11-26 21:07:15.612077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.991 [2024-11-26 21:07:15.612117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.991 qpair failed and we were unable to recover it. 00:26:24.991 [2024-11-26 21:07:15.612252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.991 [2024-11-26 21:07:15.612281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.991 qpair failed and we were unable to recover it. 00:26:24.991 [2024-11-26 21:07:15.612416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.991 [2024-11-26 21:07:15.612443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.991 qpair failed and we were unable to recover it. 00:26:24.991 [2024-11-26 21:07:15.612611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.991 [2024-11-26 21:07:15.612639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.991 qpair failed and we were unable to recover it. 
00:26:24.991 [2024-11-26 21:07:15.612764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.991 [2024-11-26 21:07:15.612793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.991 qpair failed and we were unable to recover it. 00:26:24.991 [2024-11-26 21:07:15.612930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.991 [2024-11-26 21:07:15.612958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.991 qpair failed and we were unable to recover it. 00:26:24.991 [2024-11-26 21:07:15.613121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.991 [2024-11-26 21:07:15.613147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.991 qpair failed and we were unable to recover it. 00:26:24.991 [2024-11-26 21:07:15.613305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.991 [2024-11-26 21:07:15.613331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.991 qpair failed and we were unable to recover it. 00:26:24.991 [2024-11-26 21:07:15.613467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.991 [2024-11-26 21:07:15.613493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.991 qpair failed and we were unable to recover it. 
00:26:24.991 [2024-11-26 21:07:15.613628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.991 [2024-11-26 21:07:15.613654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.991 qpair failed and we were unable to recover it. 00:26:24.991 [2024-11-26 21:07:15.613776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.991 [2024-11-26 21:07:15.613803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.991 qpair failed and we were unable to recover it. 00:26:24.991 [2024-11-26 21:07:15.613919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.991 [2024-11-26 21:07:15.613950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.991 qpair failed and we were unable to recover it. 00:26:24.991 [2024-11-26 21:07:15.614082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.991 [2024-11-26 21:07:15.614109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.991 qpair failed and we were unable to recover it. 00:26:24.991 [2024-11-26 21:07:15.614243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.991 [2024-11-26 21:07:15.614270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.991 qpair failed and we were unable to recover it. 
00:26:24.991 [2024-11-26 21:07:15.614394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.991 [2024-11-26 21:07:15.614421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.991 qpair failed and we were unable to recover it. 00:26:24.991 [2024-11-26 21:07:15.614524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.991 [2024-11-26 21:07:15.614550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.991 qpair failed and we were unable to recover it. 00:26:24.991 [2024-11-26 21:07:15.614658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.991 [2024-11-26 21:07:15.614693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.991 qpair failed and we were unable to recover it. 00:26:24.991 [2024-11-26 21:07:15.614805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.992 [2024-11-26 21:07:15.614832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.992 qpair failed and we were unable to recover it. 00:26:24.992 [2024-11-26 21:07:15.614968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.992 [2024-11-26 21:07:15.614995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.992 qpair failed and we were unable to recover it. 
00:26:24.992 [2024-11-26 21:07:15.615105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.992 [2024-11-26 21:07:15.615132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.992 qpair failed and we were unable to recover it. 00:26:24.992 [2024-11-26 21:07:15.615244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.992 [2024-11-26 21:07:15.615270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.992 qpair failed and we were unable to recover it. 00:26:24.992 [2024-11-26 21:07:15.615374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.992 [2024-11-26 21:07:15.615401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.992 qpair failed and we were unable to recover it. 00:26:24.992 [2024-11-26 21:07:15.615522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.992 [2024-11-26 21:07:15.615562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.992 qpair failed and we were unable to recover it. 00:26:24.992 [2024-11-26 21:07:15.615683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.992 [2024-11-26 21:07:15.615718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.992 qpair failed and we were unable to recover it. 
00:26:24.992 [2024-11-26 21:07:15.615867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.992 [2024-11-26 21:07:15.615894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.992 qpair failed and we were unable to recover it. 00:26:24.992 [2024-11-26 21:07:15.616033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.992 [2024-11-26 21:07:15.616061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.992 qpair failed and we were unable to recover it. 00:26:24.992 [2024-11-26 21:07:15.616198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.992 [2024-11-26 21:07:15.616224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.992 qpair failed and we were unable to recover it. 00:26:24.992 [2024-11-26 21:07:15.616365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.992 [2024-11-26 21:07:15.616392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.992 qpair failed and we were unable to recover it. 00:26:24.992 [2024-11-26 21:07:15.616530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.992 [2024-11-26 21:07:15.616557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.992 qpair failed and we were unable to recover it. 
00:26:24.992 [2024-11-26 21:07:15.616675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.992 [2024-11-26 21:07:15.616727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.992 qpair failed and we were unable to recover it. 00:26:24.992 [2024-11-26 21:07:15.616840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.992 [2024-11-26 21:07:15.616869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.992 qpair failed and we were unable to recover it. 00:26:24.992 [2024-11-26 21:07:15.616980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.992 [2024-11-26 21:07:15.617007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.992 qpair failed and we were unable to recover it. 00:26:24.992 [2024-11-26 21:07:15.617119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.992 [2024-11-26 21:07:15.617147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.992 qpair failed and we were unable to recover it. 00:26:24.992 [2024-11-26 21:07:15.617284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.992 [2024-11-26 21:07:15.617311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420 00:26:24.992 qpair failed and we were unable to recover it. 
00:26:24.992 [2024-11-26 21:07:15.617455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.992 [2024-11-26 21:07:15.617484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.992 qpair failed and we were unable to recover it. 00:26:24.992 [2024-11-26 21:07:15.617603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.992 [2024-11-26 21:07:15.617630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.992 qpair failed and we were unable to recover it. 00:26:24.992 [2024-11-26 21:07:15.617786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.992 [2024-11-26 21:07:15.617826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.992 qpair failed and we were unable to recover it. 00:26:24.992 [2024-11-26 21:07:15.617960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.992 [2024-11-26 21:07:15.617989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.992 qpair failed and we were unable to recover it. 00:26:24.992 [2024-11-26 21:07:15.618098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.992 [2024-11-26 21:07:15.618131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.992 qpair failed and we were unable to recover it. 
00:26:24.992 [2024-11-26 21:07:15.618267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.992 [2024-11-26 21:07:15.618295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.992 qpair failed and we were unable to recover it. 00:26:24.992 [2024-11-26 21:07:15.618429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.992 [2024-11-26 21:07:15.618456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420 00:26:24.992 qpair failed and we were unable to recover it. 00:26:24.992 [2024-11-26 21:07:15.618600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.992 [2024-11-26 21:07:15.618628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.992 qpair failed and we were unable to recover it. 00:26:24.992 [2024-11-26 21:07:15.618739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.992 [2024-11-26 21:07:15.618767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.992 qpair failed and we were unable to recover it. 00:26:24.992 [2024-11-26 21:07:15.618884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.992 [2024-11-26 21:07:15.618911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.992 qpair failed and we were unable to recover it. 
00:26:24.992 [2024-11-26 21:07:15.619021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.992 [2024-11-26 21:07:15.619048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.992 qpair failed and we were unable to recover it. 00:26:24.992 [2024-11-26 21:07:15.619160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.992 [2024-11-26 21:07:15.619186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.992 qpair failed and we were unable to recover it. 00:26:24.992 [2024-11-26 21:07:15.619295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.992 [2024-11-26 21:07:15.619323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.992 qpair failed and we were unable to recover it. 00:26:24.992 [2024-11-26 21:07:15.619482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.992 [2024-11-26 21:07:15.619509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.992 qpair failed and we were unable to recover it. 00:26:24.992 [2024-11-26 21:07:15.619620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.992 [2024-11-26 21:07:15.619647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.992 qpair failed and we were unable to recover it. 
00:26:24.992 [2024-11-26 21:07:15.619784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.992 [2024-11-26 21:07:15.619812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.992 qpair failed and we were unable to recover it.
00:26:24.992 [2024-11-26 21:07:15.619951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.992 [2024-11-26 21:07:15.619978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.992 qpair failed and we were unable to recover it.
00:26:24.992 [2024-11-26 21:07:15.620125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.992 [2024-11-26 21:07:15.620152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.992 qpair failed and we were unable to recover it.
00:26:24.992 [2024-11-26 21:07:15.620271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.992 [2024-11-26 21:07:15.620299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.992 qpair failed and we were unable to recover it.
00:26:24.992 [2024-11-26 21:07:15.620442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.993 [2024-11-26 21:07:15.620471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.993 qpair failed and we were unable to recover it.
00:26:24.993 [2024-11-26 21:07:15.620615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.993 [2024-11-26 21:07:15.620642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.993 qpair failed and we were unable to recover it.
00:26:24.993 [2024-11-26 21:07:15.620764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.993 [2024-11-26 21:07:15.620791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.993 qpair failed and we were unable to recover it.
00:26:24.993 [2024-11-26 21:07:15.620951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.993 [2024-11-26 21:07:15.620978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.993 qpair failed and we were unable to recover it.
00:26:24.993 [2024-11-26 21:07:15.621118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.993 [2024-11-26 21:07:15.621145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.993 qpair failed and we were unable to recover it.
00:26:24.993 [2024-11-26 21:07:15.621251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.993 [2024-11-26 21:07:15.621278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.993 qpair failed and we were unable to recover it.
00:26:24.993 [2024-11-26 21:07:15.621391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.993 [2024-11-26 21:07:15.621418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.993 qpair failed and we were unable to recover it.
00:26:24.993 [2024-11-26 21:07:15.621548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.993 [2024-11-26 21:07:15.621575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.993 qpair failed and we were unable to recover it.
00:26:24.993 [2024-11-26 21:07:15.621735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.993 [2024-11-26 21:07:15.621763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.993 qpair failed and we were unable to recover it.
00:26:24.993 [2024-11-26 21:07:15.621889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.993 [2024-11-26 21:07:15.621916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.993 qpair failed and we were unable to recover it.
00:26:24.993 [2024-11-26 21:07:15.622048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.993 [2024-11-26 21:07:15.622076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.993 qpair failed and we were unable to recover it.
00:26:24.993 [2024-11-26 21:07:15.622223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.993 [2024-11-26 21:07:15.622250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.993 qpair failed and we were unable to recover it.
00:26:24.993 [2024-11-26 21:07:15.622367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.993 [2024-11-26 21:07:15.622394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.993 qpair failed and we were unable to recover it.
00:26:24.993 [2024-11-26 21:07:15.622500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.993 [2024-11-26 21:07:15.622528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.993 qpair failed and we were unable to recover it.
00:26:24.993 [2024-11-26 21:07:15.622679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.993 [2024-11-26 21:07:15.622712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.993 qpair failed and we were unable to recover it.
00:26:24.993 [2024-11-26 21:07:15.622859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.993 [2024-11-26 21:07:15.622885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.993 qpair failed and we were unable to recover it.
00:26:24.993 [2024-11-26 21:07:15.623022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.993 [2024-11-26 21:07:15.623049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.993 qpair failed and we were unable to recover it.
00:26:24.993 [2024-11-26 21:07:15.623188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.993 [2024-11-26 21:07:15.623215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.993 qpair failed and we were unable to recover it.
00:26:24.993 [2024-11-26 21:07:15.623358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.993 [2024-11-26 21:07:15.623385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.993 qpair failed and we were unable to recover it.
00:26:24.993 [2024-11-26 21:07:15.623494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.993 [2024-11-26 21:07:15.623521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.993 qpair failed and we were unable to recover it.
00:26:24.993 [2024-11-26 21:07:15.623682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.993 [2024-11-26 21:07:15.623714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.993 qpair failed and we were unable to recover it.
00:26:24.993 [2024-11-26 21:07:15.623846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.993 [2024-11-26 21:07:15.623873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.993 qpair failed and we were unable to recover it.
00:26:24.993 [2024-11-26 21:07:15.624009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.993 [2024-11-26 21:07:15.624036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.993 qpair failed and we were unable to recover it.
00:26:24.993 [2024-11-26 21:07:15.624181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.993 [2024-11-26 21:07:15.624208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.993 qpair failed and we were unable to recover it.
00:26:24.993 [2024-11-26 21:07:15.624335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.993 [2024-11-26 21:07:15.624364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.993 qpair failed and we were unable to recover it.
00:26:24.993 [2024-11-26 21:07:15.624500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.993 [2024-11-26 21:07:15.624533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.993 qpair failed and we were unable to recover it.
00:26:24.993 [2024-11-26 21:07:15.624667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.993 [2024-11-26 21:07:15.624707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.993 qpair failed and we were unable to recover it.
00:26:24.993 [2024-11-26 21:07:15.624819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.993 [2024-11-26 21:07:15.624846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.993 qpair failed and we were unable to recover it.
00:26:24.993 [2024-11-26 21:07:15.624962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.993 [2024-11-26 21:07:15.624989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.993 qpair failed and we were unable to recover it.
00:26:24.993 [2024-11-26 21:07:15.625141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.993 [2024-11-26 21:07:15.625167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.993 qpair failed and we were unable to recover it.
00:26:24.993 [2024-11-26 21:07:15.625295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.993 [2024-11-26 21:07:15.625322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.993 qpair failed and we were unable to recover it.
00:26:24.993 [2024-11-26 21:07:15.625461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.993 [2024-11-26 21:07:15.625488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.993 qpair failed and we were unable to recover it.
00:26:24.993 [2024-11-26 21:07:15.625625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.993 [2024-11-26 21:07:15.625653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.994 qpair failed and we were unable to recover it.
00:26:24.994 [2024-11-26 21:07:15.625834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.994 [2024-11-26 21:07:15.625862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.994 qpair failed and we were unable to recover it.
00:26:24.994 [2024-11-26 21:07:15.625990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.994 [2024-11-26 21:07:15.626030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.994 qpair failed and we were unable to recover it.
00:26:24.994 [2024-11-26 21:07:15.626197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.994 [2024-11-26 21:07:15.626226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:24.994 qpair failed and we were unable to recover it.
00:26:24.994 [2024-11-26 21:07:15.626348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.994 [2024-11-26 21:07:15.626377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.994 qpair failed and we were unable to recover it.
00:26:24.994 [2024-11-26 21:07:15.626493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.994 [2024-11-26 21:07:15.626520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.994 qpair failed and we were unable to recover it.
00:26:24.994 [2024-11-26 21:07:15.626627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.994 [2024-11-26 21:07:15.626654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.994 qpair failed and we were unable to recover it.
00:26:24.994 [2024-11-26 21:07:15.626818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.994 [2024-11-26 21:07:15.626846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.994 qpair failed and we were unable to recover it.
00:26:24.994 [2024-11-26 21:07:15.626957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.994 [2024-11-26 21:07:15.626984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.994 qpair failed and we were unable to recover it.
00:26:24.994 [2024-11-26 21:07:15.627094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.994 [2024-11-26 21:07:15.627121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.994 qpair failed and we were unable to recover it.
00:26:24.994 [2024-11-26 21:07:15.627261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.994 [2024-11-26 21:07:15.627289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.994 qpair failed and we were unable to recover it.
00:26:24.994 [2024-11-26 21:07:15.627400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.994 [2024-11-26 21:07:15.627427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.994 qpair failed and we were unable to recover it.
00:26:24.994 [2024-11-26 21:07:15.627565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.994 [2024-11-26 21:07:15.627593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.994 qpair failed and we were unable to recover it.
00:26:24.994 [2024-11-26 21:07:15.627726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.994 [2024-11-26 21:07:15.627754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.994 qpair failed and we were unable to recover it.
00:26:24.994 [2024-11-26 21:07:15.627890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.994 [2024-11-26 21:07:15.627917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.994 qpair failed and we were unable to recover it.
00:26:24.994 [2024-11-26 21:07:15.628058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.994 [2024-11-26 21:07:15.628085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.994 qpair failed and we were unable to recover it.
00:26:24.994 [2024-11-26 21:07:15.628202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.994 [2024-11-26 21:07:15.628231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.994 qpair failed and we were unable to recover it.
00:26:24.994 [2024-11-26 21:07:15.628351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.994 [2024-11-26 21:07:15.628379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.994 qpair failed and we were unable to recover it.
00:26:24.994 [2024-11-26 21:07:15.628523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.994 [2024-11-26 21:07:15.628551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.994 qpair failed and we were unable to recover it.
00:26:24.994 [2024-11-26 21:07:15.628692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.994 [2024-11-26 21:07:15.628720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef0000b90 with addr=10.0.0.2, port=4420
00:26:24.994 qpair failed and we were unable to recover it.
00:26:24.994 [2024-11-26 21:07:15.628843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.994 [2024-11-26 21:07:15.628871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.994 qpair failed and we were unable to recover it.
00:26:24.994 [2024-11-26 21:07:15.629011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.994 [2024-11-26 21:07:15.629039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.994 qpair failed and we were unable to recover it.
00:26:24.994 [2024-11-26 21:07:15.629161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.994 [2024-11-26 21:07:15.629189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.994 qpair failed and we were unable to recover it.
00:26:24.994 [2024-11-26 21:07:15.629346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.994 [2024-11-26 21:07:15.629374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.994 qpair failed and we were unable to recover it.
00:26:24.994 [2024-11-26 21:07:15.629485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.994 [2024-11-26 21:07:15.629514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.994 qpair failed and we were unable to recover it.
00:26:24.994 [2024-11-26 21:07:15.629626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.994 [2024-11-26 21:07:15.629654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.994 qpair failed and we were unable to recover it.
00:26:24.994 [2024-11-26 21:07:15.633709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.994 [2024-11-26 21:07:15.633745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.994 qpair failed and we were unable to recover it.
00:26:24.994 [2024-11-26 21:07:15.633879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.994 [2024-11-26 21:07:15.633912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.994 qpair failed and we were unable to recover it.
00:26:24.994 [2024-11-26 21:07:15.634083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.994 [2024-11-26 21:07:15.634115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.994 qpair failed and we were unable to recover it.
00:26:24.994 [2024-11-26 21:07:15.634248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.994 [2024-11-26 21:07:15.634279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.994 qpair failed and we were unable to recover it.
00:26:24.994 [2024-11-26 21:07:15.634423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.994 [2024-11-26 21:07:15.634454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.994 qpair failed and we were unable to recover it.
00:26:24.994 [2024-11-26 21:07:15.634600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.994 [2024-11-26 21:07:15.634630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.994 qpair failed and we were unable to recover it.
00:26:24.994 [2024-11-26 21:07:15.634752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.994 [2024-11-26 21:07:15.634783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.995 qpair failed and we were unable to recover it.
00:26:24.995 [2024-11-26 21:07:15.634900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.995 [2024-11-26 21:07:15.634936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.995 qpair failed and we were unable to recover it.
00:26:24.995 [2024-11-26 21:07:15.635081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.995 [2024-11-26 21:07:15.635111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.995 qpair failed and we were unable to recover it.
00:26:24.995 [2024-11-26 21:07:15.635234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.995 [2024-11-26 21:07:15.635263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.995 qpair failed and we were unable to recover it.
00:26:24.995 [2024-11-26 21:07:15.635403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.995 [2024-11-26 21:07:15.635432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.995 qpair failed and we were unable to recover it.
00:26:24.995 [2024-11-26 21:07:15.635576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.995 [2024-11-26 21:07:15.635606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.995 qpair failed and we were unable to recover it.
00:26:24.995 [2024-11-26 21:07:15.635736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.995 [2024-11-26 21:07:15.635767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.995 qpair failed and we were unable to recover it.
00:26:24.995 [2024-11-26 21:07:15.635910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.995 [2024-11-26 21:07:15.635949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.995 qpair failed and we were unable to recover it.
00:26:24.995 [2024-11-26 21:07:15.636118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.995 [2024-11-26 21:07:15.636147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.995 qpair failed and we were unable to recover it.
00:26:24.995 [2024-11-26 21:07:15.636270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.995 [2024-11-26 21:07:15.636300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.995 qpair failed and we were unable to recover it.
00:26:24.995 [2024-11-26 21:07:15.636423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.995 [2024-11-26 21:07:15.636453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.995 qpair failed and we were unable to recover it.
00:26:24.995 [2024-11-26 21:07:15.636564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.995 [2024-11-26 21:07:15.636593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.995 qpair failed and we were unable to recover it.
00:26:24.995 [2024-11-26 21:07:15.636773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.995 [2024-11-26 21:07:15.636802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.995 qpair failed and we were unable to recover it.
00:26:24.995 [2024-11-26 21:07:15.636944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.995 [2024-11-26 21:07:15.636972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.995 qpair failed and we were unable to recover it.
00:26:24.995 [2024-11-26 21:07:15.637092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.995 [2024-11-26 21:07:15.637122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.995 qpair failed and we were unable to recover it.
00:26:24.995 [2024-11-26 21:07:15.637246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.995 [2024-11-26 21:07:15.637275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.995 qpair failed and we were unable to recover it.
00:26:24.995 [2024-11-26 21:07:15.637501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.995 [2024-11-26 21:07:15.637530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.995 qpair failed and we were unable to recover it.
00:26:24.995 [2024-11-26 21:07:15.637653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.995 [2024-11-26 21:07:15.637682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.995 qpair failed and we were unable to recover it.
00:26:24.995 [2024-11-26 21:07:15.637832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.995 [2024-11-26 21:07:15.637861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.995 qpair failed and we were unable to recover it.
00:26:24.995 [2024-11-26 21:07:15.637989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.995 [2024-11-26 21:07:15.638018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.995 qpair failed and we were unable to recover it.
00:26:24.995 [2024-11-26 21:07:15.638159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.995 [2024-11-26 21:07:15.638188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.995 qpair failed and we were unable to recover it.
00:26:24.995 [2024-11-26 21:07:15.638328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.995 [2024-11-26 21:07:15.638356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.995 qpair failed and we were unable to recover it.
00:26:24.995 [2024-11-26 21:07:15.638486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.995 [2024-11-26 21:07:15.638515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.995 qpair failed and we were unable to recover it.
00:26:24.995 [2024-11-26 21:07:15.638635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.995 [2024-11-26 21:07:15.638664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.995 qpair failed and we were unable to recover it.
00:26:24.995 [2024-11-26 21:07:15.638849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.995 [2024-11-26 21:07:15.638877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.995 qpair failed and we were unable to recover it.
00:26:24.995 [2024-11-26 21:07:15.638997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.995 [2024-11-26 21:07:15.639025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.995 qpair failed and we were unable to recover it.
00:26:24.995 [2024-11-26 21:07:15.639168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.995 [2024-11-26 21:07:15.639197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:24.995 qpair failed and we were unable to recover it.
00:26:24.995 [2024-11-26 21:07:15.639369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.995 [2024-11-26 21:07:15.639398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.996 qpair failed and we were unable to recover it. 00:26:24.996 [2024-11-26 21:07:15.639544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.996 [2024-11-26 21:07:15.639573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.996 qpair failed and we were unable to recover it. 00:26:24.996 [2024-11-26 21:07:15.639715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.996 [2024-11-26 21:07:15.639745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.996 qpair failed and we were unable to recover it. 00:26:24.996 [2024-11-26 21:07:15.639977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.996 [2024-11-26 21:07:15.640008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.996 qpair failed and we were unable to recover it. 00:26:24.996 [2024-11-26 21:07:15.640153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.996 [2024-11-26 21:07:15.640184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.996 qpair failed and we were unable to recover it. 
00:26:24.996 [2024-11-26 21:07:15.640338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.996 [2024-11-26 21:07:15.640367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.996 qpair failed and we were unable to recover it. 00:26:24.996 [2024-11-26 21:07:15.640487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.996 [2024-11-26 21:07:15.640516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.996 qpair failed and we were unable to recover it. 00:26:24.996 [2024-11-26 21:07:15.640694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.996 [2024-11-26 21:07:15.640735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.996 qpair failed and we were unable to recover it. 00:26:24.996 [2024-11-26 21:07:15.640878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.996 [2024-11-26 21:07:15.640909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.996 qpair failed and we were unable to recover it. 00:26:24.996 [2024-11-26 21:07:15.644713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.996 [2024-11-26 21:07:15.644749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.996 qpair failed and we were unable to recover it. 
00:26:24.996 [2024-11-26 21:07:15.644908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.996 [2024-11-26 21:07:15.644941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.996 qpair failed and we were unable to recover it. 00:26:24.996 [2024-11-26 21:07:15.645099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.996 [2024-11-26 21:07:15.645132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.996 qpair failed and we were unable to recover it. 00:26:24.996 [2024-11-26 21:07:15.645284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.996 [2024-11-26 21:07:15.645315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.996 qpair failed and we were unable to recover it. 00:26:24.996 [2024-11-26 21:07:15.645549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.996 [2024-11-26 21:07:15.645582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.996 qpair failed and we were unable to recover it. 00:26:24.996 [2024-11-26 21:07:15.645738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.996 [2024-11-26 21:07:15.645774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.996 qpair failed and we were unable to recover it. 
00:26:24.996 [2024-11-26 21:07:15.645929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.996 [2024-11-26 21:07:15.645962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.996 qpair failed and we were unable to recover it. 00:26:24.996 [2024-11-26 21:07:15.646107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.996 [2024-11-26 21:07:15.646137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.996 qpair failed and we were unable to recover it. 00:26:24.996 [2024-11-26 21:07:15.646284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.996 [2024-11-26 21:07:15.646314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.996 qpair failed and we were unable to recover it. 00:26:24.996 [2024-11-26 21:07:15.646462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.996 [2024-11-26 21:07:15.646491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.996 qpair failed and we were unable to recover it. 00:26:24.996 [2024-11-26 21:07:15.646632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.996 [2024-11-26 21:07:15.646663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.996 qpair failed and we were unable to recover it. 
00:26:24.996 [2024-11-26 21:07:15.646892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.996 [2024-11-26 21:07:15.646923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.996 qpair failed and we were unable to recover it. 00:26:24.996 [2024-11-26 21:07:15.647060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.996 [2024-11-26 21:07:15.647090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.996 qpair failed and we were unable to recover it. 00:26:24.996 [2024-11-26 21:07:15.647223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.996 [2024-11-26 21:07:15.647254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.996 qpair failed and we were unable to recover it. 00:26:24.996 [2024-11-26 21:07:15.647399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.996 [2024-11-26 21:07:15.647430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.996 qpair failed and we were unable to recover it. 00:26:24.996 [2024-11-26 21:07:15.647574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.996 [2024-11-26 21:07:15.647605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.996 qpair failed and we were unable to recover it. 
00:26:24.996 [2024-11-26 21:07:15.647743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.996 [2024-11-26 21:07:15.647773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.996 qpair failed and we were unable to recover it. 00:26:24.996 [2024-11-26 21:07:15.647904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.996 [2024-11-26 21:07:15.647946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.996 qpair failed and we were unable to recover it. 00:26:24.996 [2024-11-26 21:07:15.648090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.996 [2024-11-26 21:07:15.648119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.996 qpair failed and we were unable to recover it. 00:26:24.996 [2024-11-26 21:07:15.648242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.996 [2024-11-26 21:07:15.648271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.996 qpair failed and we were unable to recover it. 00:26:24.996 [2024-11-26 21:07:15.648498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.996 [2024-11-26 21:07:15.648528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.996 qpair failed and we were unable to recover it. 
00:26:24.996 [2024-11-26 21:07:15.648668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.996 [2024-11-26 21:07:15.648704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.996 qpair failed and we were unable to recover it. 00:26:24.996 [2024-11-26 21:07:15.648842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.996 [2024-11-26 21:07:15.648872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.996 qpair failed and we were unable to recover it. 00:26:24.996 [2024-11-26 21:07:15.649033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.996 [2024-11-26 21:07:15.649062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.996 qpair failed and we were unable to recover it. 00:26:24.996 [2024-11-26 21:07:15.649182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.996 [2024-11-26 21:07:15.649211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.996 qpair failed and we were unable to recover it. 00:26:24.996 [2024-11-26 21:07:15.649336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.996 [2024-11-26 21:07:15.649366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.996 qpair failed and we were unable to recover it. 
00:26:24.996 [2024-11-26 21:07:15.649515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.997 [2024-11-26 21:07:15.649553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.997 qpair failed and we were unable to recover it. 00:26:24.997 [2024-11-26 21:07:15.649704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.997 [2024-11-26 21:07:15.649739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.997 qpair failed and we were unable to recover it. 00:26:24.997 [2024-11-26 21:07:15.649859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.997 [2024-11-26 21:07:15.649888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.997 qpair failed and we were unable to recover it. 00:26:24.997 [2024-11-26 21:07:15.650042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.997 [2024-11-26 21:07:15.650072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.997 qpair failed and we were unable to recover it. 00:26:24.997 [2024-11-26 21:07:15.650186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.997 [2024-11-26 21:07:15.650215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.997 qpair failed and we were unable to recover it. 
00:26:24.997 [2024-11-26 21:07:15.650330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.997 [2024-11-26 21:07:15.650359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.997 qpair failed and we were unable to recover it. 00:26:24.997 [2024-11-26 21:07:15.650540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.997 [2024-11-26 21:07:15.650580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.997 qpair failed and we were unable to recover it. 00:26:24.997 [2024-11-26 21:07:15.650696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.997 [2024-11-26 21:07:15.650736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.997 qpair failed and we were unable to recover it. 00:26:24.997 [2024-11-26 21:07:15.650851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.997 [2024-11-26 21:07:15.650878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.997 qpair failed and we were unable to recover it. 00:26:24.997 [2024-11-26 21:07:15.651036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.997 [2024-11-26 21:07:15.651063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.997 qpair failed and we were unable to recover it. 
00:26:24.997 [2024-11-26 21:07:15.651196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.997 [2024-11-26 21:07:15.651223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.997 qpair failed and we were unable to recover it. 00:26:24.997 [2024-11-26 21:07:15.651343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.997 [2024-11-26 21:07:15.651370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.997 qpair failed and we were unable to recover it. 00:26:24.997 [2024-11-26 21:07:15.651485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.997 [2024-11-26 21:07:15.651513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.997 qpair failed and we were unable to recover it. 00:26:24.997 [2024-11-26 21:07:15.651651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.997 [2024-11-26 21:07:15.651678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.997 qpair failed and we were unable to recover it. 00:26:24.997 [2024-11-26 21:07:15.651798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.997 [2024-11-26 21:07:15.651825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.997 qpair failed and we were unable to recover it. 
00:26:24.997 [2024-11-26 21:07:15.651943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.997 [2024-11-26 21:07:15.651970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.997 qpair failed and we were unable to recover it. 00:26:24.997 [2024-11-26 21:07:15.652124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.997 [2024-11-26 21:07:15.652151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.997 qpair failed and we were unable to recover it. 00:26:24.997 [2024-11-26 21:07:15.652259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.997 [2024-11-26 21:07:15.652286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.997 qpair failed and we were unable to recover it. 00:26:24.997 [2024-11-26 21:07:15.652399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.997 [2024-11-26 21:07:15.652427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.997 qpair failed and we were unable to recover it. 00:26:24.997 [2024-11-26 21:07:15.652523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.997 [2024-11-26 21:07:15.652548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.997 qpair failed and we were unable to recover it. 
00:26:24.997 [2024-11-26 21:07:15.652713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.997 [2024-11-26 21:07:15.652750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.997 qpair failed and we were unable to recover it. 00:26:24.997 [2024-11-26 21:07:15.652872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.997 [2024-11-26 21:07:15.652899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.997 qpair failed and we were unable to recover it. 00:26:24.997 [2024-11-26 21:07:15.653044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.997 [2024-11-26 21:07:15.653070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.997 qpair failed and we were unable to recover it. 00:26:24.997 [2024-11-26 21:07:15.653216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.997 [2024-11-26 21:07:15.653243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.997 qpair failed and we were unable to recover it. 00:26:24.997 [2024-11-26 21:07:15.653410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.997 [2024-11-26 21:07:15.653438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.997 qpair failed and we were unable to recover it. 
00:26:24.997 [2024-11-26 21:07:15.653555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.997 [2024-11-26 21:07:15.653581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.997 qpair failed and we were unable to recover it. 00:26:24.997 [2024-11-26 21:07:15.653719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.997 [2024-11-26 21:07:15.653747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.997 qpair failed and we were unable to recover it. 00:26:24.997 [2024-11-26 21:07:15.653864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.997 [2024-11-26 21:07:15.653893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.997 qpair failed and we were unable to recover it. 00:26:24.997 [2024-11-26 21:07:15.654002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.997 [2024-11-26 21:07:15.654030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.997 qpair failed and we were unable to recover it. 00:26:24.997 [2024-11-26 21:07:15.654158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.997 [2024-11-26 21:07:15.654185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.997 qpair failed and we were unable to recover it. 
00:26:24.997 [2024-11-26 21:07:15.654298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.997 [2024-11-26 21:07:15.654325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.997 qpair failed and we were unable to recover it. 00:26:24.997 [2024-11-26 21:07:15.654470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.997 [2024-11-26 21:07:15.654497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.997 qpair failed and we were unable to recover it. 00:26:24.997 [2024-11-26 21:07:15.654636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.997 [2024-11-26 21:07:15.654663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.997 qpair failed and we were unable to recover it. 00:26:24.997 [2024-11-26 21:07:15.654792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.997 [2024-11-26 21:07:15.654833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.997 qpair failed and we were unable to recover it. 00:26:24.997 [2024-11-26 21:07:15.654985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.997 [2024-11-26 21:07:15.655017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.997 qpair failed and we were unable to recover it. 
00:26:24.997 [2024-11-26 21:07:15.655155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.997 [2024-11-26 21:07:15.655185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.997 qpair failed and we were unable to recover it. 00:26:24.997 [2024-11-26 21:07:15.655327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.997 [2024-11-26 21:07:15.655358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.997 qpair failed and we were unable to recover it. 00:26:24.997 [2024-11-26 21:07:15.658700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.997 [2024-11-26 21:07:15.658736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.997 qpair failed and we were unable to recover it. 00:26:24.997 [2024-11-26 21:07:15.658890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.998 [2024-11-26 21:07:15.658923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.998 qpair failed and we were unable to recover it. 00:26:24.998 [2024-11-26 21:07:15.659100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.998 [2024-11-26 21:07:15.659132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.998 qpair failed and we were unable to recover it. 
00:26:24.998 [2024-11-26 21:07:15.659257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.998 [2024-11-26 21:07:15.659289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.998 qpair failed and we were unable to recover it. 00:26:24.998 [2024-11-26 21:07:15.659436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.998 [2024-11-26 21:07:15.659466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.998 qpair failed and we were unable to recover it. 00:26:24.998 [2024-11-26 21:07:15.659595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.998 [2024-11-26 21:07:15.659626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.998 qpair failed and we were unable to recover it. 00:26:24.998 [2024-11-26 21:07:15.659767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.998 [2024-11-26 21:07:15.659798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.998 qpair failed and we were unable to recover it. 00:26:24.998 [2024-11-26 21:07:15.659925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.998 [2024-11-26 21:07:15.659957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.998 qpair failed and we were unable to recover it. 
00:26:24.998 [2024-11-26 21:07:15.660098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.998 [2024-11-26 21:07:15.660129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.998 qpair failed and we were unable to recover it. 00:26:24.998 [2024-11-26 21:07:15.660237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.998 [2024-11-26 21:07:15.660264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.998 qpair failed and we were unable to recover it. 00:26:24.998 [2024-11-26 21:07:15.660495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.998 [2024-11-26 21:07:15.660526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.998 qpair failed and we were unable to recover it. 00:26:24.998 [2024-11-26 21:07:15.660701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.998 [2024-11-26 21:07:15.660741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.998 qpair failed and we were unable to recover it. 00:26:24.998 [2024-11-26 21:07:15.660852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.998 [2024-11-26 21:07:15.660880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.998 qpair failed and we were unable to recover it. 
00:26:24.998 [2024-11-26 21:07:15.661045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.998 [2024-11-26 21:07:15.661075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.998 qpair failed and we were unable to recover it. 00:26:24.998 [2024-11-26 21:07:15.661186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.998 [2024-11-26 21:07:15.661216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.998 qpair failed and we were unable to recover it. 00:26:24.998 [2024-11-26 21:07:15.661394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.998 [2024-11-26 21:07:15.661423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.998 qpair failed and we were unable to recover it. 00:26:24.998 [2024-11-26 21:07:15.661558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.998 [2024-11-26 21:07:15.661588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.998 qpair failed and we were unable to recover it. 00:26:24.998 [2024-11-26 21:07:15.661737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.998 [2024-11-26 21:07:15.661768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.998 qpair failed and we were unable to recover it. 
00:26:24.998 [2024-11-26 21:07:15.661884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.998 [2024-11-26 21:07:15.661913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.998 qpair failed and we were unable to recover it. 00:26:24.998 [2024-11-26 21:07:15.662024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.998 [2024-11-26 21:07:15.662054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.998 qpair failed and we were unable to recover it. 00:26:24.998 [2024-11-26 21:07:15.662203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.998 [2024-11-26 21:07:15.662233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.998 qpair failed and we were unable to recover it. 00:26:24.998 [2024-11-26 21:07:15.662376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.998 [2024-11-26 21:07:15.662406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.998 qpair failed and we were unable to recover it. 00:26:24.998 [2024-11-26 21:07:15.662518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.998 [2024-11-26 21:07:15.662548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:24.998 qpair failed and we were unable to recover it. 
00:26:24.998 [2024-11-26 21:07:15.662706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.998 [2024-11-26 21:07:15.662745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.998 qpair failed and we were unable to recover it. 00:26:24.998 [2024-11-26 21:07:15.662901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.998 [2024-11-26 21:07:15.662930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.998 qpair failed and we were unable to recover it. 00:26:24.998 [2024-11-26 21:07:15.663066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.998 [2024-11-26 21:07:15.663094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.998 qpair failed and we were unable to recover it. 00:26:24.998 [2024-11-26 21:07:15.663211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.998 [2024-11-26 21:07:15.663238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.998 qpair failed and we were unable to recover it. 00:26:24.998 [2024-11-26 21:07:15.663375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.998 [2024-11-26 21:07:15.663403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.998 qpair failed and we were unable to recover it. 
00:26:24.998 [2024-11-26 21:07:15.663542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.998 [2024-11-26 21:07:15.663570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.998 qpair failed and we were unable to recover it. 00:26:24.998 [2024-11-26 21:07:15.663718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.998 [2024-11-26 21:07:15.663746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.998 qpair failed and we were unable to recover it. 00:26:24.998 [2024-11-26 21:07:15.663860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.998 [2024-11-26 21:07:15.663885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.998 qpair failed and we were unable to recover it. 00:26:24.998 [2024-11-26 21:07:15.664000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.998 [2024-11-26 21:07:15.664026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.999 qpair failed and we were unable to recover it. 00:26:24.999 [2024-11-26 21:07:15.664162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.999 [2024-11-26 21:07:15.664189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.999 qpair failed and we were unable to recover it. 
00:26:24.999 [2024-11-26 21:07:15.664330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.999 [2024-11-26 21:07:15.664357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.999 qpair failed and we were unable to recover it. 00:26:24.999 [2024-11-26 21:07:15.664468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.999 [2024-11-26 21:07:15.664496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.999 qpair failed and we were unable to recover it. 00:26:24.999 [2024-11-26 21:07:15.664616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.999 [2024-11-26 21:07:15.664643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.999 qpair failed and we were unable to recover it. 00:26:24.999 [2024-11-26 21:07:15.664798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.999 [2024-11-26 21:07:15.664826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.999 qpair failed and we were unable to recover it. 00:26:24.999 [2024-11-26 21:07:15.664944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.999 [2024-11-26 21:07:15.664971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.999 qpair failed and we were unable to recover it. 
00:26:24.999 [2024-11-26 21:07:15.665104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.999 [2024-11-26 21:07:15.665131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.999 qpair failed and we were unable to recover it. 00:26:24.999 [2024-11-26 21:07:15.665236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.999 [2024-11-26 21:07:15.665261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.999 qpair failed and we were unable to recover it. 00:26:24.999 [2024-11-26 21:07:15.665367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.999 [2024-11-26 21:07:15.665394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.999 qpair failed and we were unable to recover it. 00:26:24.999 [2024-11-26 21:07:15.665501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.999 [2024-11-26 21:07:15.665527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.999 qpair failed and we were unable to recover it. 00:26:24.999 [2024-11-26 21:07:15.665633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.999 [2024-11-26 21:07:15.665661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.999 qpair failed and we were unable to recover it. 
00:26:24.999 [2024-11-26 21:07:15.665781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.999 [2024-11-26 21:07:15.665807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.999 qpair failed and we were unable to recover it. 00:26:24.999 [2024-11-26 21:07:15.665962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.999 [2024-11-26 21:07:15.665989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.999 qpair failed and we were unable to recover it. 00:26:24.999 [2024-11-26 21:07:15.666095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.999 [2024-11-26 21:07:15.666120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.999 qpair failed and we were unable to recover it. 00:26:24.999 [2024-11-26 21:07:15.666225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.999 [2024-11-26 21:07:15.666252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.999 qpair failed and we were unable to recover it. 00:26:24.999 [2024-11-26 21:07:15.666400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.999 [2024-11-26 21:07:15.666427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.999 qpair failed and we were unable to recover it. 
00:26:24.999 [2024-11-26 21:07:15.666536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.999 [2024-11-26 21:07:15.666563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.999 qpair failed and we were unable to recover it. 00:26:24.999 [2024-11-26 21:07:15.666672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.999 [2024-11-26 21:07:15.666706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.999 qpair failed and we were unable to recover it. 00:26:24.999 [2024-11-26 21:07:15.666848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.999 [2024-11-26 21:07:15.666880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.999 qpair failed and we were unable to recover it. 00:26:24.999 [2024-11-26 21:07:15.667013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.999 [2024-11-26 21:07:15.667040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.999 qpair failed and we were unable to recover it. 00:26:24.999 [2024-11-26 21:07:15.667175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.999 [2024-11-26 21:07:15.667202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.999 qpair failed and we were unable to recover it. 
00:26:24.999 [2024-11-26 21:07:15.667309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.999 [2024-11-26 21:07:15.667336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.999 qpair failed and we were unable to recover it. 00:26:24.999 [2024-11-26 21:07:15.667473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.999 [2024-11-26 21:07:15.667500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.999 qpair failed and we were unable to recover it. 00:26:24.999 [2024-11-26 21:07:15.667613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.999 [2024-11-26 21:07:15.667639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.999 qpair failed and we were unable to recover it. 00:26:24.999 [2024-11-26 21:07:15.667785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.999 [2024-11-26 21:07:15.667813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.999 qpair failed and we were unable to recover it. 00:26:24.999 [2024-11-26 21:07:15.667944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.999 [2024-11-26 21:07:15.667971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:24.999 qpair failed and we were unable to recover it. 
00:26:24.999 [2024-11-26 21:07:15.668106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.000 [2024-11-26 21:07:15.668133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:25.000 qpair failed and we were unable to recover it. 00:26:25.000 [2024-11-26 21:07:15.668272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.000 [2024-11-26 21:07:15.668299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:25.000 qpair failed and we were unable to recover it. 00:26:25.000 [2024-11-26 21:07:15.668448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.000 [2024-11-26 21:07:15.668475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:25.000 qpair failed and we were unable to recover it. 00:26:25.000 [2024-11-26 21:07:15.668613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.000 [2024-11-26 21:07:15.668640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:25.000 qpair failed and we were unable to recover it. 00:26:25.000 [2024-11-26 21:07:15.668785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.000 [2024-11-26 21:07:15.668813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:25.000 qpair failed and we were unable to recover it. 
00:26:25.000 [2024-11-26 21:07:15.668960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.000 [2024-11-26 21:07:15.668987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:25.000 qpair failed and we were unable to recover it. 00:26:25.000 [2024-11-26 21:07:15.669096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.000 [2024-11-26 21:07:15.669122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:25.000 qpair failed and we were unable to recover it. 00:26:25.000 [2024-11-26 21:07:15.669242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.000 [2024-11-26 21:07:15.669269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:25.000 qpair failed and we were unable to recover it. 00:26:25.000 [2024-11-26 21:07:15.669426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.000 [2024-11-26 21:07:15.669453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:25.000 qpair failed and we were unable to recover it. 00:26:25.000 [2024-11-26 21:07:15.669592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.000 [2024-11-26 21:07:15.669619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:25.000 qpair failed and we were unable to recover it. 
00:26:25.000 [2024-11-26 21:07:15.669754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.000 [2024-11-26 21:07:15.669782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:25.000 qpair failed and we were unable to recover it. 00:26:25.000 [2024-11-26 21:07:15.669882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.000 [2024-11-26 21:07:15.669907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:25.000 qpair failed and we were unable to recover it. 00:26:25.000 [2024-11-26 21:07:15.670018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.000 [2024-11-26 21:07:15.670045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:25.000 qpair failed and we were unable to recover it. 00:26:25.000 [2024-11-26 21:07:15.670142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.000 [2024-11-26 21:07:15.670167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:25.000 qpair failed and we were unable to recover it. 00:26:25.000 [2024-11-26 21:07:15.670296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.000 [2024-11-26 21:07:15.670323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:25.000 qpair failed and we were unable to recover it. 
00:26:25.000 [2024-11-26 21:07:15.670444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.000 [2024-11-26 21:07:15.670471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:25.000 qpair failed and we were unable to recover it. 00:26:25.000 [2024-11-26 21:07:15.670621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.000 [2024-11-26 21:07:15.670648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:25.000 qpair failed and we were unable to recover it. 00:26:25.000 [2024-11-26 21:07:15.670784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.000 [2024-11-26 21:07:15.670811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:25.000 qpair failed and we were unable to recover it. 00:26:25.000 [2024-11-26 21:07:15.670951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.000 [2024-11-26 21:07:15.670978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:25.000 qpair failed and we were unable to recover it. 00:26:25.000 [2024-11-26 21:07:15.671092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.000 [2024-11-26 21:07:15.671122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:25.000 qpair failed and we were unable to recover it. 
00:26:25.000 [2024-11-26 21:07:15.671231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.000 [2024-11-26 21:07:15.671259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:25.000 qpair failed and we were unable to recover it. 00:26:25.000 [2024-11-26 21:07:15.671367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.000 [2024-11-26 21:07:15.671392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:25.000 qpair failed and we were unable to recover it. 00:26:25.000 [2024-11-26 21:07:15.671502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.000 [2024-11-26 21:07:15.671499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:25.000 [2024-11-26 21:07:15.671529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:25.000 qpair failed and we were unable to recover it. 00:26:25.000 [2024-11-26 21:07:15.671652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.000 [2024-11-26 21:07:15.671678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:25.000 qpair failed and we were unable to recover it. 00:26:25.000 [2024-11-26 21:07:15.671819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.000 [2024-11-26 21:07:15.671844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:25.000 qpair failed and we were unable to recover it. 
00:26:25.000 [2024-11-26 21:07:15.671973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.000 [2024-11-26 21:07:15.672000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:25.000 qpair failed and we were unable to recover it. 00:26:25.000 [2024-11-26 21:07:15.672171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.000 [2024-11-26 21:07:15.672197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:25.000 qpair failed and we were unable to recover it. 00:26:25.000 [2024-11-26 21:07:15.672300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.000 [2024-11-26 21:07:15.672327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:25.000 qpair failed and we were unable to recover it. 00:26:25.000 [2024-11-26 21:07:15.672468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.000 [2024-11-26 21:07:15.672495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:25.000 qpair failed and we were unable to recover it. 00:26:25.000 [2024-11-26 21:07:15.672601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.000 [2024-11-26 21:07:15.672628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:25.000 qpair failed and we were unable to recover it. 
00:26:25.000 [2024-11-26 21:07:15.672742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.000 [2024-11-26 21:07:15.672767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:25.000 qpair failed and we were unable to recover it. 00:26:25.000 [2024-11-26 21:07:15.672896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.000 [2024-11-26 21:07:15.672924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:25.000 qpair failed and we were unable to recover it. 00:26:25.001 [2024-11-26 21:07:15.673058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.001 [2024-11-26 21:07:15.673084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:25.001 qpair failed and we were unable to recover it. 00:26:25.001 [2024-11-26 21:07:15.673200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.001 [2024-11-26 21:07:15.673228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:25.001 qpair failed and we were unable to recover it. 00:26:25.001 [2024-11-26 21:07:15.673369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.001 [2024-11-26 21:07:15.673396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:25.001 qpair failed and we were unable to recover it. 
00:26:25.001 [2024-11-26 21:07:15.673530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.001 [2024-11-26 21:07:15.673557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:25.001 qpair failed and we were unable to recover it. 00:26:25.001 [2024-11-26 21:07:15.673670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.001 [2024-11-26 21:07:15.673704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:25.001 qpair failed and we were unable to recover it. 00:26:25.001 [2024-11-26 21:07:15.673820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.001 [2024-11-26 21:07:15.673848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:25.001 qpair failed and we were unable to recover it. 00:26:25.001 [2024-11-26 21:07:15.673996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.001 [2024-11-26 21:07:15.674023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:25.001 qpair failed and we were unable to recover it. 00:26:25.001 [2024-11-26 21:07:15.674141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.001 [2024-11-26 21:07:15.674168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:25.001 qpair failed and we were unable to recover it. 
00:26:25.001 [2024-11-26 21:07:15.674310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.001 [2024-11-26 21:07:15.674338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:25.001 qpair failed and we were unable to recover it. 00:26:25.001 [2024-11-26 21:07:15.674465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.001 [2024-11-26 21:07:15.674491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:25.001 qpair failed and we were unable to recover it. 00:26:25.001 [2024-11-26 21:07:15.674637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.001 [2024-11-26 21:07:15.674664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:25.001 qpair failed and we were unable to recover it. 00:26:25.001 [2024-11-26 21:07:15.674802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.001 [2024-11-26 21:07:15.674840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:25.001 qpair failed and we were unable to recover it. 00:26:25.001 [2024-11-26 21:07:15.674962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.001 [2024-11-26 21:07:15.674994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:25.001 qpair failed and we were unable to recover it. 
00:26:25.001 [2024-11-26 21:07:15.675120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.001 [2024-11-26 21:07:15.675150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:25.001 qpair failed and we were unable to recover it. 00:26:25.001 [2024-11-26 21:07:15.675273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.001 [2024-11-26 21:07:15.675308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:25.001 qpair failed and we were unable to recover it. 00:26:25.001 [2024-11-26 21:07:15.675480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.001 [2024-11-26 21:07:15.675510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:25.001 qpair failed and we were unable to recover it. 00:26:25.001 [2024-11-26 21:07:15.675646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.001 [2024-11-26 21:07:15.675677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420 00:26:25.001 qpair failed and we were unable to recover it. 00:26:25.001 [2024-11-26 21:07:15.675801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.001 [2024-11-26 21:07:15.675829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420 00:26:25.001 qpair failed and we were unable to recover it. 
00:26:25.001 [2024-11-26 21:07:15.675939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.001 [2024-11-26 21:07:15.675967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x637fa0 with addr=10.0.0.2, port=4420
00:26:25.001 qpair failed and we were unable to recover it.
00:26:25.002 [2024-11-26 21:07:15.681675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.002 [2024-11-26 21:07:15.681719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feef8000b90 with addr=10.0.0.2, port=4420
00:26:25.002 qpair failed and we were unable to recover it.
00:26:25.003 A controller has encountered a failure and is being reset.
00:26:25.003 [2024-11-26 21:07:15.691824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.003 [2024-11-26 21:07:15.691866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feeec000b90 with addr=10.0.0.2, port=4420
00:26:25.003 qpair failed and we were unable to recover it.
00:26:25.003 [2024-11-26 21:07:15.693131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.003 [2024-11-26 21:07:15.693169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x645f30 with addr=10.0.0.2, port=4420
[2024-11-26 21:07:15.693189] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x645f30 is same with the state(6) to be set
00:26:25.003 [2024-11-26 21:07:15.693217] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x645f30 (9): Bad file descriptor
00:26:25.003 [2024-11-26 21:07:15.693238] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:26:25.003 [2024-11-26 21:07:15.693253] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:26:25.003 [2024-11-26 21:07:15.693271] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:26:25.003 Unable to reset the controller.
00:26:25.003 [2024-11-26 21:07:15.735756] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:25.003 [2024-11-26 21:07:15.735814] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:25.003 [2024-11-26 21:07:15.735829] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:26:25.003 [2024-11-26 21:07:15.735841] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:26:25.003 [2024-11-26 21:07:15.735852] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:26:25.003 [2024-11-26 21:07:15.737486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:26:25.003 [2024-11-26 21:07:15.737543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:26:25.003 [2024-11-26 21:07:15.737593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:26:25.003 [2024-11-26 21:07:15.737597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:26:25.003 21:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:25.003 21:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:26:25.003 21:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:26:25.003 21:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:26:25.003 21:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:25.004 21:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:26:25.004 21:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:26:25.004 21:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:25.004 21:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:25.262 Malloc0
00:26:25.262 21:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:25.262 21:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:26:25.262 21:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:25.262 21:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:25.262 [2024-11-26 21:07:15.922723] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:26:25.262 21:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:25.262 21:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:26:25.262 21:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:25.262 21:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:25.262 21:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:25.262 21:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:26:25.262 21:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:25.262 21:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:25.262 21:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:25.262 21:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:26:25.262 21:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:25.262 21:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:25.262 [2024-11-26 21:07:15.951047] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:25.262 21:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:25.262 21:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:26:25.262 21:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:25.262 21:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:25.262 21:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:25.262 21:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 4086254
00:26:26.194 Controller properly reset.
00:26:31.502 Initializing NVMe Controllers
00:26:31.502 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:26:31.502 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:26:31.502 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:26:31.502 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:26:31.502 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:26:31.502 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:26:31.502 Initialization complete. Launching workers.
00:26:31.502 Starting thread on core 1
00:26:31.502 Starting thread on core 2
00:26:31.502 Starting thread on core 3
00:26:31.502 Starting thread on core 0
00:26:31.502 21:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync
00:26:31.502
00:26:31.502 real 0m10.651s
00:26:31.502 user 0m33.386s
00:26:31.502 sys 0m7.515s
00:26:31.502 21:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:26:31.502 21:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:31.502 ************************************
00:26:31.502 END TEST nvmf_target_disconnect_tc2
00:26:31.502 ************************************
00:26:31.502 21:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']'
00:26:31.502 21:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:26:31.502 21:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini
00:26:31.502 21:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup
00:26:31.502 21:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync
00:26:31.502 21:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:26:31.502 21:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e
00:26:31.502 21:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20}
00:26:31.502 21:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:26:31.502 21:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:26:31.502 21:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e
00:26:31.502 21:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0
00:26:31.502 21:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 4086670 ']'
00:26:31.502 21:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 4086670
00:26:31.502 21:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 4086670 ']'
00:26:31.502 21:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 4086670
00:26:31.502 21:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname
00:26:31.502 21:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:31.502 21:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4086670
00:26:31.502 21:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4
00:26:31.502 21:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']'
00:26:31.503 21:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4086670'
00:26:31.503 killing process with pid 4086670
00:26:31.503 21:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 4086670
00:26:31.503 21:07:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 4086670
00:26:31.503 21:07:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:26:31.503 21:07:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:26:31.503 21:07:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:26:31.503 21:07:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr
00:26:31.503 21:07:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save
00:26:31.503 21:07:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:26:31.503 21:07:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore
00:26:31.503 21:07:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:26:31.503 21:07:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns
00:26:31.503 21:07:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:31.503 21:07:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:31.503 21:07:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:33.405 21:07:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:26:33.405
00:26:33.405 real 0m15.623s
00:26:33.405 user 0m58.728s
sys 0m10.121s 00:26:33.405 21:07:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:33.405 21:07:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:33.405 ************************************ 00:26:33.405 END TEST nvmf_target_disconnect 00:26:33.405 ************************************ 00:26:33.405 21:07:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:26:33.405 00:26:33.405 real 5m8.624s 00:26:33.405 user 11m11.836s 00:26:33.405 sys 1m14.805s 00:26:33.405 21:07:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:33.405 21:07:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.405 ************************************ 00:26:33.405 END TEST nvmf_host 00:26:33.405 ************************************ 00:26:33.405 21:07:24 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:26:33.405 21:07:24 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:26:33.405 21:07:24 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:26:33.405 21:07:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:33.405 21:07:24 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:33.405 21:07:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:33.405 ************************************ 00:26:33.405 START TEST nvmf_target_core_interrupt_mode 00:26:33.405 ************************************ 00:26:33.405 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:26:33.405 * Looking for test storage... 
00:26:33.405 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:26:33.405 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:33.405 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:26:33.405 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:33.666 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:33.666 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:33.666 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:33.666 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:33.666 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:26:33.666 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:26:33.666 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:26:33.666 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:26:33.666 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:26:33.666 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:26:33.666 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:26:33.666 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:33.666 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:26:33.666 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:26:33.666 21:07:24 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:33.666 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:33.666 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:26:33.666 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:26:33.666 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:33.666 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:26:33.666 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:26:33.666 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:26:33.666 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:26:33.666 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:33.666 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:26:33.666 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:26:33.666 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:33.666 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:33.666 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:26:33.666 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:33.666 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:33.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:33.666 --rc 
genhtml_branch_coverage=1 00:26:33.666 --rc genhtml_function_coverage=1 00:26:33.666 --rc genhtml_legend=1 00:26:33.666 --rc geninfo_all_blocks=1 00:26:33.666 --rc geninfo_unexecuted_blocks=1 00:26:33.666 00:26:33.666 ' 00:26:33.666 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:33.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:33.666 --rc genhtml_branch_coverage=1 00:26:33.666 --rc genhtml_function_coverage=1 00:26:33.666 --rc genhtml_legend=1 00:26:33.666 --rc geninfo_all_blocks=1 00:26:33.666 --rc geninfo_unexecuted_blocks=1 00:26:33.666 00:26:33.666 ' 00:26:33.666 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:33.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:33.666 --rc genhtml_branch_coverage=1 00:26:33.666 --rc genhtml_function_coverage=1 00:26:33.666 --rc genhtml_legend=1 00:26:33.666 --rc geninfo_all_blocks=1 00:26:33.666 --rc geninfo_unexecuted_blocks=1 00:26:33.666 00:26:33.666 ' 00:26:33.666 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:33.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:33.666 --rc genhtml_branch_coverage=1 00:26:33.666 --rc genhtml_function_coverage=1 00:26:33.666 --rc genhtml_legend=1 00:26:33.666 --rc geninfo_all_blocks=1 00:26:33.666 --rc geninfo_unexecuted_blocks=1 00:26:33.666 00:26:33.666 ' 00:26:33.666 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:26:33.666 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:26:33.666 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:33.666 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:26:33.666 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:33.666 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:33.666 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:33.666 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:33.666 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:33.666 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:33.666 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:33.666 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:33.666 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:33.666 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:33.666 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:33.666 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:33.666 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:33.666 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:33.666 
21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:33.666 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:33.666 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:33.666 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:26:33.666 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:33.666 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:33.666 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:33.666 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.666 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.666 21:07:24 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.666 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:33.667 
21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:26:33.667 ************************************ 00:26:33.667 START TEST nvmf_abort 00:26:33.667 ************************************ 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:26:33.667 * Looking for test storage... 
00:26:33.667 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:26:33.667 21:07:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:33.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:33.667 --rc genhtml_branch_coverage=1 00:26:33.667 --rc genhtml_function_coverage=1 00:26:33.667 --rc genhtml_legend=1 00:26:33.667 --rc geninfo_all_blocks=1 00:26:33.667 --rc geninfo_unexecuted_blocks=1 00:26:33.667 00:26:33.667 ' 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:33.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:33.667 --rc genhtml_branch_coverage=1 00:26:33.667 --rc genhtml_function_coverage=1 00:26:33.667 --rc genhtml_legend=1 00:26:33.667 --rc geninfo_all_blocks=1 00:26:33.667 --rc geninfo_unexecuted_blocks=1 00:26:33.667 00:26:33.667 ' 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:33.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:33.667 --rc genhtml_branch_coverage=1 00:26:33.667 --rc genhtml_function_coverage=1 00:26:33.667 --rc genhtml_legend=1 00:26:33.667 --rc geninfo_all_blocks=1 00:26:33.667 --rc geninfo_unexecuted_blocks=1 00:26:33.667 00:26:33.667 ' 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:33.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:33.667 --rc genhtml_branch_coverage=1 00:26:33.667 --rc genhtml_function_coverage=1 00:26:33.667 --rc genhtml_legend=1 00:26:33.667 --rc geninfo_all_blocks=1 00:26:33.667 --rc geninfo_unexecuted_blocks=1 00:26:33.667 00:26:33.667 ' 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:33.667 21:07:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:33.667 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.668 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.668 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.668 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:26:33.668 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.668 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:26:33.668 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:33.668 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:33.668 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:33.668 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:33.668 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:33.668 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:33.668 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:33.668 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:33.668 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:33.668 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:33.668 21:07:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:33.668 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:26:33.668 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:26:33.668 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:33.668 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:33.668 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:33.668 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:33.668 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:33.668 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:33.668 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:33.668 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:33.668 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:33.668 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:33.668 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:26:33.668 21:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:36.204 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:26:36.204 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:26:36.204 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:36.204 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:36.204 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:36.204 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:36.204 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:36.204 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:26:36.204 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:36.204 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:26:36.204 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:26:36.204 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:26:36.204 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:26:36.204 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:26:36.204 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:26:36.204 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:36.204 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:36.204 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:36.204 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:36.204 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:36.204 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:36.204 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:36.204 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:36.204 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:36.204 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:36.204 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:36.204 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:36.204 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:36.204 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:36.204 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:36.204 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:36.204 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:36.204 21:07:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:36.204 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:36.204 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:36.204 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:36.204 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:36.204 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:36.204 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:36.204 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:36.204 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:36.204 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:36.204 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:36.204 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:36.204 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:36.204 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:36.204 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:36.204 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:36.204 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:36.204 
21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:36.204 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:36.204 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:36.204 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:36.204 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:36.204 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:36.204 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:36.204 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:36.204 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:36.204 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:36.204 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:36.204 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:36.204 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:36.204 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:36.204 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:36.204 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:26:36.204 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:36.204 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:36.204 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:36.204 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:36.204 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:36.204 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:36.204 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:36.204 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:36.204 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:26:36.204 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:36.205 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:36.205 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:36.205 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:36.205 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:36.205 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:36.205 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:36.205 21:07:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:36.205 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:36.205 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:36.205 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:36.205 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:36.205 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:36.205 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:36.205 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:36.205 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:36.205 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:36.205 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:36.205 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:36.205 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:36.205 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:36.205 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:26:36.205 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:36.205 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:36.205 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:36.205 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:36.205 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:36.205 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:26:36.205 00:26:36.205 --- 10.0.0.2 ping statistics --- 00:26:36.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:36.205 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:26:36.205 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:36.205 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:36.205 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:26:36.205 00:26:36.205 --- 10.0.0.1 ping statistics --- 00:26:36.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:36.205 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:26:36.205 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:36.205 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:26:36.205 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:36.205 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:36.205 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:36.205 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:36.205 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:36.205 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:36.205 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:36.205 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:26:36.205 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:36.205 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:36.205 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:36.205 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=4089476 00:26:36.205 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:26:36.205 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 4089476 00:26:36.205 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 4089476 ']' 00:26:36.205 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:36.205 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:36.205 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:36.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:36.205 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:36.205 21:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:36.205 [2024-11-26 21:07:26.839055] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:26:36.205 [2024-11-26 21:07:26.840084] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:26:36.205 [2024-11-26 21:07:26.840148] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:36.205 [2024-11-26 21:07:26.911619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:36.205 [2024-11-26 21:07:26.969927] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:36.205 [2024-11-26 21:07:26.970015] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:36.205 [2024-11-26 21:07:26.970029] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:36.205 [2024-11-26 21:07:26.970041] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:36.205 [2024-11-26 21:07:26.970050] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:36.205 [2024-11-26 21:07:26.971644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:36.205 [2024-11-26 21:07:26.971672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:36.205 [2024-11-26 21:07:26.971676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:36.205 [2024-11-26 21:07:27.067176] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:26:36.205 [2024-11-26 21:07:27.067370] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:26:36.205 [2024-11-26 21:07:27.067383] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:26:36.205 [2024-11-26 21:07:27.067644] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:26:36.205 21:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:36.205 21:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:26:36.205 21:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:36.205 21:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:36.205 21:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:36.205 21:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:36.205 21:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:26:36.205 21:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.205 21:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:36.205 [2024-11-26 21:07:27.116477] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:36.205 21:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.205 21:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:26:36.205 21:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.205 21:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:26:36.490 Malloc0 00:26:36.490 21:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.490 21:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:26:36.490 21:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.490 21:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:36.490 Delay0 00:26:36.490 21:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.490 21:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:26:36.490 21:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.490 21:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:36.490 21:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.490 21:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:26:36.490 21:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.490 21:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:36.490 21:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.490 21:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:26:36.490 21:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.490 21:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:36.491 [2024-11-26 21:07:27.192620] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:36.491 21:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.491 21:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:36.491 21:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.491 21:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:36.491 21:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.491 21:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:26:36.491 [2024-11-26 21:07:27.303826] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:26:39.011 Initializing NVMe Controllers 00:26:39.011 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:26:39.011 controller IO queue size 128 less than required 00:26:39.011 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:26:39.011 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:26:39.011 Initialization complete. Launching workers. 
00:26:39.011 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28940 00:26:39.011 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28997, failed to submit 66 00:26:39.011 success 28940, unsuccessful 57, failed 0 00:26:39.011 21:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:39.011 21:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.011 21:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:39.011 21:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.011 21:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:26:39.011 21:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:26:39.011 21:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:39.011 21:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:26:39.011 21:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:39.011 21:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:26:39.011 21:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:39.011 21:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:39.011 rmmod nvme_tcp 00:26:39.011 rmmod nvme_fabrics 00:26:39.011 rmmod nvme_keyring 00:26:39.011 21:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:39.011 21:07:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:26:39.011 21:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:26:39.011 21:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 4089476 ']' 00:26:39.011 21:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 4089476 00:26:39.011 21:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 4089476 ']' 00:26:39.011 21:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 4089476 00:26:39.011 21:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:26:39.011 21:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:39.011 21:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4089476 00:26:39.011 21:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:39.011 21:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:39.011 21:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4089476' 00:26:39.011 killing process with pid 4089476 00:26:39.011 21:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 4089476 00:26:39.011 21:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 4089476 00:26:39.011 21:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:39.011 21:07:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:39.011 21:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:39.011 21:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:26:39.011 21:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:26:39.011 21:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:39.011 21:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:26:39.011 21:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:39.011 21:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:39.011 21:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:39.011 21:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:39.011 21:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:40.910 21:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:40.910 00:26:40.910 real 0m7.381s 00:26:40.910 user 0m9.297s 00:26:40.910 sys 0m2.911s 00:26:40.910 21:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:40.910 21:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:40.910 ************************************ 00:26:40.910 END TEST nvmf_abort 00:26:40.910 ************************************ 00:26:40.910 21:07:31 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:26:40.910 21:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:40.910 21:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:40.910 21:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:26:41.169 ************************************ 00:26:41.169 START TEST nvmf_ns_hotplug_stress 00:26:41.169 ************************************ 00:26:41.169 21:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:26:41.169 * Looking for test storage... 
00:26:41.169 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:41.169 21:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:41.169 21:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:26:41.169 21:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:41.169 21:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:41.169 21:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:41.169 21:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:41.169 21:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:41.169 21:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:26:41.169 21:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:26:41.169 21:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:26:41.169 21:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:26:41.169 21:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:26:41.169 21:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:26:41.169 21:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:26:41.169 21:07:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:41.169 21:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:26:41.169 21:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:26:41.169 21:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:41.169 21:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:41.169 21:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:26:41.169 21:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:26:41.169 21:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:41.169 21:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:26:41.169 21:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:26:41.169 21:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:26:41.169 21:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:26:41.169 21:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:41.169 21:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:26:41.169 21:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:26:41.169 21:07:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:41.169 21:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:41.169 21:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:26:41.169 21:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:41.169 21:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:41.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:41.169 --rc genhtml_branch_coverage=1 00:26:41.169 --rc genhtml_function_coverage=1 00:26:41.169 --rc genhtml_legend=1 00:26:41.169 --rc geninfo_all_blocks=1 00:26:41.169 --rc geninfo_unexecuted_blocks=1 00:26:41.169 00:26:41.169 ' 00:26:41.169 21:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:41.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:41.169 --rc genhtml_branch_coverage=1 00:26:41.169 --rc genhtml_function_coverage=1 00:26:41.169 --rc genhtml_legend=1 00:26:41.169 --rc geninfo_all_blocks=1 00:26:41.169 --rc geninfo_unexecuted_blocks=1 00:26:41.169 00:26:41.169 ' 00:26:41.169 21:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:41.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:41.169 --rc genhtml_branch_coverage=1 00:26:41.169 --rc genhtml_function_coverage=1 00:26:41.169 --rc genhtml_legend=1 00:26:41.169 --rc geninfo_all_blocks=1 00:26:41.169 --rc geninfo_unexecuted_blocks=1 00:26:41.169 00:26:41.169 ' 00:26:41.170 21:07:31 
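The ver1/ver2 juggling at scripts/common.sh@333-368 above is the script's numeric version comparison, here deciding whether lcov 1.15 is older than 2 so the coverage flags get enabled. A minimal Python sketch of the same field-by-field rule, assuming the split-on-`.-` and default-to-0 behavior shown in the trace (this is a sketch, not the SPDK source):

```python
import re

# Hedged sketch of scripts/common.sh cmp_versions: versions are split on
# '.' and '-' (IFS=.-), missing or non-numeric fields count as 0, and
# fields are compared left to right until one side wins.
def cmp_versions(ver1: str, op: str, ver2: str) -> bool:
    v1 = re.split(r"[.-]", ver1)
    v2 = re.split(r"[.-]", ver2)
    for i in range(max(len(v1), len(v2))):
        d1 = int(v1[i]) if i < len(v1) and v1[i].isdigit() else 0
        d2 = int(v2[i]) if i < len(v2) and v2[i].isdigit() else 0
        if d1 < d2:
            return op in ("<", "<=", "!=")
        if d1 > d2:
            return op in (">", ">=", "!=")
    # all fields equal
    return op in ("==", "<=", ">=")
```

As in the trace, `lt 1.15 2` compares 1 against 2 in the first field and succeeds, which is why the `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` options are exported next.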
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:41.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:41.170 --rc genhtml_branch_coverage=1 00:26:41.170 --rc genhtml_function_coverage=1 00:26:41.170 --rc genhtml_legend=1 00:26:41.170 --rc geninfo_all_blocks=1 00:26:41.170 --rc geninfo_unexecuted_blocks=1 00:26:41.170 00:26:41.170 ' 00:26:41.170 21:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:41.170 21:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:26:41.170 21:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:41.170 21:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:41.170 21:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:41.170 21:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:41.170 21:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:41.170 21:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:41.170 21:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:41.170 21:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:41.170 21:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:41.170 21:07:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:41.170 21:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:41.170 21:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:41.170 21:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:41.170 21:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:41.170 21:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:41.170 21:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:41.170 21:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:41.170 21:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:26:41.170 21:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:41.170 21:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:41.170 21:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:41.170 21:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.170 21:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.170 21:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.170 
21:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:26:41.170 21:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.170 21:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:26:41.170 21:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:41.170 21:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:41.170 21:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:41.170 21:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:41.170 21:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:41.170 21:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:41.170 21:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:41.170 21:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:41.170 21:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:41.170 21:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:41.170 21:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:41.170 21:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:26:41.170 21:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:41.170 21:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:41.170 21:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:41.170 21:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:41.170 21:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:41.170 21:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:41.170 21:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:41.170 21:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:41.170 21:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:41.170 21:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:26:41.170 21:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:26:41.170 21:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:26:43.071 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:43.071 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:26:43.071 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:43.071 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:43.071 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:43.071 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:43.071 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:43.071 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:26:43.071 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:43.071 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:26:43.071 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:26:43.071 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:26:43.071 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:26:43.071 
21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:26:43.071 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:26:43.071 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:43.071 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:43.071 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:43.071 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:43.071 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:43.071 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:43.071 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:43.071 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:43.071 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:43.071 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:43.071 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:43.071 21:07:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:43.071 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:43.072 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:43.072 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:43.072 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:43.072 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:43.072 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:43.072 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:43.072 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:43.072 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:43.072 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:43.072 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:43.072 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:43.072 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:43.072 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:43.072 21:07:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:43.072 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:43.072 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:43.072 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:43.072 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:43.072 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:43.072 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:43.072 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:43.072 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:43.072 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:43.072 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:43.072 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:43.072 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:43.072 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:43.072 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:43.072 
21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:43.072 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:43.072 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:43.072 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:43.072 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:43.072 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:43.072 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:43.072 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:43.072 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:43.072 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:43.072 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:43.072 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:43.072 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:43.072 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:43.072 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:43.072 
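The pci_bus_cache lookups at nvmf/common.sh@325-344 above bucket supported NICs into e810/x722/mlx classes by vendor:device ID. A hedged Python sketch of that table (IDs copied from the trace; the helper name `classify` is mine, not SPDK's):

```python
# Hedged sketch of nvmf/common.sh's NIC bucketing, with the vendor:device
# pairs visible in the trace above (intel=0x8086, mellanox=0x15b3).
DEVICE_CLASSES = {
    ("0x8086", "0x1592"): "e810",
    ("0x8086", "0x159b"): "e810",
    ("0x8086", "0x37d2"): "x722",
    ("0x15b3", "0xa2dc"): "mlx",
    ("0x15b3", "0x1021"): "mlx",
    ("0x15b3", "0xa2d6"): "mlx",
    ("0x15b3", "0x101d"): "mlx",
    ("0x15b3", "0x101b"): "mlx",
    ("0x15b3", "0x1017"): "mlx",
    ("0x15b3", "0x1019"): "mlx",
    ("0x15b3", "0x1015"): "mlx",
    ("0x15b3", "0x1013"): "mlx",
}

def classify(vendor: str, device: str) -> str:
    """Map a PCI vendor:device pair to its NIC class, 'unknown' otherwise."""
    return DEVICE_CLASSES.get((vendor, device), "unknown")
```

In this run both ports of the NIC (0000:0a:00.0 and 0000:0a:00.1, 0x8086:0x159b) land in the e810 bucket, which is why the trace takes the `[[ e810 == e810 ]]` branches and enumerates cvl_0_0 and cvl_0_1.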
21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:43.072 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:43.072 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:26:43.072 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:43.072 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:43.072 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:43.072 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:43.072 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:43.072 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:43.072 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:43.072 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:43.072 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:43.072 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:43.072 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:43.072 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:43.072 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:43.072 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:43.072 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:43.072 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:43.072 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:43.072 21:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:43.331 21:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:43.331 21:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:43.331 21:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:43.331 21:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:43.331 21:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:43.331 21:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:43.331 21:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:43.331 21:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:43.331 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:43.331 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:26:43.331 00:26:43.331 --- 10.0.0.2 ping statistics --- 00:26:43.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:43.331 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:26:43.331 21:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:43.331 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:43.331 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:26:43.331 00:26:43.331 --- 10.0.0.1 ping statistics --- 00:26:43.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:43.331 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:26:43.331 21:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:43.331 21:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:26:43.331 21:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:43.331 21:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:43.331 21:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:43.331 21:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:43.331 21:07:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:26:43.331 21:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:26:43.331 21:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:26:43.331 21:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE
00:26:43.331 21:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:26:43.331 21:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable
00:26:43.331 21:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:26:43.331 21:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=4091711
00:26:43.331 21:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE
00:26:43.331 21:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 4091711
00:26:43.331 21:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 4091711 ']'
00:26:43.331 21:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:43.331 21:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:43.331 21:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress --
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:43.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:43.331 21:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable
00:26:43.331 21:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:26:43.331 [2024-11-26 21:07:34.153681] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:26:43.331 [2024-11-26 21:07:34.154797] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization...
00:26:43.331 [2024-11-26 21:07:34.154859] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:43.331 [2024-11-26 21:07:34.239574] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:26:43.589 [2024-11-26 21:07:34.310361] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:43.589 [2024-11-26 21:07:34.310424] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:43.589 [2024-11-26 21:07:34.310458] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:26:43.589 [2024-11-26 21:07:34.310479] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:26:43.589 [2024-11-26 21:07:34.310509] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:26:43.589 [2024-11-26 21:07:34.312411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:26:43.589 [2024-11-26 21:07:34.312480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:26:43.589 [2024-11-26 21:07:34.312474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:26:43.589 [2024-11-26 21:07:34.410949] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:26:43.589 [2024-11-26 21:07:34.411175] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:26:43.589 [2024-11-26 21:07:34.411211] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:26:43.590 [2024-11-26 21:07:34.411540] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
00:26:43.590 21:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:43.590 21:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0
00:26:43.590 21:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:26:43.590 21:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable
00:26:43.590 21:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:26:43.590 21:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:26:43.590 21:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000
00:26:43.590 21:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:26:43.849 [2024-11-26 21:07:34.709236] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:26:43.849 21:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:26:44.414 21:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:26:44.414 [2024-11-26 21:07:35.345640] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:44.673 21:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:26:44.931 21:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0
00:26:45.189 Malloc0
00:26:45.189 21:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:26:45.447 Delay0
00:26:45.447 21:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:26:45.705 21:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512
00:26:46.270 NULL1
00:26:46.270 21:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:26:46.527 21:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=4092113
00:26:46.527 21:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000
00:26:46.528 21:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4092113
00:26:46.528 21:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:26:47.460 Read completed with error (sct=0, sc=11)
00:26:47.718 21:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:26:47.718 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:26:47.718 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:26:47.718 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:26:47.718 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:26:47.718 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:26:47.718 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:26:47.977 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:26:47.977 21:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001
00:26:47.977 21:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001
00:26:48.234 true
00:26:48.234 21:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4092113
00:26:48.234 21:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:26:49.165 21:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:26:49.166 21:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002
00:26:49.166 21:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002
00:26:49.423 true
00:26:49.423 21:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4092113
00:26:49.423 21:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:26:49.989 21:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:26:50.247 21:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003
00:26:50.247 21:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003
00:26:50.505 true
00:26:50.505 21:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4092113
00:26:50.505 21:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:26:50.763 21:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:26:51.020 21:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004
00:26:51.020 21:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004
00:26:51.278 true
00:26:51.278 21:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4092113
00:26:51.278 21:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress --
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:26:51.536 21:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:26:51.794 21:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005
00:26:51.794 21:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005
00:26:52.051 true
00:26:52.051 21:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4092113
00:26:52.051 21:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:26:52.983 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:26:52.983 21:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:26:53.240 21:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006
00:26:53.240 21:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006
00:26:53.497 true
00:26:53.497 21:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill
-0 4092113
00:26:53.497 21:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:26:53.755 21:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:26:54.013 21:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007
00:26:54.013 21:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007
00:26:54.270 true
00:26:54.270 21:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4092113
00:26:54.270 21:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:26:54.528 21:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:26:54.786 21:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008
00:26:54.786 21:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008
00:26:55.044 true
00:26:55.044 21:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress --
target/ns_hotplug_stress.sh@44 -- # kill -0 4092113
00:26:55.044 21:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:26:55.977 21:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:26:55.977 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:26:56.234 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:26:56.234 21:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009
00:26:56.234 21:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009
00:26:56.491 true
00:26:56.492 21:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4092113
00:26:56.492 21:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:26:56.749 21:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:26:57.315 21:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010
00:26:57.315 21:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010
00:26:57.315 true
00:26:57.315 21:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4092113
00:26:57.315 21:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:26:58.247 21:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:26:58.247 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:26:58.505 21:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011
00:26:58.505 21:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011
00:26:58.762 true
00:26:58.762 21:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4092113
00:26:58.762 21:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:26:59.020 21:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:26:59.278 21:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012
00:26:59.278 21:07:50
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012
00:26:59.536 true
00:26:59.536 21:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4092113
00:26:59.536 21:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:26:59.794 21:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:27:00.051 21:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013
00:27:00.051 21:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013
00:27:00.309 true
00:27:00.567 21:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4092113
00:27:00.567 21:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:27:01.499 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:27:01.499 21:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:27:01.499 21:07:52
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014
00:27:01.499 21:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014
00:27:01.756 true
00:27:02.014 21:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4092113
00:27:02.014 21:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:27:02.272 21:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:27:02.529 21:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015
00:27:02.529 21:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015
00:27:02.788 true
00:27:02.788 21:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4092113
00:27:02.788 21:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:27:03.722 21:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:27:03.722 21:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016
00:27:03.722 21:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016
00:27:03.980 true
00:27:03.980 21:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4092113
00:27:03.980 21:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:27:04.238 21:07:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:27:04.494 21:07:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017
00:27:04.494 21:07:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017
00:27:04.751 true
00:27:04.751 21:07:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4092113
00:27:04.751 21:07:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:27:05.009 21:07:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns
nqn.2016-06.io.spdk:cnode1 Delay0
00:27:05.267 21:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018
00:27:05.267 21:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018
00:27:05.525 true
00:27:05.525 21:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4092113
00:27:05.525 21:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:27:06.973 21:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:27:06.973 21:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019
00:27:06.973 21:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019
00:27:07.231 true
00:27:07.231 21:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4092113
00:27:07.231 21:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:27:07.489 21:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:27:07.747 21:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020
00:27:07.747 21:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020
00:27:08.005 true
00:27:08.005 21:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4092113
00:27:08.005 21:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:27:08.263 21:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:27:08.520 21:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021
00:27:08.520 21:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021
00:27:08.778 true
00:27:08.778 21:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4092113
00:27:08.778 21:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:27:09.711 21:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:27:09.969 21:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022
00:27:09.969 21:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022
00:27:10.226 true
00:27:10.226 21:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4092113
00:27:10.226 21:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:27:10.483 21:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:27:10.741 21:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023
00:27:10.741 21:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023
00:27:10.999 true
00:27:10.999 21:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4092113
00:27:10.999 21:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:27:11.257 21:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress --
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:11.515 21:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:27:11.515 21:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:27:11.773 true 00:27:11.773 21:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4092113 00:27:11.773 21:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:13.146 21:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:13.146 21:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:27:13.146 21:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:27:13.404 true 00:27:13.404 21:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4092113 00:27:13.404 21:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:13.662 21:08:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:13.920 21:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:27:13.920 21:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:27:14.178 true 00:27:14.178 21:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4092113 00:27:14.178 21:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:14.435 21:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:14.692 21:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:27:14.692 21:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:27:14.949 true 00:27:14.949 21:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4092113 00:27:14.949 21:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:27:15.880 21:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:27:15.880 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:27:16.446 21:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:27:16.446 21:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:27:16.446 true
00:27:16.446 21:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4092113
00:27:16.446 21:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:27:16.704 Initializing NVMe Controllers
00:27:16.704 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:27:16.704 Controller IO queue size 128, less than required.
00:27:16.704 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:27:16.704 Controller IO queue size 128, less than required.
00:27:16.704 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:27:16.704 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:27:16.704 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:27:16.704 Initialization complete. Launching workers.
00:27:16.704 ========================================================
00:27:16.704 Latency(us)
00:27:16.704 Device Information : IOPS MiB/s Average min max
00:27:16.704 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 486.41 0.24 106603.07 3210.77 1029219.54
00:27:16.704 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 8451.73 4.13 15146.17 2653.02 362032.19
00:27:16.704 ========================================================
00:27:16.704 Total : 8938.15 4.36 20123.26 2653.02 1029219.54
00:27:16.704
00:27:16.704 21:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:27:16.962 21:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:27:16.962 21:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:27:17.526 true
00:27:17.526 21:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4092113
00:27:17.526 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (4092113) - No such process
00:27:17.526 21:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 4092113
00:27:17.526 21:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:27:17.526 21:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:17.783 21:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:27:17.783 21:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:27:17.783 21:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:27:17.783 21:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:17.783 21:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:27:18.041 null0 00:27:18.041 21:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:18.041 21:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:18.041 21:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:27:18.299 null1 00:27:18.557 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:18.557 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:18.557 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:27:18.813 null2 00:27:18.814 21:08:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:18.814 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:18.814 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:27:19.071 null3 00:27:19.071 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:19.071 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:19.071 21:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:27:19.330 null4 00:27:19.330 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:19.330 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:19.330 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:27:19.588 null5 00:27:19.588 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:19.588 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:19.588 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:27:19.846 null6 00:27:19.846 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:19.846 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:19.846 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:27:20.105 null7 00:27:20.105 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:20.105 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:20.105 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:27:20.105 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:20.105 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:27:20.105 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:27:20.105 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:20.105 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:27:20.105 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:20.105 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:20.105 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:20.105 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:20.105 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:27:20.105 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:20.105 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:20.105 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:27:20.105 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:27:20.105 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:27:20.105 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:27:20.105 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:20.106 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:20.106 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:27:20.106 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:20.106 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:20.106 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:20.106 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:20.106 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:20.106 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:20.106 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:27:20.106 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:27:20.106 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:20.106 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:20.106 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:27:20.106 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:20.106 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:20.106 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:20.106 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:27:20.106 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:27:20.106 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:20.106 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:27:20.106 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:20.106 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:20.106 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:20.106 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:20.106 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:27:20.106 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:27:20.106 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:20.106 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:20.106 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:27:20.106 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:20.106 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:20.106 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:20.106 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:27:20.106 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:27:20.106 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:20.106 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:27:20.106 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:20.106 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:20.106 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:20.106 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:20.106 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:27:20.106 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:20.106 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:27:20.106 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:20.106 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:27:20.106 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 4096136 4096137 4096139 4096140 4096143 4096145 4096147 4096149 00:27:20.106 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:20.106 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:20.106 21:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:20.365 21:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:20.365 21:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:20.365 21:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 6 00:27:20.365 21:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:20.365 21:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:20.365 21:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:20.365 21:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:20.365 21:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:20.624 21:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:20.624 21:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:20.624 21:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:20.624 21:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:20.624 21:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:20.624 21:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:20.624 21:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:20.624 21:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:20.624 21:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:20.624 21:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:20.624 21:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:20.624 21:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:20.624 21:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:20.624 21:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:20.624 21:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:20.624 21:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:27:20.624 21:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:20.624 21:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:20.624 21:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:20.624 21:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:20.624 21:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:20.624 21:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:20.624 21:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:20.624 21:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:20.883 21:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:21.141 21:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:21.141 21:08:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:21.141 21:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:21.141 21:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:21.141 21:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:21.141 21:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:21.141 21:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:21.400 21:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:21.400 21:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:21.400 21:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:21.400 21:08:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:21.400 21:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:21.400 21:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:21.400 21:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:21.400 21:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:21.400 21:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:21.400 21:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:21.400 21:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:21.400 21:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:21.400 21:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:21.400 21:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:21.400 21:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:21.400 21:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:21.400 21:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:21.400 21:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:21.400 21:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:21.400 21:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:21.400 21:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:21.400 21:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:21.400 21:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:21.400 21:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:21.658 21:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:21.658 21:08:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:21.658 21:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:21.658 21:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:21.658 21:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:21.658 21:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:21.658 21:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:21.658 21:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:21.916 21:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:21.916 21:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:21.916 21:08:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:21.916 21:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:21.916 21:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:21.916 21:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:21.916 21:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:21.916 21:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:21.916 21:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:21.916 21:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:21.916 21:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:21.916 21:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:21.916 21:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:21.916 21:08:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:21.916 21:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:21.916 21:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:21.916 21:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:21.916 21:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:21.916 21:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:21.916 21:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:21.916 21:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:21.916 21:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:21.916 21:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:21.916 21:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:22.175 21:08:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:22.175 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:22.175 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:22.175 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:22.175 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:22.175 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:22.175 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:22.175 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:22.434 21:08:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:22.434 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:22.434 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:22.434 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:22.434 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:22.434 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:22.434 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:22.434 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:22.434 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:22.434 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:22.434 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:22.434 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:22.434 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:22.434 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:22.434 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:22.434 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:22.434 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:22.434 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:22.434 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:22.434 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:22.434 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:22.434 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:22.434 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:22.434 21:08:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:23.002 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:23.002 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:23.002 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:23.002 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:23.002 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:23.002 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:23.002 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:23.002 21:08:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:23.002 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:23.002 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:23.002 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:23.002 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:23.002 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:23.002 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:23.260 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:23.260 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:23.260 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:23.260 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:23.260 21:08:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:23.260 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:23.260 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:23.260 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:23.260 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:23.260 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:23.260 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:23.260 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:23.261 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:23.261 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:23.261 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:23.261 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:23.261 21:08:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:23.261 21:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:23.519 21:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:23.519 21:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:23.519 21:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:23.519 21:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:23.520 21:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:23.520 21:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:23.520 21:08:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:23.520 21:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:23.778 21:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:23.779 21:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:23.779 21:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:23.779 21:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:23.779 21:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:23.779 21:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:23.779 21:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:23.779 21:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:23.779 21:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:23.779 21:08:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:23.779 21:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:23.779 21:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:23.779 21:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:23.779 21:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:23.779 21:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:23.779 21:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:23.779 21:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:23.779 21:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:23.779 21:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:23.779 21:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:23.779 21:08:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:23.779 21:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:23.779 21:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:23.779 21:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:24.038 21:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:24.038 21:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:24.038 21:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:24.038 21:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:24.038 21:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:24.038 21:08:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:24.038 21:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:24.038 21:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:24.296 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:24.296 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:24.296 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:24.296 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:24.296 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:24.296 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:24.296 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:24.296 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:24.296 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:24.296 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:24.296 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:24.296 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:24.296 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:24.296 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:24.296 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:24.296 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:24.296 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:24.296 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:24.296 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:27:24.296 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:24.296 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:24.296 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:24.296 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:24.296 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:24.554 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:24.554 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:24.554 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:24.554 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:24.554 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:24.554 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:24.554 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:24.554 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:24.812 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:24.812 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:24.812 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:25.069 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:25.069 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:25.069 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:25.069 21:08:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:25.069 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:25.069 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:25.069 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:25.069 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:25.069 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:25.069 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:25.069 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:25.069 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:25.069 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:25.069 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:25.069 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:25.070 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:25.070 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:25.070 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:25.070 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:25.070 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:25.070 21:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:25.327 21:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:25.327 21:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:25.327 21:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:25.327 21:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:25.327 21:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:25.327 21:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:25.327 21:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:25.327 21:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:25.586 21:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:25.586 21:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:25.586 21:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:25.586 21:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:25.586 21:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:25.586 21:08:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:25.586 21:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:25.586 21:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:25.586 21:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:25.586 21:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:25.586 21:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:25.586 21:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:25.586 21:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:25.586 21:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:25.586 21:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:25.586 21:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:25.586 21:08:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:25.586 21:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:25.586 21:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:25.586 21:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:25.586 21:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:25.586 21:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:25.586 21:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:25.586 21:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:25.844 21:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:25.844 21:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:25.844 21:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:25.844 21:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:25.844 21:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:25.844 21:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:25.844 21:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:25.844 21:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:26.103 21:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:26.103 21:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:26.103 21:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:26.103 21:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:26.103 21:08:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:26.103 21:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:26.103 21:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:26.103 21:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:26.103 21:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:26.103 21:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:26.103 21:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:26.103 21:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:26.103 21:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:26.103 21:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:26.103 21:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:26.103 21:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:26.103 21:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:27:26.103 21:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:27:26.103 21:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:27:26.103 21:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:27:26.103 21:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:26.103 21:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:27:26.103 21:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:26.103 21:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:26.103 rmmod nvme_tcp 00:27:26.103 rmmod nvme_fabrics 00:27:26.103 rmmod nvme_keyring 00:27:26.103 21:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:26.103 21:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:27:26.103 21:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:27:26.103 21:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 4091711 ']' 00:27:26.103 21:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 4091711 00:27:26.103 21:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 4091711 ']' 00:27:26.103 21:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 4091711 00:27:26.362 21:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:27:26.362 21:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:26.362 21:08:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4091711 00:27:26.362 21:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:26.362 21:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:26.362 21:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4091711' 00:27:26.362 killing process with pid 4091711 00:27:26.362 21:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 4091711 00:27:26.362 21:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 4091711 00:27:26.621 21:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:26.621 21:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:26.621 21:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:26.621 21:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:27:26.621 21:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:27:26.621 21:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:26.621 21:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:27:26.621 21:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:26.621 
21:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:26.621 21:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:26.621 21:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:26.621 21:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:28.527 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:28.527 00:27:28.527 real 0m47.525s 00:27:28.527 user 3m18.885s 00:27:28.527 sys 0m21.700s 00:27:28.527 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:28.527 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:28.527 ************************************ 00:27:28.527 END TEST nvmf_ns_hotplug_stress 00:27:28.527 ************************************ 00:27:28.527 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:27:28.527 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:28.527 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:28.527 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:28.527 ************************************ 00:27:28.527 START TEST nvmf_delete_subsystem 00:27:28.527 ************************************ 00:27:28.527 21:08:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:27:28.787 * Looking for test storage... 00:27:28.787 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:28.787 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:28.787 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:27:28.787 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:28.787 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:28.787 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:28.787 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:28.787 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:28.787 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:27:28.787 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:27:28.787 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:27:28.787 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:27:28.787 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:27:28.787 21:08:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:27:28.787 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:27:28.787 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:28.787 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:27:28.787 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:27:28.787 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:28.787 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:28.787 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:27:28.787 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:27:28.787 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:28.787 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:27:28.787 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:27:28.787 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:27:28.787 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:27:28.787 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:28.787 21:08:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:27:28.787 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:27:28.787 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:28.787 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:28.787 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:27:28.787 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:28.787 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:28.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:28.787 --rc genhtml_branch_coverage=1 00:27:28.787 --rc genhtml_function_coverage=1 00:27:28.787 --rc genhtml_legend=1 00:27:28.787 --rc geninfo_all_blocks=1 00:27:28.787 --rc geninfo_unexecuted_blocks=1 00:27:28.787 00:27:28.787 ' 00:27:28.787 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:28.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:28.787 --rc genhtml_branch_coverage=1 00:27:28.787 --rc genhtml_function_coverage=1 00:27:28.787 --rc genhtml_legend=1 00:27:28.787 --rc geninfo_all_blocks=1 00:27:28.787 --rc geninfo_unexecuted_blocks=1 00:27:28.787 00:27:28.787 ' 00:27:28.787 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:28.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:28.787 --rc genhtml_branch_coverage=1 00:27:28.787 --rc 
genhtml_function_coverage=1 00:27:28.787 --rc genhtml_legend=1 00:27:28.787 --rc geninfo_all_blocks=1 00:27:28.787 --rc geninfo_unexecuted_blocks=1 00:27:28.787 00:27:28.787 ' 00:27:28.787 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:28.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:28.787 --rc genhtml_branch_coverage=1 00:27:28.787 --rc genhtml_function_coverage=1 00:27:28.787 --rc genhtml_legend=1 00:27:28.787 --rc geninfo_all_blocks=1 00:27:28.787 --rc geninfo_unexecuted_blocks=1 00:27:28.787 00:27:28.787 ' 00:27:28.787 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:28.787 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:27:28.787 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:28.787 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:28.787 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:28.787 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:28.787 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:28.787 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:28.788 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:28.788 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # 
NVMF_TRANSPORT_OPTS= 00:27:28.788 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:28.788 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:28.788 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:28.788 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:28.788 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:28.788 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:28.788 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:28.788 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:28.788 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:28.788 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:27:28.788 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:28.788 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:28.788 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:27:28.788 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.788 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.788 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.788 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:27:28.788 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.788 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:27:28.788 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:28.788 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:28.788 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:28.788 21:08:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:28.788 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:28.788 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:28.788 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:28.788 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:28.788 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:28.788 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:28.788 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:27:28.788 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:28.788 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:28.788 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:28.788 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:28.788 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:28.788 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:28.788 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:27:28.788 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:28.788 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:28.788 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:28.788 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:27:28.788 21:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:30.693 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:30.693 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:27:30.693 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:30.693 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:30.693 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:30.693 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:30.693 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:30.693 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:27:30.693 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:30.693 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- 
# e810=() 00:27:30.693 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:27:30.693 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:27:30.693 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:27:30.693 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:27:30.693 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:27:30.693 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:30.693 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:30.693 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:30.693 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:30.693 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:30.693 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:30.694 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:30.694 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:30.694 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:30.694 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:30.694 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:30.694 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:30.694 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:30.694 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:30.694 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:30.694 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:30.694 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:30.694 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:30.694 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:30.694 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:30.694 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:30.694 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:30.694 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:30.694 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem 
-- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:30.694 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:30.694 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:30.694 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:30.694 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:30.694 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:30.694 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:30.694 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:30.694 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:30.694 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:30.694 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:30.694 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:30.694 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:30.694 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:30.694 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:30.694 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:30.694 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:30.694 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:30.694 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:30.694 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:30.694 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:30.694 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:30.694 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:30.694 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:30.694 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:30.694 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:30.694 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:30.694 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:30.694 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:30.694 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:30.694 21:08:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:30.694 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:30.694 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:30.694 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:30.694 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:30.694 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:27:30.694 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:30.694 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:30.694 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:30.694 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:30.694 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:30.694 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:30.694 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:30.694 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:30.694 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:30.694 21:08:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:30.695 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:30.695 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:30.695 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:30.695 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:30.695 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:30.695 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:30.695 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:30.695 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:30.954 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:30.954 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:30.954 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:30.954 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:30.954 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem 
-- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:30.954 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:30.954 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:30.954 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:30.954 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:30.954 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:27:30.954 00:27:30.954 --- 10.0.0.2 ping statistics --- 00:27:30.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:30.954 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:27:30.954 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:30.954 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:30.954 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:27:30.954 00:27:30.954 --- 10.0.0.1 ping statistics --- 00:27:30.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:30.954 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:27:30.954 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:30.954 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:27:30.954 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:30.954 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:30.954 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:30.954 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:30.954 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:30.954 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:30.954 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:30.954 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:27:30.954 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:30.955 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:30.955 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:27:30.955 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=4099025 00:27:30.955 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:27:30.955 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 4099025 00:27:30.955 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 4099025 ']' 00:27:30.955 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:30.955 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:30.955 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:30.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:30.955 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:30.955 21:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:30.955 [2024-11-26 21:08:21.842592] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:30.955 [2024-11-26 21:08:21.843770] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:27:30.955 [2024-11-26 21:08:21.843830] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:31.213 [2024-11-26 21:08:21.915773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:31.213 [2024-11-26 21:08:21.972729] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:31.213 [2024-11-26 21:08:21.972800] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:31.213 [2024-11-26 21:08:21.972815] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:31.213 [2024-11-26 21:08:21.972826] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:31.213 [2024-11-26 21:08:21.972837] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:31.213 [2024-11-26 21:08:21.974297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:31.213 [2024-11-26 21:08:21.974304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:31.213 [2024-11-26 21:08:22.064077] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:31.213 [2024-11-26 21:08:22.064114] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:31.213 [2024-11-26 21:08:22.064358] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:27:31.214 21:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:31.214 21:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:27:31.214 21:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:31.214 21:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:31.214 21:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:31.214 21:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:31.214 21:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:31.214 21:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.214 21:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:31.214 [2024-11-26 21:08:22.119004] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:31.214 21:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.214 21:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:27:31.214 21:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.214 21:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@10 -- # set +x 00:27:31.214 21:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.214 21:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:31.214 21:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.214 21:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:31.214 [2024-11-26 21:08:22.135265] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:31.214 21:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.214 21:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:27:31.214 21:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.214 21:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:31.214 NULL1 00:27:31.214 21:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.214 21:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:27:31.214 21:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.214 21:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 
00:27:31.472 Delay0 00:27:31.472 21:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.473 21:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:31.473 21:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.473 21:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:31.473 21:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.473 21:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=4099055 00:27:31.473 21:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:27:31.473 21:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:27:31.473 [2024-11-26 21:08:22.212070] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:27:33.371 21:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:33.371 21:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.371 21:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:33.630 Read completed with error (sct=0, sc=8) 00:27:33.630 Read completed with error (sct=0, sc=8) 00:27:33.630 Write completed with error (sct=0, sc=8) 00:27:33.630 starting I/O failed: -6 00:27:33.630 Read completed with error (sct=0, sc=8) 00:27:33.630 Read completed with error (sct=0, sc=8) 00:27:33.630 Read completed with error (sct=0, sc=8) 00:27:33.630 Read completed with error (sct=0, sc=8) 00:27:33.630 starting I/O failed: -6 00:27:33.630 Read completed with error (sct=0, sc=8) 00:27:33.630 Read completed with error (sct=0, sc=8) 00:27:33.630 Read completed with error (sct=0, sc=8) 00:27:33.630 Read completed with error (sct=0, sc=8) 00:27:33.630 starting I/O failed: -6 00:27:33.630 Write completed with error (sct=0, sc=8) 00:27:33.630 Read completed with error (sct=0, sc=8) 00:27:33.630 Read completed with error (sct=0, sc=8) 00:27:33.630 Read completed with error (sct=0, sc=8) 00:27:33.630 starting I/O failed: -6 00:27:33.630 Read completed with error (sct=0, sc=8) 00:27:33.630 Read completed with error (sct=0, sc=8) 00:27:33.630 Read completed with error (sct=0, sc=8) 00:27:33.630 Write completed with error (sct=0, sc=8) 00:27:33.630 starting I/O failed: -6 00:27:33.630 Read completed with error (sct=0, sc=8) 00:27:33.630 Read completed with error (sct=0, sc=8) 00:27:33.630 Write completed with error (sct=0, sc=8) 00:27:33.630 Read completed with error (sct=0, sc=8) 00:27:33.630 starting I/O failed: -6 00:27:33.630 Read completed with error (sct=0, sc=8) 00:27:33.630 Write completed with error (sct=0, sc=8) 
00:27:33.630 Write completed with error (sct=0, sc=8) 00:27:33.630 Write completed with error (sct=0, sc=8) 00:27:33.630 starting I/O failed: -6 00:27:33.630 Read completed with error (sct=0, sc=8) 00:27:33.630 Read completed with error (sct=0, sc=8) 00:27:33.630 Read completed with error (sct=0, sc=8) 00:27:33.630 Write completed with error (sct=0, sc=8) 00:27:33.630 starting I/O failed: -6 00:27:33.630 Read completed with error (sct=0, sc=8) 00:27:33.630 Read completed with error (sct=0, sc=8) 00:27:33.630 Read completed with error (sct=0, sc=8) 00:27:33.630 Write completed with error (sct=0, sc=8) 00:27:33.630 starting I/O failed: -6 00:27:33.630 Read completed with error (sct=0, sc=8) 00:27:33.630 Read completed with error (sct=0, sc=8) 00:27:33.630 Read completed with error (sct=0, sc=8) 00:27:33.630 Read completed with error (sct=0, sc=8) 00:27:33.630 starting I/O failed: -6 00:27:33.630 Read completed with error (sct=0, sc=8) 00:27:33.630 Read completed with error (sct=0, sc=8) 00:27:33.630 Read completed with error (sct=0, sc=8) 00:27:33.630 Read completed with error (sct=0, sc=8) 00:27:33.630 starting I/O failed: -6 00:27:33.630 Write completed with error (sct=0, sc=8) 00:27:33.630 Read completed with error (sct=0, sc=8) 00:27:33.630 Write completed with error (sct=0, sc=8) 00:27:33.630 Read completed with error (sct=0, sc=8) 00:27:33.630 starting I/O failed: -6 00:27:33.630 Read completed with error (sct=0, sc=8) 00:27:33.630 [2024-11-26 21:08:24.431963] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ae680 is same with the state(6) to be set 00:27:33.630 Write completed with error (sct=0, sc=8) 00:27:33.630 Write completed with error (sct=0, sc=8) 00:27:33.630 Write completed with error (sct=0, sc=8) 00:27:33.630 starting I/O failed: -6 00:27:33.630 Read completed with error (sct=0, sc=8) 00:27:33.630 Read completed with error (sct=0, sc=8) 00:27:33.630 Read completed with error (sct=0, sc=8) 00:27:33.630 Read 
completed with error (sct=0, sc=8) 00:27:33.630 starting I/O failed: -6 00:27:33.630 Write completed with error (sct=0, sc=8) 00:27:33.630 Read completed with error (sct=0, sc=8) 00:27:33.630 Write completed with error (sct=0, sc=8) 00:27:33.630 Write completed with error (sct=0, sc=8) 00:27:33.630 starting I/O failed: -6 00:27:33.630 Read completed with error (sct=0, sc=8) 00:27:33.630 Read completed with error (sct=0, sc=8) 00:27:33.630 Read completed with error (sct=0, sc=8) 00:27:33.630 Read completed with error (sct=0, sc=8) 00:27:33.630 starting I/O failed: -6 00:27:33.630 Read completed with error (sct=0, sc=8) 00:27:33.630 Write completed with error (sct=0, sc=8) 00:27:33.630 Write completed with error (sct=0, sc=8) 00:27:33.630 Read completed with error (sct=0, sc=8) 00:27:33.630 starting I/O failed: -6 00:27:33.630 Write completed with error (sct=0, sc=8) 00:27:33.630 Read completed with error (sct=0, sc=8) 00:27:33.630 Read completed with error (sct=0, sc=8) 00:27:33.630 Read completed with error (sct=0, sc=8) 00:27:33.630 starting I/O failed: -6 00:27:33.630 Write completed with error (sct=0, sc=8) 00:27:33.630 Write completed with error (sct=0, sc=8) 00:27:33.630 Write completed with error (sct=0, sc=8) 00:27:33.630 Read completed with error (sct=0, sc=8) 00:27:33.630 starting I/O failed: -6 00:27:33.630 Read completed with error (sct=0, sc=8) 00:27:33.630 Write completed with error (sct=0, sc=8) 00:27:33.630 Read completed with error (sct=0, sc=8) 00:27:33.630 Read completed with error (sct=0, sc=8) 00:27:33.630 starting I/O failed: -6 00:27:33.630 Read completed with error (sct=0, sc=8) 00:27:33.630 Read completed with error (sct=0, sc=8) 00:27:33.630 Read completed with error (sct=0, sc=8) 00:27:33.630 Read completed with error (sct=0, sc=8) 00:27:33.630 starting I/O failed: -6 00:27:33.630 Read completed with error (sct=0, sc=8) 00:27:33.630 Read completed with error (sct=0, sc=8) 00:27:33.630 Read completed with error (sct=0, sc=8) 00:27:33.630 
Write completed with error (sct=0, sc=8) 00:27:33.630 starting I/O failed: -6 00:27:33.630 Write completed with error (sct=0, sc=8) 00:27:33.630 starting I/O failed: -6 00:27:33.630 Read completed with error (sct=0, sc=8) 00:27:33.630 Read completed with error (sct=0, sc=8) 00:27:33.630 Write completed with error (sct=0, sc=8) 00:27:33.630 starting I/O failed: -6 00:27:33.630 Read completed with error (sct=0, sc=8) 00:27:33.630 starting I/O failed: -6 00:27:33.630 Read completed with error (sct=0, sc=8) 00:27:33.630 Read completed with error (sct=0, sc=8) 00:27:33.630 Read completed with error (sct=0, sc=8) 00:27:33.630 starting I/O failed: -6 00:27:33.630 Write completed with error (sct=0, sc=8) 00:27:33.630 starting I/O failed: -6 00:27:33.630 Write completed with error (sct=0, sc=8) 00:27:33.631 Write completed with error (sct=0, sc=8) 00:27:33.631 Read completed with error (sct=0, sc=8) 00:27:33.631 starting I/O failed: -6 00:27:33.631 Read completed with error (sct=0, sc=8) 00:27:33.631 starting I/O failed: -6 00:27:33.631 Read completed with error (sct=0, sc=8) 00:27:33.631 Read completed with error (sct=0, sc=8) 00:27:33.631 Read completed with error (sct=0, sc=8) 00:27:33.631 starting I/O failed: -6 00:27:33.631 Read completed with error (sct=0, sc=8) 00:27:33.631 starting I/O failed: -6 00:27:33.631 Write completed with error (sct=0, sc=8) 00:27:33.631 Read completed with error (sct=0, sc=8) 00:27:33.631 Read completed with error (sct=0, sc=8) 00:27:33.631 starting I/O failed: -6 00:27:33.631 Read completed with error (sct=0, sc=8) 00:27:33.631 starting I/O failed: -6 00:27:33.631 Write completed with error (sct=0, sc=8) 00:27:33.631 Write completed with error (sct=0, sc=8) 00:27:33.631 Read completed with error (sct=0, sc=8) 00:27:33.631 starting I/O failed: -6 00:27:33.631 Write completed with error (sct=0, sc=8) 00:27:33.631 starting I/O failed: -6 00:27:33.631 Read completed with error (sct=0, sc=8) 00:27:33.631 Read completed with error (sct=0, sc=8) 
00:27:33.631 Read completed with error (sct=0, sc=8) 00:27:33.631 starting I/O failed: -6 00:27:33.631 Read completed with error (sct=0, sc=8) 00:27:33.631 starting I/O failed: -6 00:27:33.631 Read completed with error (sct=0, sc=8) 00:27:33.631 Read completed with error (sct=0, sc=8) 00:27:33.631 Read completed with error (sct=0, sc=8) 00:27:33.631 starting I/O failed: -6 00:27:33.631 Write completed with error (sct=0, sc=8) 00:27:33.631 starting I/O failed: -6 00:27:33.631 Write completed with error (sct=0, sc=8) 00:27:33.631 Write completed with error (sct=0, sc=8) 00:27:33.631 Read completed with error (sct=0, sc=8) 00:27:33.631 starting I/O failed: -6 00:27:33.631 Read completed with error (sct=0, sc=8) 00:27:33.631 starting I/O failed: -6 00:27:33.631 Read completed with error (sct=0, sc=8) 00:27:33.631 Write completed with error (sct=0, sc=8) 00:27:33.631 Read completed with error (sct=0, sc=8) 00:27:33.631 starting I/O failed: -6 00:27:33.631 Read completed with error (sct=0, sc=8) 00:27:33.631 starting I/O failed: -6 00:27:33.631 Read completed with error (sct=0, sc=8) 00:27:33.631 Write completed with error (sct=0, sc=8) 00:27:33.631 Write completed with error (sct=0, sc=8) 00:27:33.631 starting I/O failed: -6 00:27:33.631 Write completed with error (sct=0, sc=8) 00:27:33.631 starting I/O failed: -6 00:27:33.631 Read completed with error (sct=0, sc=8) 00:27:33.631 Write completed with error (sct=0, sc=8) 00:27:33.631 starting I/O failed: -6 00:27:33.631 Read completed with error (sct=0, sc=8) 00:27:33.631 starting I/O failed: -6 00:27:33.631 Write completed with error (sct=0, sc=8) 00:27:33.631 starting I/O failed: -6 00:27:33.631 Write completed with error (sct=0, sc=8) 00:27:33.631 Read completed with error (sct=0, sc=8) 00:27:33.631 starting I/O failed: -6 00:27:33.631 Read completed with error (sct=0, sc=8) 00:27:33.631 starting I/O failed: -6 00:27:33.631 Read completed with error (sct=0, sc=8) 00:27:33.631 starting I/O failed: -6 00:27:33.631 Write 
completed with error (sct=0, sc=8) 00:27:33.631 Read completed with error (sct=0, sc=8) 00:27:33.631 starting I/O failed: -6 00:27:33.631 Write completed with error (sct=0, sc=8) 00:27:33.631 starting I/O failed: -6 00:27:33.631 Read completed with error (sct=0, sc=8) 00:27:33.631 starting I/O failed: -6 00:27:33.631 Read completed with error (sct=0, sc=8) 00:27:33.631 Read completed with error (sct=0, sc=8) 00:27:33.631 starting I/O failed: -6 00:27:33.631 Read completed with error (sct=0, sc=8) 00:27:33.631 starting I/O failed: -6 00:27:33.631 Write completed with error (sct=0, sc=8) 00:27:33.631 starting I/O failed: -6 00:27:33.631 Read completed with error (sct=0, sc=8) 00:27:33.631 Read completed with error (sct=0, sc=8) 00:27:33.631 starting I/O failed: -6 00:27:33.631 Read completed with error (sct=0, sc=8) 00:27:33.631 starting I/O failed: -6 00:27:33.631 Read completed with error (sct=0, sc=8) 00:27:33.631 starting I/O failed: -6 00:27:33.631 Read completed with error (sct=0, sc=8) 00:27:33.631 Read completed with error (sct=0, sc=8) 00:27:33.631 starting I/O failed: -6 00:27:33.631 Read completed with error (sct=0, sc=8) 00:27:33.631 starting I/O failed: -6 00:27:33.631 Read completed with error (sct=0, sc=8) 00:27:33.631 starting I/O failed: -6 00:27:33.631 Read completed with error (sct=0, sc=8) 00:27:33.631 Read completed with error (sct=0, sc=8) 00:27:33.631 starting I/O failed: -6 00:27:33.631 Read completed with error (sct=0, sc=8) 00:27:33.631 starting I/O failed: -6 00:27:33.631 Read completed with error (sct=0, sc=8) 00:27:33.631 starting I/O failed: -6 00:27:33.631 Read completed with error (sct=0, sc=8) 00:27:33.631 Read completed with error (sct=0, sc=8) 00:27:33.631 starting I/O failed: -6 00:27:33.631 Write completed with error (sct=0, sc=8) 00:27:33.631 starting I/O failed: -6 00:27:33.631 Write completed with error (sct=0, sc=8) 00:27:33.631 starting I/O failed: -6 00:27:33.631 Write completed with error (sct=0, sc=8) 00:27:33.631 Read 
completed with error (sct=0, sc=8) 00:27:33.631 starting I/O failed: -6 00:27:33.631 Read completed with error (sct=0, sc=8) 00:27:33.631 starting I/O failed: -6 00:27:33.631 Read completed with error (sct=0, sc=8) 00:27:33.631 starting I/O failed: -6 00:27:33.631 Read completed with error (sct=0, sc=8) 00:27:33.631 Read completed with error (sct=0, sc=8) 00:27:33.631 starting I/O failed: -6 00:27:33.631 Write completed with error (sct=0, sc=8) 00:27:33.631 starting I/O failed: -6 00:27:33.631 Read completed with error (sct=0, sc=8) 00:27:33.631 starting I/O failed: -6 00:27:33.631 Write completed with error (sct=0, sc=8) 00:27:33.631 Read completed with error (sct=0, sc=8) 00:27:33.631 starting I/O failed: -6 00:27:33.631 Read completed with error (sct=0, sc=8) 00:27:33.631 starting I/O failed: -6 00:27:33.631 Read completed with error (sct=0, sc=8) 00:27:33.631 starting I/O failed: -6 00:27:33.631 Write completed with error (sct=0, sc=8) 00:27:33.631 Read completed with error (sct=0, sc=8) 00:27:33.631 starting I/O failed: -6 00:27:33.631 Read completed with error (sct=0, sc=8) 00:27:33.631 starting I/O failed: -6 00:27:33.631 Write completed with error (sct=0, sc=8) 00:27:33.631 starting I/O failed: -6 00:27:33.631 Read completed with error (sct=0, sc=8) 00:27:33.631 Write completed with error (sct=0, sc=8) 00:27:33.631 starting I/O failed: -6 00:27:33.631 Write completed with error (sct=0, sc=8) 00:27:33.631 starting I/O failed: -6 00:27:33.631 Read completed with error (sct=0, sc=8) 00:27:33.631 starting I/O failed: -6 00:27:33.631 Read completed with error (sct=0, sc=8) 00:27:33.631 Write completed with error (sct=0, sc=8) 00:27:33.631 starting I/O failed: -6 00:27:33.631 Read completed with error (sct=0, sc=8) 00:27:33.631 starting I/O failed: -6 00:27:33.631 Read completed with error (sct=0, sc=8) 00:27:33.631 starting I/O failed: -6 00:27:33.631 Write completed with error (sct=0, sc=8) 00:27:33.631 Read completed with error (sct=0, sc=8) 00:27:33.631 
starting I/O failed: -6 00:27:33.631 Read completed with error (sct=0, sc=8) 00:27:33.631 starting I/O failed: -6 00:27:34.621 [2024-11-26 21:08:25.394264] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af9b0 is same with the state(6) to be set 00:27:34.621 Read completed with error (sct=0, sc=8) 00:27:34.621 Read completed with error (sct=0, sc=8) 00:27:34.621 Write completed with error (sct=0, sc=8) 00:27:34.621 Write completed with error (sct=0, sc=8) 00:27:34.621 Write completed with error (sct=0, sc=8) 00:27:34.621 Write completed with error (sct=0, sc=8) 00:27:34.621 Write completed with error (sct=0, sc=8) 00:27:34.621 Read completed with error (sct=0, sc=8) 00:27:34.621 Write completed with error (sct=0, sc=8) 00:27:34.621 Read completed with error (sct=0, sc=8) 00:27:34.621 Read completed with error (sct=0, sc=8) 00:27:34.621 Read completed with error (sct=0, sc=8) 00:27:34.621 Write completed with error (sct=0, sc=8) 00:27:34.621 Write completed with error (sct=0, sc=8) 00:27:34.621 Read completed with error (sct=0, sc=8) 00:27:34.621 Read completed with error (sct=0, sc=8) 00:27:34.621 Read completed with error (sct=0, sc=8) 00:27:34.621 Read completed with error (sct=0, sc=8) 00:27:34.621 Read completed with error (sct=0, sc=8) 00:27:34.621 Write completed with error (sct=0, sc=8) 00:27:34.621 Write completed with error (sct=0, sc=8) 00:27:34.621 Read completed with error (sct=0, sc=8) 00:27:34.621 Read completed with error (sct=0, sc=8) 00:27:34.621 Write completed with error (sct=0, sc=8) 00:27:34.621 Read completed with error (sct=0, sc=8) 00:27:34.621 Write completed with error (sct=0, sc=8) 00:27:34.621 Read completed with error (sct=0, sc=8) 00:27:34.621 Read completed with error (sct=0, sc=8) 00:27:34.621 [2024-11-26 21:08:25.436973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ae2c0 is same with the state(6) to be set 00:27:34.621 Read completed with error (sct=0, sc=8) 
00:27:34.621 Write completed with error (sct=0, sc=8) 00:27:34.621 Read completed with error (sct=0, sc=8) 00:27:34.621 Read completed with error (sct=0, sc=8) 00:27:34.621 Write completed with error (sct=0, sc=8) 00:27:34.621 Read completed with error (sct=0, sc=8) 00:27:34.621 Read completed with error (sct=0, sc=8) 00:27:34.621 Read completed with error (sct=0, sc=8) 00:27:34.621 Read completed with error (sct=0, sc=8) 00:27:34.621 Read completed with error (sct=0, sc=8) 00:27:34.621 Read completed with error (sct=0, sc=8) 00:27:34.621 Read completed with error (sct=0, sc=8) 00:27:34.621 Write completed with error (sct=0, sc=8) 00:27:34.621 Read completed with error (sct=0, sc=8) 00:27:34.621 Write completed with error (sct=0, sc=8) 00:27:34.621 Read completed with error (sct=0, sc=8) 00:27:34.621 Read completed with error (sct=0, sc=8) 00:27:34.621 Read completed with error (sct=0, sc=8) 00:27:34.621 Read completed with error (sct=0, sc=8) 00:27:34.621 Read completed with error (sct=0, sc=8) 00:27:34.621 Read completed with error (sct=0, sc=8) 00:27:34.621 Read completed with error (sct=0, sc=8) 00:27:34.621 Read completed with error (sct=0, sc=8) 00:27:34.621 Read completed with error (sct=0, sc=8) 00:27:34.621 Write completed with error (sct=0, sc=8) 00:27:34.621 Write completed with error (sct=0, sc=8) 00:27:34.621 Read completed with error (sct=0, sc=8) 00:27:34.621 Write completed with error (sct=0, sc=8) 00:27:34.621 [2024-11-26 21:08:25.437255] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ae4a0 is same with the state(6) to be set 00:27:34.621 Read completed with error (sct=0, sc=8) 00:27:34.621 Write completed with error (sct=0, sc=8) 00:27:34.621 Write completed with error (sct=0, sc=8) 00:27:34.621 Read completed with error (sct=0, sc=8) 00:27:34.621 Read completed with error (sct=0, sc=8) 00:27:34.621 Read completed with error (sct=0, sc=8) 00:27:34.621 Read completed with error (sct=0, sc=8) 00:27:34.621 Write 
completed with error (sct=0, sc=8) 00:27:34.621 Read completed with error (sct=0, sc=8) 00:27:34.621 Read completed with error (sct=0, sc=8) 00:27:34.621 Read completed with error (sct=0, sc=8) 00:27:34.621 Read completed with error (sct=0, sc=8) 00:27:34.621 Read completed with error (sct=0, sc=8) 00:27:34.621 Read completed with error (sct=0, sc=8) 00:27:34.621 Read completed with error (sct=0, sc=8) 00:27:34.621 Read completed with error (sct=0, sc=8) 00:27:34.621 Read completed with error (sct=0, sc=8) 00:27:34.622 Read completed with error (sct=0, sc=8) 00:27:34.622 Read completed with error (sct=0, sc=8) 00:27:34.622 Read completed with error (sct=0, sc=8) 00:27:34.622 Write completed with error (sct=0, sc=8) 00:27:34.622 Read completed with error (sct=0, sc=8) 00:27:34.622 Write completed with error (sct=0, sc=8) 00:27:34.622 Write completed with error (sct=0, sc=8) 00:27:34.622 Read completed with error (sct=0, sc=8) 00:27:34.622 Write completed with error (sct=0, sc=8) 00:27:34.622 Read completed with error (sct=0, sc=8) 00:27:34.622 Read completed with error (sct=0, sc=8) 00:27:34.622 [2024-11-26 21:08:25.437466] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ae860 is same with the state(6) to be set 00:27:34.622 Read completed with error (sct=0, sc=8) 00:27:34.622 Read completed with error (sct=0, sc=8) 00:27:34.622 Read completed with error (sct=0, sc=8) 00:27:34.622 Read completed with error (sct=0, sc=8) 00:27:34.622 Read completed with error (sct=0, sc=8) 00:27:34.622 Write completed with error (sct=0, sc=8) 00:27:34.622 Write completed with error (sct=0, sc=8) 00:27:34.622 Read completed with error (sct=0, sc=8) 00:27:34.622 Read completed with error (sct=0, sc=8) 00:27:34.622 Write completed with error (sct=0, sc=8) 00:27:34.622 Read completed with error (sct=0, sc=8) 00:27:34.622 Write completed with error (sct=0, sc=8) 00:27:34.622 Read completed with error (sct=0, sc=8) 00:27:34.622 Read completed with error 
(sct=0, sc=8) 00:27:34.622 Read completed with error (sct=0, sc=8) 00:27:34.622 Read completed with error (sct=0, sc=8) 00:27:34.622 Write completed with error (sct=0, sc=8) 00:27:34.622 Read completed with error (sct=0, sc=8) 00:27:34.622 Read completed with error (sct=0, sc=8) 00:27:34.622 Read completed with error (sct=0, sc=8) 00:27:34.622 Read completed with error (sct=0, sc=8) 00:27:34.622 Write completed with error (sct=0, sc=8) 00:27:34.622 Read completed with error (sct=0, sc=8) 00:27:34.622 Read completed with error (sct=0, sc=8) 00:27:34.622 Read completed with error (sct=0, sc=8) 00:27:34.622 Read completed with error (sct=0, sc=8) 00:27:34.622 Read completed with error (sct=0, sc=8) 00:27:34.622 Read completed with error (sct=0, sc=8) 00:27:34.622 Read completed with error (sct=0, sc=8) 00:27:34.622 Read completed with error (sct=0, sc=8) 00:27:34.622 Write completed with error (sct=0, sc=8) 00:27:34.622 Read completed with error (sct=0, sc=8) 00:27:34.622 Read completed with error (sct=0, sc=8) 00:27:34.622 Read completed with error (sct=0, sc=8) 00:27:34.622 Read completed with error (sct=0, sc=8) 00:27:34.622 Read completed with error (sct=0, sc=8) 00:27:34.622 Read completed with error (sct=0, sc=8) 00:27:34.622 Read completed with error (sct=0, sc=8) 00:27:34.622 Write completed with error (sct=0, sc=8) 00:27:34.622 Read completed with error (sct=0, sc=8) 00:27:34.622 Read completed with error (sct=0, sc=8) 00:27:34.622 [2024-11-26 21:08:25.438698] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa45800d350 is same with the state(6) to be set 00:27:34.622 Initializing NVMe Controllers 00:27:34.622 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:34.622 Controller IO queue size 128, less than required. 00:27:34.622 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:27:34.622 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:27:34.622 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:27:34.622 Initialization complete. Launching workers. 00:27:34.622 ======================================================== 00:27:34.622 Latency(us) 00:27:34.622 Device Information : IOPS MiB/s Average min max 00:27:34.622 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 176.70 0.09 961067.58 941.75 1013269.24 00:27:34.622 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 170.74 0.08 869780.45 405.78 1013970.47 00:27:34.622 ======================================================== 00:27:34.622 Total : 347.44 0.17 916206.48 405.78 1013970.47 00:27:34.622 00:27:34.622 [2024-11-26 21:08:25.439227] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9af9b0 (9): Bad file descriptor 00:27:34.622 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:27:34.622 21:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.622 21:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:27:34.622 21:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 4099055 00:27:34.622 21:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:27:35.189 21:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:27:35.189 21:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 4099055 00:27:35.189 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 
35: kill: (4099055) - No such process 00:27:35.189 21:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 4099055 00:27:35.189 21:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:27:35.189 21:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 4099055 00:27:35.189 21:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:27:35.189 21:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:35.189 21:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:27:35.189 21:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:35.189 21:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 4099055 00:27:35.189 21:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:27:35.189 21:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:35.189 21:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:35.189 21:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:35.189 21:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:27:35.189 21:08:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.189 21:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:35.189 21:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.189 21:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:35.189 21:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.189 21:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:35.189 [2024-11-26 21:08:25.959176] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:35.189 21:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.189 21:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:35.189 21:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.189 21:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:35.189 21:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.189 21:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=4099457 00:27:35.189 21:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:27:35.189 21:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:27:35.189 21:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4099457 00:27:35.189 21:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:35.189 [2024-11-26 21:08:26.018032] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:27:35.754 21:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:35.754 21:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4099457 00:27:35.754 21:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:36.317 21:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:36.317 21:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4099457 00:27:36.317 21:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:36.574 21:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:36.574 21:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 
4099457 00:27:36.574 21:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:37.138 21:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:37.138 21:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4099457 00:27:37.138 21:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:37.703 21:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:37.703 21:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4099457 00:27:37.703 21:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:38.268 21:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:38.268 21:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4099457 00:27:38.269 21:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:38.525 Initializing NVMe Controllers 00:27:38.525 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:38.525 Controller IO queue size 128, less than required. 00:27:38.525 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:38.525 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:27:38.525 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:27:38.525 Initialization complete. Launching workers. 
00:27:38.525 ======================================================== 00:27:38.525 Latency(us) 00:27:38.525 Device Information : IOPS MiB/s Average min max 00:27:38.525 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1006353.03 1000225.29 1045624.64 00:27:38.525 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005815.42 1000197.39 1045770.46 00:27:38.525 ======================================================== 00:27:38.525 Total : 256.00 0.12 1006084.23 1000197.39 1045770.46 00:27:38.525 00:27:38.784 21:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:38.784 21:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4099457 00:27:38.784 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (4099457) - No such process 00:27:38.784 21:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 4099457 00:27:38.784 21:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:27:38.784 21:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:27:38.784 21:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:38.784 21:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:27:38.784 21:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:38.784 21:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:27:38.784 21:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:27:38.784 21:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:38.784 rmmod nvme_tcp 00:27:38.784 rmmod nvme_fabrics 00:27:38.784 rmmod nvme_keyring 00:27:38.784 21:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:38.784 21:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:27:38.784 21:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:27:38.784 21:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 4099025 ']' 00:27:38.784 21:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 4099025 00:27:38.784 21:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 4099025 ']' 00:27:38.784 21:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 4099025 00:27:38.784 21:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:27:38.784 21:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:38.784 21:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4099025 00:27:38.784 21:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:38.784 21:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:38.784 21:08:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4099025' 00:27:38.784 killing process with pid 4099025 00:27:38.784 21:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 4099025 00:27:38.784 21:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 4099025 00:27:39.043 21:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:39.043 21:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:39.043 21:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:39.043 21:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:27:39.043 21:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:27:39.043 21:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:39.043 21:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:27:39.043 21:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:39.043 21:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:39.043 21:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:39.043 21:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:39.043 21:08:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:40.946 21:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:40.946 00:27:40.946 real 0m12.415s 00:27:40.946 user 0m24.934s 00:27:40.946 sys 0m3.680s 00:27:40.946 21:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:40.946 21:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:40.946 ************************************ 00:27:40.946 END TEST nvmf_delete_subsystem 00:27:40.946 ************************************ 00:27:40.946 21:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:27:40.946 21:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:40.946 21:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:40.946 21:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:41.205 ************************************ 00:27:41.205 START TEST nvmf_host_management 00:27:41.205 ************************************ 00:27:41.205 21:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:27:41.205 * Looking for test storage... 
00:27:41.205 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:41.205 21:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:41.205 21:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:27:41.205 21:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:41.205 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:41.205 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:41.205 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:41.205 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:41.205 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:27:41.205 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:27:41.205 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:27:41.205 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:27:41.205 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:27:41.205 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:27:41.205 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:27:41.205 21:08:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:41.205 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:27:41.205 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:27:41.205 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:41.205 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:41.205 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:27:41.205 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:27:41.205 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:41.205 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:27:41.205 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:27:41.205 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:27:41.205 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:27:41.205 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:41.205 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:27:41.205 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:27:41.205 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:41.205 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:41.205 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:27:41.205 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:41.205 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:41.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:41.205 --rc genhtml_branch_coverage=1 00:27:41.205 --rc genhtml_function_coverage=1 00:27:41.205 --rc genhtml_legend=1 00:27:41.205 --rc geninfo_all_blocks=1 00:27:41.205 --rc geninfo_unexecuted_blocks=1 00:27:41.205 00:27:41.205 ' 00:27:41.205 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:41.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:41.205 --rc genhtml_branch_coverage=1 00:27:41.205 --rc genhtml_function_coverage=1 00:27:41.205 --rc genhtml_legend=1 00:27:41.205 --rc geninfo_all_blocks=1 00:27:41.206 --rc geninfo_unexecuted_blocks=1 00:27:41.206 00:27:41.206 ' 00:27:41.206 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:41.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:41.206 --rc genhtml_branch_coverage=1 00:27:41.206 --rc genhtml_function_coverage=1 00:27:41.206 --rc genhtml_legend=1 00:27:41.206 --rc geninfo_all_blocks=1 00:27:41.206 --rc geninfo_unexecuted_blocks=1 00:27:41.206 00:27:41.206 ' 00:27:41.206 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:41.206 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:41.206 --rc genhtml_branch_coverage=1 00:27:41.206 --rc genhtml_function_coverage=1 00:27:41.206 --rc genhtml_legend=1 00:27:41.206 --rc geninfo_all_blocks=1 00:27:41.206 --rc geninfo_unexecuted_blocks=1 00:27:41.206 00:27:41.206 ' 00:27:41.206 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:41.206 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:27:41.206 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:41.206 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:41.206 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:41.206 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:41.206 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:41.206 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:41.206 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:41.206 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:41.206 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:41.206 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:41.206 21:08:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:41.206 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:41.206 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:41.206 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:41.206 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:41.206 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:41.206 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:41.206 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:27:41.206 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:41.206 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:41.206 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:41.206 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.206 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.206 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.206 
21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:27:41.206 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.206 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:27:41.206 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:41.206 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:41.206 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:41.206 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:41.206 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:41.206 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:41.206 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:41.206 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:27:41.206 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:41.206 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:41.206 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:41.206 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:41.206 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:27:41.206 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:41.206 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:41.206 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:41.206 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:41.206 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:41.206 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:41.206 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:41.206 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:41.206 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:41.206 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:41.206 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:27:41.206 21:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:43.107 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:43.107 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:27:43.107 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:27:43.108 
21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:43.108 21:08:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:43.108 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:43.108 21:08:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:43.108 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:43.108 21:08:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:43.108 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:43.108 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:43.108 21:08:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:43.108 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:43.366 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:43.366 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:43.366 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:43.366 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:43.366 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:43.366 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:43.366 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:43.366 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:43.366 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:43.366 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:27:43.366 00:27:43.366 --- 10.0.0.2 ping statistics --- 00:27:43.366 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:43.366 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:27:43.366 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:43.366 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:43.366 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:27:43.366 00:27:43.366 --- 10.0.0.1 ping statistics --- 00:27:43.366 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:43.366 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:27:43.366 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:43.366 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:27:43.366 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:43.366 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:43.366 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:43.366 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:43.366 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
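The namespace setup the log performs above (move one NIC port into `cvl_0_0_ns_spdk`, address both sides, open TCP port 4420, ping both ways) can be summarized as a command sequence. Running it for real requires root, so this sketch only builds the argv lists; interface names and addresses are the ones visible in the log.

```python
def build_netns_setup(ns, iface_ns, iface_host, ip_ns, ip_host, port):
    """Return the ip/iptables command lines that isolate one NIC port in a
    namespace so the target (inside the ns) and the initiator (on the host)
    talk over real TCP, mirroring nvmf/common.sh in the log above."""
    ns_exec = ["ip", "netns", "exec", ns]
    return [
        ["ip", "netns", "add", ns],                    # create the namespace
        ["ip", "link", "set", iface_ns, "netns", ns],  # move one port into it
        ["ip", "addr", "add", f"{ip_host}/24", "dev", iface_host],
        ns_exec + ["ip", "addr", "add", f"{ip_ns}/24", "dev", iface_ns],
        ["ip", "link", "set", iface_host, "up"],
        ns_exec + ["ip", "link", "set", iface_ns, "up"],
        ns_exec + ["ip", "link", "set", "lo", "up"],
        # open the NVMe/TCP listener port on the host-facing interface
        ["iptables", "-I", "INPUT", "1", "-i", iface_host,
         "-p", "tcp", "--dport", str(port), "-j", "ACCEPT"],
        # verify connectivity in both directions, as the log does with ping -c 1
        ["ping", "-c", "1", ip_ns],
        ns_exec + ["ping", "-c", "1", ip_host],
    ]

cmds = build_netns_setup("cvl_0_0_ns_spdk", "cvl_0_0", "cvl_0_1",
                         "10.0.0.2", "10.0.0.1", 4420)
```

The design point mirrored here is that the target runs inside the namespace while bdevperf stays on the host, so the NVMe/TCP traffic crosses a real network path rather than loopback.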
00:27:43.366 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:43.366 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:43.366 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:27:43.366 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:27:43.366 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:27:43.366 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:43.366 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:43.366 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:43.366 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=4101909 00:27:43.366 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:27:43.366 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 4101909 00:27:43.366 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 4101909 ']' 00:27:43.366 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:43.366 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:27:43.366 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:43.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:43.366 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:43.366 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:43.366 [2024-11-26 21:08:34.253374] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:43.366 [2024-11-26 21:08:34.254495] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:27:43.366 [2024-11-26 21:08:34.254553] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:43.624 [2024-11-26 21:08:34.333946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:43.624 [2024-11-26 21:08:34.394021] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:43.624 [2024-11-26 21:08:34.394092] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:43.624 [2024-11-26 21:08:34.394105] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:43.624 [2024-11-26 21:08:34.394116] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:43.624 [2024-11-26 21:08:34.394125] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:43.624 [2024-11-26 21:08:34.395657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:43.624 [2024-11-26 21:08:34.395761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:43.624 [2024-11-26 21:08:34.395765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:43.624 [2024-11-26 21:08:34.395721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:43.624 [2024-11-26 21:08:34.494901] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:43.624 [2024-11-26 21:08:34.495117] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:43.624 [2024-11-26 21:08:34.495413] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:27:43.624 [2024-11-26 21:08:34.496124] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:43.624 [2024-11-26 21:08:34.496399] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:27:43.624 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:43.624 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:27:43.624 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:43.624 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:43.624 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:43.624 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:43.624 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:43.624 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.624 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:43.624 [2024-11-26 21:08:34.548560] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:43.882 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.882 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:27:43.882 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:43.882 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:43.882 21:08:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:43.882 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:27:43.882 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:27:43.882 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.882 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:43.882 Malloc0 00:27:43.882 [2024-11-26 21:08:34.616737] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:43.882 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.882 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:27:43.882 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:43.882 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:43.882 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=4101962 00:27:43.882 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 4101962 /var/tmp/bdevperf.sock 00:27:43.882 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 4101962 ']' 00:27:43.882 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local 
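At host_management.sh@22–30 above, the test removes `rpcs.txt`, `cat`s a heredoc of RPC commands, and pipes the whole batch through `rpc_cmd` at once. The heredoc itself is not echoed into the log; the lines below are a hedged reconstruction from the artifacts that do appear (the `Malloc0` bdev and the listener on 10.0.0.2:4420), so the exact sizes and flags are assumptions.

```python
def build_rpc_batch(ip="10.0.0.2", port=4420,
                    nqn="nqn.2016-06.io.spdk:cnode0"):
    """Plausible contents of the rpcs.txt batch piped to rpc_cmd: create a
    malloc bdev, a subsystem, attach the namespace, and add a TCP listener.
    Sizes (64 MiB x 512 B blocks) are illustrative, not from the log."""
    lines = [
        "bdev_malloc_create 64 512 -b Malloc0",
        f"nvmf_create_subsystem {nqn} -a",
        f"nvmf_subsystem_add_ns {nqn} Malloc0",
        f"nvmf_subsystem_add_listener {nqn} -t tcp -a {ip} -s {port}",
    ]
    return "\n".join(lines) + "\n"
```

Batching the commands through a single `rpc_cmd` invocation avoids paying the socket-connect cost once per RPC, which matters in CI where hundreds of such setups run per job.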
rpc_addr=/var/tmp/bdevperf.sock 00:27:43.882 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:43.882 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:27:43.882 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:43.882 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:27:43.882 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:43.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:27:43.882 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:27:43.882 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:43.882 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:43.882 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:43.882 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:43.882 { 00:27:43.882 "params": { 00:27:43.882 "name": "Nvme$subsystem", 00:27:43.882 "trtype": "$TEST_TRANSPORT", 00:27:43.882 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:43.882 "adrfam": "ipv4", 00:27:43.882 "trsvcid": "$NVMF_PORT", 00:27:43.882 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:43.882 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:43.882 "hdgst": ${hdgst:-false}, 00:27:43.882 "ddgst": ${ddgst:-false} 00:27:43.882 }, 00:27:43.882 "method": "bdev_nvme_attach_controller" 00:27:43.882 } 00:27:43.882 EOF 00:27:43.882 )") 00:27:43.882 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:27:43.882 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:27:43.882 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:27:43.882 21:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:43.882 "params": { 00:27:43.882 "name": "Nvme0", 00:27:43.882 "trtype": "tcp", 00:27:43.882 "traddr": "10.0.0.2", 00:27:43.882 "adrfam": "ipv4", 00:27:43.882 "trsvcid": "4420", 00:27:43.882 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:43.882 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:43.882 "hdgst": false, 00:27:43.882 "ddgst": false 00:27:43.882 }, 00:27:43.882 "method": "bdev_nvme_attach_controller" 00:27:43.882 }' 00:27:43.882 [2024-11-26 21:08:34.702488] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:27:43.882 [2024-11-26 21:08:34.702578] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4101962 ] 00:27:43.882 [2024-11-26 21:08:34.772500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:44.140 [2024-11-26 21:08:34.832241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:44.140 Running I/O for 10 seconds... 
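The `gen_nvmf_target_json` step above expands a per-subsystem heredoc template (the `Nvme$subsystem` / `$TEST_TRANSPORT` block) and joins the pieces with `jq`/`printf` into the config fed to bdevperf via `--json /dev/fd/63`. A minimal sketch of that expansion, using the substituted values the log actually prints:

```python
import json
from string import Template

# Heredoc template as it appears in nvmf/common.sh@582 above, with the bash
# ${hdgst:-false}/${ddgst:-false} defaults already applied (the log's final
# printf shows both as false).
TEMPLATE = Template("""{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$transport",
    "traddr": "$traddr",
    "adrfam": "ipv4",
    "trsvcid": "$port",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}""")

def gen_target_json(subsystem="0", transport="tcp",
                    traddr="10.0.0.2", port="4420"):
    """Expand the template for one subsystem, as gen_nvmf_target_json 0 does."""
    return json.loads(TEMPLATE.substitute(subsystem=subsystem,
                                          transport=transport,
                                          traddr=traddr, port=port))

cfg = gen_target_json()
```

Passing the config over `/dev/fd/63` (process substitution) lets the script hand bdevperf a complete attach-controller description without ever writing a temp file.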
00:27:44.397 21:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:44.397 21:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:27:44.397 21:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:44.397 21:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.397 21:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:44.397 21:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.397 21:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:44.397 21:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:27:44.397 21:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:44.397 21:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:27:44.397 21:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:27:44.397 21:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:27:44.397 21:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:27:44.397 21:08:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:27:44.397 21:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:27:44.397 21:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.397 21:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:27:44.397 21:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:44.397 21:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.397 21:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:27:44.397 21:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:27:44.397 21:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:27:44.656 21:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:27:44.656 21:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:27:44.656 21:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:27:44.656 21:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:27:44.656 21:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 
00:27:44.656 21:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:44.656 21:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.656 21:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:27:44.656 21:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:27:44.656 21:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:27:44.656 21:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:27:44.656 21:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:27:44.656 21:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:27:44.656 21:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.656 21:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:44.656 [2024-11-26 21:08:35.440830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.656 [2024-11-26 21:08:35.440881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.656 [2024-11-26 21:08:35.440911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.656 [2024-11-26 21:08:35.440929] nvme_qpair.c: 
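The `waitforio` loop that just completed above polls `bdev_get_iostat -b Nvme0n1` up to 10 times, sleeping 0.25 s between attempts, and succeeds once the bdev has served at least 100 reads (the log samples 67, then 515, then breaks). A sketch of that loop, with `get_iostat` standing in for the `rpc_cmd -s /var/tmp/bdevperf.sock` call:

```python
import time

def waitforio(get_iostat, threshold=100, retries=10, delay=0.25):
    """Poll iostat until num_read_ops crosses the threshold, like
    host_management.sh@52-64 above. Returns True on success (ret=0)."""
    for _ in range(retries):
        reads = get_iostat()["bdevs"][0]["num_read_ops"]
        if reads >= threshold:
            return True          # I/O is flowing; the script breaks with ret=0
        time.sleep(delay)        # sleep 0.25 between samples, as in the log
    return False                 # bdevperf never produced enough reads

# Stub reproducing the two samples seen in the log: 67 reads, then 515.
samples = iter([67, 515])
ok = waitforio(lambda: {"bdevs": [{"num_read_ops": next(samples)}]})
```

The threshold-on-read-ops check is what lets the test distinguish "controller attached but idle" from "verify workload actually running" before it proceeds to remove the host from the subsystem.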
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.656 [2024-11-26 21:08:35.440945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.656 [2024-11-26 21:08:35.440961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.656 [2024-11-26 21:08:35.440999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.656 [2024-11-26 21:08:35.441014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.656 [2024-11-26 21:08:35.441030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.656 [2024-11-26 21:08:35.441043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.656 [2024-11-26 21:08:35.441059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.656 [2024-11-26 21:08:35.441072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.657 [2024-11-26 21:08:35.441087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.657 [2024-11-26 21:08:35.441102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.657 [2024-11-26 21:08:35.441117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:47 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.657 [2024-11-26 21:08:35.441131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.657 [2024-11-26 21:08:35.441146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.657 [2024-11-26 21:08:35.441160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.657 [2024-11-26 21:08:35.441176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.657 [2024-11-26 21:08:35.441191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.657 [2024-11-26 21:08:35.441206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.657 [2024-11-26 21:08:35.441220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.657 [2024-11-26 21:08:35.441235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.657 [2024-11-26 21:08:35.441249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.657 [2024-11-26 21:08:35.441264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.657 [2024-11-26 21:08:35.441278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:27:44.657 [2024-11-26 21:08:35.441294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.657 [2024-11-26 21:08:35.441308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.657 [2024-11-26 21:08:35.441323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.657 [2024-11-26 21:08:35.441337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.657 [2024-11-26 21:08:35.441353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.657 [2024-11-26 21:08:35.441371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.657 [2024-11-26 21:08:35.441387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.657 [2024-11-26 21:08:35.441416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.657 [2024-11-26 21:08:35.441432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.657 [2024-11-26 21:08:35.441446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.657 [2024-11-26 21:08:35.441461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.657 [2024-11-26 
21:08:35.441475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.657 [2024-11-26 21:08:35.441490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.657 [2024-11-26 21:08:35.441504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.657 [2024-11-26 21:08:35.441519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.657 [2024-11-26 21:08:35.441533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.657 [2024-11-26 21:08:35.441548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.657 [2024-11-26 21:08:35.441562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.657 [2024-11-26 21:08:35.441577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.657 [2024-11-26 21:08:35.441591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.657 [2024-11-26 21:08:35.441606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.657 [2024-11-26 21:08:35.441620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.657 [2024-11-26 21:08:35.441635] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.657 [2024-11-26 21:08:35.441649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.657 [2024-11-26 21:08:35.441664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.657 [2024-11-26 21:08:35.441682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.657 [2024-11-26 21:08:35.441725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.657 [2024-11-26 21:08:35.441741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.657 [2024-11-26 21:08:35.441756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.657 [2024-11-26 21:08:35.441770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.657 [2024-11-26 21:08:35.441790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.657 [2024-11-26 21:08:35.441805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.657 [2024-11-26 21:08:35.441821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.657 [2024-11-26 21:08:35.441835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:44.657 [2024-11-26 21:08:35.441850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.657 [2024-11-26 21:08:35.441865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated log entries elided: identical nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pairs for READ sqid:1 cid:7 through cid:38 (lba:74624 through lba:78592, len:128), each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:27:44.658 [2024-11-26 21:08:35.442869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.658 [2024-11-26 21:08:35.442883] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.658 [2024-11-26 21:08:35.443074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:44.658 [2024-11-26 21:08:35.443098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.658 [2024-11-26 21:08:35.443114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:44.658 [2024-11-26 21:08:35.443128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.658 [2024-11-26 21:08:35.443146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:44.658 [2024-11-26 21:08:35.443161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.658 [2024-11-26 21:08:35.443175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:44.658 [2024-11-26 21:08:35.443188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.658 [2024-11-26 21:08:35.443201] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x76fa50 is same with the state(6) to be set 00:27:44.658 [2024-11-26 21:08:35.444363] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:44.658 21:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.658 21:08:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:27:44.658 21:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:44.658 21:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:27:44.658 task offset: 78848 on job bdev=Nvme0n1 fails
00:27:44.658
00:27:44.658 Latency(us)
00:27:44.658 [2024-11-26T20:08:35.596Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:44.658 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:44.658 Job: Nvme0n1 ended in about 0.39 seconds with error
00:27:44.658 Verification LBA range: start 0x0 length 0x400
00:27:44.658 Nvme0n1 : 0.39 1477.66 92.35 164.18 0.00 37854.45 2961.26 35729.26
00:27:44.658 [2024-11-26T20:08:35.596Z] ===================================================================================================================
00:27:44.658 [2024-11-26T20:08:35.596Z] Total : 1477.66 92.35 164.18 0.00 37854.45 2961.26 35729.26
00:27:44.658 [2024-11-26 21:08:35.446271] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:27:44.658 [2024-11-26 21:08:35.446310] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x76fa50 (9): Bad file descriptor
00:27:44.658 [2024-11-26 21:08:35.447543] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0'
00:27:44.658 [2024-11-26 21:08:35.447707] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:27:44.658 [2024-11-26 21:08:35.447735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:44.658 
[2024-11-26 21:08:35.447763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:27:44.658 [2024-11-26 21:08:35.447780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:27:44.658 [2024-11-26 21:08:35.447793] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:44.658 [2024-11-26 21:08:35.447806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x76fa50 00:27:44.658 [2024-11-26 21:08:35.447840] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x76fa50 (9): Bad file descriptor 00:27:44.658 [2024-11-26 21:08:35.447865] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:44.658 [2024-11-26 21:08:35.447880] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:44.658 [2024-11-26 21:08:35.447902] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:44.658 [2024-11-26 21:08:35.447917] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:27:44.658 21:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.658 21:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:27:45.589 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 4101962 00:27:45.589 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (4101962) - No such process 00:27:45.589 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:27:45.589 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:27:45.589 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:27:45.589 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:27:45.589 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:27:45.589 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:27:45.589 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:45.589 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:45.589 { 00:27:45.589 "params": { 00:27:45.589 "name": "Nvme$subsystem", 00:27:45.589 "trtype": "$TEST_TRANSPORT", 00:27:45.589 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:27:45.589 "adrfam": "ipv4", 00:27:45.589 "trsvcid": "$NVMF_PORT", 00:27:45.589 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:45.589 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:45.589 "hdgst": ${hdgst:-false}, 00:27:45.589 "ddgst": ${ddgst:-false} 00:27:45.589 }, 00:27:45.589 "method": "bdev_nvme_attach_controller" 00:27:45.589 } 00:27:45.589 EOF 00:27:45.589 )") 00:27:45.589 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:27:45.589 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:27:45.589 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:27:45.589 21:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:45.589 "params": { 00:27:45.589 "name": "Nvme0", 00:27:45.589 "trtype": "tcp", 00:27:45.589 "traddr": "10.0.0.2", 00:27:45.589 "adrfam": "ipv4", 00:27:45.589 "trsvcid": "4420", 00:27:45.589 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:45.589 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:45.589 "hdgst": false, 00:27:45.589 "ddgst": false 00:27:45.589 }, 00:27:45.589 "method": "bdev_nvme_attach_controller" 00:27:45.589 }' 00:27:45.589 [2024-11-26 21:08:36.504275] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:27:45.589 [2024-11-26 21:08:36.504348] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4102221 ] 00:27:45.847 [2024-11-26 21:08:36.573909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:45.847 [2024-11-26 21:08:36.633756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:46.105 Running I/O for 1 seconds... 
00:27:47.480 1557.00 IOPS, 97.31 MiB/s
00:27:47.480 Latency(us)
00:27:47.480 [2024-11-26T20:08:38.418Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:47.480 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:47.480 Verification LBA range: start 0x0 length 0x400
00:27:47.480 Nvme0n1 : 1.05 1539.74 96.23 0.00 0.00 39200.02 3568.07 44273.21
00:27:47.480 [2024-11-26T20:08:38.418Z] ===================================================================================================================
00:27:47.480 [2024-11-26T20:08:38.418Z] Total : 1539.74 96.23 0.00 0.00 39200.02 3568.07 44273.21
00:27:47.480 21:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:27:47.480 21:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:27:47.480 21:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:27:47.480 21:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:27:47.480 21:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:27:47.480 21:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
00:27:47.480 21:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:27:47.480 21:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:27:47.480 21:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:27:47.480 21:08:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:47.480 21:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:47.480 rmmod nvme_tcp 00:27:47.480 rmmod nvme_fabrics 00:27:47.480 rmmod nvme_keyring 00:27:47.480 21:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:47.480 21:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:27:47.480 21:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:27:47.480 21:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 4101909 ']' 00:27:47.480 21:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 4101909 00:27:47.480 21:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 4101909 ']' 00:27:47.480 21:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 4101909 00:27:47.480 21:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:27:47.480 21:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:47.480 21:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4101909 00:27:47.480 21:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:47.480 21:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:47.480 21:08:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4101909' 00:27:47.480 killing process with pid 4101909 00:27:47.480 21:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 4101909 00:27:47.480 21:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 4101909 00:27:47.738 [2024-11-26 21:08:38.593097] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:27:47.738 21:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:47.738 21:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:47.738 21:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:47.738 21:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:27:47.738 21:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:27:47.738 21:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:47.739 21:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:27:47.739 21:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:47.739 21:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:47.739 21:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:47.739 21:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:47.739 21:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:50.273 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:50.273 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:27:50.273 00:27:50.273 real 0m8.772s 00:27:50.273 user 0m17.796s 00:27:50.273 sys 0m3.646s 00:27:50.273 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:50.273 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:50.273 ************************************ 00:27:50.273 END TEST nvmf_host_management 00:27:50.273 ************************************ 00:27:50.273 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:27:50.274 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:50.274 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:50.274 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:50.274 ************************************ 00:27:50.274 START TEST nvmf_lvol 00:27:50.274 ************************************ 00:27:50.274 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:27:50.274 * Looking for test storage... 
00:27:50.274 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:50.274 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:50.274 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:27:50.274 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:50.274 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:50.274 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:50.274 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:50.274 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:50.274 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:27:50.274 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:27:50.274 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:27:50.274 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:27:50.274 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:27:50.274 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:27:50.274 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:27:50.274 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:50.274 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- 
# case "$op" in 00:27:50.274 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:27:50.274 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:50.274 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:50.274 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:27:50.274 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:27:50.274 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:50.274 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:27:50.274 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:27:50.274 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:27:50.274 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:27:50.274 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:50.274 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:27:50.274 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:27:50.274 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:50.274 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:50.274 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:27:50.274 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:50.274 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:50.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:50.274 --rc genhtml_branch_coverage=1 00:27:50.274 --rc genhtml_function_coverage=1 00:27:50.274 --rc genhtml_legend=1 00:27:50.274 --rc geninfo_all_blocks=1 00:27:50.274 --rc geninfo_unexecuted_blocks=1 00:27:50.274 00:27:50.274 ' 00:27:50.274 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:50.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:50.274 --rc genhtml_branch_coverage=1 00:27:50.274 --rc genhtml_function_coverage=1 00:27:50.274 --rc genhtml_legend=1 00:27:50.274 --rc geninfo_all_blocks=1 00:27:50.274 --rc geninfo_unexecuted_blocks=1 00:27:50.274 00:27:50.274 ' 00:27:50.274 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:50.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:50.274 --rc genhtml_branch_coverage=1 00:27:50.274 --rc genhtml_function_coverage=1 00:27:50.274 --rc genhtml_legend=1 00:27:50.274 --rc geninfo_all_blocks=1 00:27:50.274 --rc geninfo_unexecuted_blocks=1 00:27:50.274 00:27:50.274 ' 00:27:50.274 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:50.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:50.274 --rc genhtml_branch_coverage=1 00:27:50.274 --rc genhtml_function_coverage=1 00:27:50.274 --rc genhtml_legend=1 00:27:50.274 --rc geninfo_all_blocks=1 00:27:50.274 --rc geninfo_unexecuted_blocks=1 00:27:50.274 00:27:50.274 ' 00:27:50.274 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:50.274 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:27:50.274 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:50.274 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:50.274 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:50.274 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:50.274 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:50.274 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:50.274 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:50.274 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:50.274 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:50.274 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:50.274 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:50.274 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:50.274 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:50.274 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:27:50.274 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:50.274 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:50.274 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:50.274 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:27:50.274 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:50.274 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:50.274 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:50.274 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.274 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.274 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.274 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:27:50.274 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.274 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:27:50.275 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:50.275 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:50.275 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:50.275 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:50.275 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:50.275 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:50.275 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:50.275 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:50.275 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:50.275 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:50.275 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:50.275 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:50.275 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:27:50.275 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:27:50.275 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:50.275 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:27:50.275 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:50.275 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:50.275 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:50.275 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:50.275 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:50.275 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:50.275 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:50.275 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:50.275 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:50.275 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:50.275 
21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:27:50.275 21:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:27:52.175 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:52.175 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:27:52.175 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:52.175 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:52.175 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:52.175 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:52.175 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:52.175 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:27:52.175 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:52.175 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:27:52.175 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:27:52.175 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:27:52.175 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:27:52.175 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:27:52.175 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:27:52.175 21:08:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:52.175 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:52.175 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:52.175 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:52.175 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:52.175 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:52.175 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:52.175 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:52.175 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:52.175 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:52.175 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:52.175 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:52.175 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:52.175 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:52.175 21:08:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:52.175 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:52.175 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:52.175 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:52.175 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:52.175 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:52.175 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:52.175 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:52.175 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:52.175 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:52.175 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:52.175 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:52.175 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:52.175 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:52.175 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:52.176 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:52.176 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:52.176 21:08:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:52.176 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:52.176 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:52.176 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:52.176 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:52.176 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:52.176 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:52.176 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:52.176 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:52.176 21:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:52.176 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:52.176 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:52.176 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:52.176 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:52.176 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:52.176 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:52.176 21:08:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:52.176 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:52.176 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:52.176 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:52.176 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:52.176 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:52.176 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:52.176 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:52.176 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:52.176 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:52.176 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:52.176 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:27:52.176 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:52.176 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:52.176 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:52.176 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:52.176 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:52.176 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:52.176 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:52.176 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:52.176 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:52.176 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:52.176 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:52.176 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:52.176 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:52.176 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:52.176 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:52.176 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:52.176 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:52.176 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:52.176 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:52.176 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:52.176 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:52.176 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:52.176 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:52.435 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:52.435 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:52.435 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:52.435 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:52.435 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:27:52.435 00:27:52.435 --- 10.0.0.2 ping statistics --- 00:27:52.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:52.435 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:27:52.435 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:52.435 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:52.435 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:27:52.435 00:27:52.435 --- 10.0.0.1 ping statistics --- 00:27:52.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:52.436 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:27:52.436 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:52.436 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:27:52.436 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:52.436 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:52.436 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:52.436 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:52.436 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:52.436 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:52.436 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:52.436 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:27:52.436 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:52.436 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:52.436 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:27:52.436 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=4104319 
00:27:52.436 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:27:52.436 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 4104319 00:27:52.436 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 4104319 ']' 00:27:52.436 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:52.436 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:52.436 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:52.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:52.436 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:52.436 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:27:52.436 [2024-11-26 21:08:43.201054] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:52.436 [2024-11-26 21:08:43.202123] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:27:52.436 [2024-11-26 21:08:43.202177] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:52.436 [2024-11-26 21:08:43.276137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:52.436 [2024-11-26 21:08:43.334156] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:52.436 [2024-11-26 21:08:43.334213] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:52.436 [2024-11-26 21:08:43.334243] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:52.436 [2024-11-26 21:08:43.334255] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:52.436 [2024-11-26 21:08:43.334271] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:52.436 [2024-11-26 21:08:43.335774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:52.436 [2024-11-26 21:08:43.335802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:52.436 [2024-11-26 21:08:43.335805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:52.695 [2024-11-26 21:08:43.426679] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:52.695 [2024-11-26 21:08:43.426892] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:52.695 [2024-11-26 21:08:43.426899] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:27:52.695 [2024-11-26 21:08:43.427197] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:27:52.695 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:52.695 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:27:52.695 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:52.695 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:52.695 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:27:52.695 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:52.695 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:52.953 [2024-11-26 21:08:43.728584] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:52.953 21:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:27:53.211 21:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:27:53.211 21:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:27:53.469 21:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:27:53.469 21:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:27:53.727 21:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:27:54.294 21:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=f459b34a-bc07-4a81-be74-712febf28a65 00:27:54.294 21:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f459b34a-bc07-4a81-be74-712febf28a65 lvol 20 00:27:54.294 21:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=bc638c98-b03e-4362-aef4-821cfa5cde42 00:27:54.294 21:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:27:54.552 21:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bc638c98-b03e-4362-aef4-821cfa5cde42 00:27:55.117 21:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:55.117 [2024-11-26 21:08:46.012805] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:55.117 21:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:55.375 
21:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=4104740 00:27:55.375 21:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:27:55.375 21:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:27:56.749 21:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot bc638c98-b03e-4362-aef4-821cfa5cde42 MY_SNAPSHOT 00:27:56.749 21:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=301fb633-8d21-4d9c-b7df-a5e5bf871acd 00:27:56.750 21:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize bc638c98-b03e-4362-aef4-821cfa5cde42 30 00:27:57.007 21:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 301fb633-8d21-4d9c-b7df-a5e5bf871acd MY_CLONE 00:27:57.574 21:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=3ebbd8d5-ec81-435b-b98d-a18ebd821a79 00:27:57.574 21:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 3ebbd8d5-ec81-435b-b98d-a18ebd821a79 00:27:58.139 21:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 4104740 00:28:06.247 Initializing NVMe Controllers 00:28:06.247 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:28:06.247 
Controller IO queue size 128, less than required. 00:28:06.247 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:06.247 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:28:06.247 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:28:06.247 Initialization complete. Launching workers. 00:28:06.247 ======================================================== 00:28:06.247 Latency(us) 00:28:06.247 Device Information : IOPS MiB/s Average min max 00:28:06.248 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10677.61 41.71 11987.76 3079.83 72856.39 00:28:06.248 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10568.42 41.28 12117.33 2090.16 77948.71 00:28:06.248 ======================================================== 00:28:06.248 Total : 21246.03 82.99 12052.21 2090.16 77948.71 00:28:06.248 00:28:06.248 21:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:06.248 21:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete bc638c98-b03e-4362-aef4-821cfa5cde42 00:28:06.506 21:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f459b34a-bc07-4a81-be74-712febf28a65 00:28:06.764 21:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:28:06.764 21:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:28:06.764 21:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- 
# nvmftestfini 00:28:06.764 21:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:06.764 21:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:28:06.764 21:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:06.764 21:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:28:06.764 21:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:06.764 21:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:06.764 rmmod nvme_tcp 00:28:06.764 rmmod nvme_fabrics 00:28:06.764 rmmod nvme_keyring 00:28:06.764 21:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:06.764 21:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:28:06.764 21:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:28:06.764 21:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 4104319 ']' 00:28:06.764 21:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 4104319 00:28:06.764 21:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 4104319 ']' 00:28:06.764 21:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 4104319 00:28:06.764 21:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:28:06.764 21:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:06.764 21:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 4104319 00:28:06.764 21:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:06.764 21:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:06.764 21:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4104319' 00:28:06.764 killing process with pid 4104319 00:28:06.764 21:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 4104319 00:28:06.764 21:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 4104319 00:28:07.330 21:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:07.330 21:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:07.330 21:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:07.330 21:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:28:07.330 21:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:28:07.330 21:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:07.330 21:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:28:07.330 21:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:07.330 21:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:07.330 21:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:07.330 21:08:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:07.330 21:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:09.236 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:09.236 00:28:09.236 real 0m19.286s 00:28:09.236 user 0m56.776s 00:28:09.236 sys 0m7.580s 00:28:09.236 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:09.236 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:09.236 ************************************ 00:28:09.236 END TEST nvmf_lvol 00:28:09.236 ************************************ 00:28:09.236 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:28:09.236 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:09.236 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:09.236 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:09.236 ************************************ 00:28:09.236 START TEST nvmf_lvs_grow 00:28:09.236 ************************************ 00:28:09.236 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:28:09.236 * Looking for test storage... 
00:28:09.236 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:09.236 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:09.236 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:28:09.236 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:09.495 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:09.495 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:09.495 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:09.495 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:09.495 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:28:09.495 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:28:09.495 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:28:09.495 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:28:09.495 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:28:09.495 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:28:09.495 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:28:09.495 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:09.495 21:09:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:28:09.495 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:28:09.495 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:09.495 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:09.495 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:28:09.495 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:28:09.495 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:09.495 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:28:09.495 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:28:09.495 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:28:09.495 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:28:09.495 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:09.495 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:28:09.495 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:28:09.495 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:09.495 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:09.495 21:09:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:28:09.495 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:09.495 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:09.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:09.495 --rc genhtml_branch_coverage=1 00:28:09.495 --rc genhtml_function_coverage=1 00:28:09.495 --rc genhtml_legend=1 00:28:09.495 --rc geninfo_all_blocks=1 00:28:09.495 --rc geninfo_unexecuted_blocks=1 00:28:09.495 00:28:09.495 ' 00:28:09.495 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:09.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:09.495 --rc genhtml_branch_coverage=1 00:28:09.495 --rc genhtml_function_coverage=1 00:28:09.495 --rc genhtml_legend=1 00:28:09.495 --rc geninfo_all_blocks=1 00:28:09.495 --rc geninfo_unexecuted_blocks=1 00:28:09.495 00:28:09.495 ' 00:28:09.495 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:09.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:09.495 --rc genhtml_branch_coverage=1 00:28:09.495 --rc genhtml_function_coverage=1 00:28:09.495 --rc genhtml_legend=1 00:28:09.495 --rc geninfo_all_blocks=1 00:28:09.495 --rc geninfo_unexecuted_blocks=1 00:28:09.495 00:28:09.495 ' 00:28:09.495 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:09.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:09.495 --rc genhtml_branch_coverage=1 00:28:09.495 --rc genhtml_function_coverage=1 00:28:09.495 --rc genhtml_legend=1 00:28:09.495 --rc geninfo_all_blocks=1 00:28:09.495 --rc 
geninfo_unexecuted_blocks=1 00:28:09.495 00:28:09.495 ' 00:28:09.495 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:09.495 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:28:09.495 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:09.495 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:09.495 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:09.495 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:09.495 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:09.495 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:09.495 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:09.495 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:09.495 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:09.495 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:09.495 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:09.495 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:09.495 21:09:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:09.495 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:09.495 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:09.495 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:09.495 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:09.495 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:28:09.495 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:09.495 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:09.495 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:09.495 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:09.495 21:09:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:09.495 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:09.495 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:28:09.496 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:09.496 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:28:09.496 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:09.496 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:09.496 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:09.496 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:09.496 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:09.496 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:09.496 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:09.496 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:09.496 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:09.496 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:09.496 21:09:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:09.496 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:09.496 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:28:09.496 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:09.496 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:09.496 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:09.496 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:09.496 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:09.496 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:09.496 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:09.496 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:09.496 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:09.496 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:09.496 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:28:09.496 21:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:11.396 
21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:11.396 21:09:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:11.396 21:09:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:11.396 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:11.396 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:11.396 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:11.396 21:09:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:11.396 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:11.396 
21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:11.396 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:11.397 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:11.397 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:11.397 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:11.397 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:11.397 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:28:11.397 00:28:11.397 --- 10.0.0.2 ping statistics --- 00:28:11.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:11.397 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:28:11.397 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:11.397 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:11.397 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:28:11.397 00:28:11.397 --- 10.0.0.1 ping statistics --- 00:28:11.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:11.397 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:28:11.397 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:11.397 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:28:11.397 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:11.397 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:11.397 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:11.397 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:11.397 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:11.397 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:11.397 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:11.656 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:28:11.656 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:11.656 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:11.656 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:11.656 21:09:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=4108113 00:28:11.656 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:28:11.656 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 4108113 00:28:11.656 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 4108113 ']' 00:28:11.656 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:11.656 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:11.656 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:11.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:11.656 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:11.656 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:11.656 [2024-11-26 21:09:02.384078] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:11.656 [2024-11-26 21:09:02.385084] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:28:11.656 [2024-11-26 21:09:02.385147] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:11.656 [2024-11-26 21:09:02.458959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:11.656 [2024-11-26 21:09:02.524265] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:11.656 [2024-11-26 21:09:02.524339] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:11.656 [2024-11-26 21:09:02.524353] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:11.656 [2024-11-26 21:09:02.524365] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:11.656 [2024-11-26 21:09:02.524375] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:11.656 [2024-11-26 21:09:02.525025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:11.915 [2024-11-26 21:09:02.621862] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:11.915 [2024-11-26 21:09:02.622184] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:28:11.915 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:11.915 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:28:11.915 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:11.915 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:11.915 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:11.915 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:11.915 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:12.173 [2024-11-26 21:09:02.941671] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:12.173 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:28:12.173 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:12.173 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:12.173 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:12.173 ************************************ 00:28:12.173 START TEST lvs_grow_clean 00:28:12.173 ************************************ 00:28:12.173 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:28:12.173 21:09:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:28:12.173 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:28:12.173 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:28:12.173 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:28:12.173 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:28:12.173 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:28:12.173 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:12.173 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:12.173 21:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:28:12.431 21:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:28:12.432 21:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:28:12.690 21:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=e36b8e31-05fa-4133-ab3b-90117d0f6561 00:28:12.690 21:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e36b8e31-05fa-4133-ab3b-90117d0f6561 00:28:12.690 21:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:28:12.963 21:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:28:12.963 21:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:28:12.963 21:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e36b8e31-05fa-4133-ab3b-90117d0f6561 lvol 150 00:28:13.262 21:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=63e774e5-62ba-4d8e-b9f8-cd72fc05819a 00:28:13.262 21:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:13.262 21:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:28:13.540 [2024-11-26 21:09:04.405577] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:28:13.540 [2024-11-26 21:09:04.405741] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:28:13.540 true 00:28:13.540 21:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e36b8e31-05fa-4133-ab3b-90117d0f6561 00:28:13.540 21:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:28:13.800 21:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:28:13.800 21:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:14.366 21:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 63e774e5-62ba-4d8e-b9f8-cd72fc05819a 00:28:14.366 21:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:14.625 [2024-11-26 21:09:05.557933] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:14.883 21:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:15.141 21:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=4108553 00:28:15.141 21:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:28:15.141 21:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:15.141 21:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 4108553 /var/tmp/bdevperf.sock 00:28:15.141 21:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 4108553 ']' 00:28:15.141 21:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:15.141 21:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:15.141 21:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:15.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:28:15.141 21:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:15.141 21:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:28:15.141 [2024-11-26 21:09:05.899444] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:28:15.141 [2024-11-26 21:09:05.899535] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4108553 ] 00:28:15.141 [2024-11-26 21:09:05.970051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:15.141 [2024-11-26 21:09:06.032165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:15.399 21:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:15.399 21:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:28:15.399 21:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:28:15.657 Nvme0n1 00:28:15.915 21:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:28:16.173 [ 00:28:16.173 { 00:28:16.173 "name": "Nvme0n1", 00:28:16.173 "aliases": [ 00:28:16.173 "63e774e5-62ba-4d8e-b9f8-cd72fc05819a" 00:28:16.173 ], 00:28:16.173 "product_name": "NVMe disk", 00:28:16.173 
"block_size": 4096, 00:28:16.173 "num_blocks": 38912, 00:28:16.173 "uuid": "63e774e5-62ba-4d8e-b9f8-cd72fc05819a", 00:28:16.173 "numa_id": 0, 00:28:16.173 "assigned_rate_limits": { 00:28:16.173 "rw_ios_per_sec": 0, 00:28:16.173 "rw_mbytes_per_sec": 0, 00:28:16.173 "r_mbytes_per_sec": 0, 00:28:16.173 "w_mbytes_per_sec": 0 00:28:16.173 }, 00:28:16.173 "claimed": false, 00:28:16.173 "zoned": false, 00:28:16.173 "supported_io_types": { 00:28:16.173 "read": true, 00:28:16.173 "write": true, 00:28:16.173 "unmap": true, 00:28:16.173 "flush": true, 00:28:16.173 "reset": true, 00:28:16.173 "nvme_admin": true, 00:28:16.173 "nvme_io": true, 00:28:16.173 "nvme_io_md": false, 00:28:16.173 "write_zeroes": true, 00:28:16.173 "zcopy": false, 00:28:16.173 "get_zone_info": false, 00:28:16.173 "zone_management": false, 00:28:16.173 "zone_append": false, 00:28:16.173 "compare": true, 00:28:16.173 "compare_and_write": true, 00:28:16.173 "abort": true, 00:28:16.173 "seek_hole": false, 00:28:16.173 "seek_data": false, 00:28:16.173 "copy": true, 00:28:16.173 "nvme_iov_md": false 00:28:16.173 }, 00:28:16.173 "memory_domains": [ 00:28:16.173 { 00:28:16.173 "dma_device_id": "system", 00:28:16.173 "dma_device_type": 1 00:28:16.173 } 00:28:16.173 ], 00:28:16.173 "driver_specific": { 00:28:16.173 "nvme": [ 00:28:16.173 { 00:28:16.173 "trid": { 00:28:16.173 "trtype": "TCP", 00:28:16.173 "adrfam": "IPv4", 00:28:16.173 "traddr": "10.0.0.2", 00:28:16.173 "trsvcid": "4420", 00:28:16.173 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:16.173 }, 00:28:16.173 "ctrlr_data": { 00:28:16.173 "cntlid": 1, 00:28:16.173 "vendor_id": "0x8086", 00:28:16.173 "model_number": "SPDK bdev Controller", 00:28:16.173 "serial_number": "SPDK0", 00:28:16.173 "firmware_revision": "25.01", 00:28:16.173 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:16.173 "oacs": { 00:28:16.173 "security": 0, 00:28:16.173 "format": 0, 00:28:16.173 "firmware": 0, 00:28:16.173 "ns_manage": 0 00:28:16.173 }, 00:28:16.173 "multi_ctrlr": true, 
00:28:16.173 "ana_reporting": false 00:28:16.173 }, 00:28:16.173 "vs": { 00:28:16.173 "nvme_version": "1.3" 00:28:16.173 }, 00:28:16.173 "ns_data": { 00:28:16.173 "id": 1, 00:28:16.173 "can_share": true 00:28:16.173 } 00:28:16.173 } 00:28:16.173 ], 00:28:16.173 "mp_policy": "active_passive" 00:28:16.173 } 00:28:16.173 } 00:28:16.173 ] 00:28:16.173 21:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=4108690 00:28:16.173 21:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:28:16.173 21:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:16.173 Running I/O for 10 seconds... 00:28:17.547 Latency(us) 00:28:17.547 [2024-11-26T20:09:08.485Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:17.547 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:17.547 Nvme0n1 : 1.00 13843.00 54.07 0.00 0.00 0.00 0.00 0.00 00:28:17.547 [2024-11-26T20:09:08.485Z] =================================================================================================================== 00:28:17.547 [2024-11-26T20:09:08.485Z] Total : 13843.00 54.07 0.00 0.00 0.00 0.00 0.00 00:28:17.547 00:28:18.113 21:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e36b8e31-05fa-4133-ab3b-90117d0f6561 00:28:18.373 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:18.373 Nvme0n1 : 2.00 14351.00 56.06 0.00 0.00 0.00 0.00 0.00 00:28:18.373 [2024-11-26T20:09:09.311Z] 
=================================================================================================================== 00:28:18.373 [2024-11-26T20:09:09.311Z] Total : 14351.00 56.06 0.00 0.00 0.00 0.00 0.00 00:28:18.373 00:28:18.373 true 00:28:18.373 21:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e36b8e31-05fa-4133-ab3b-90117d0f6561 00:28:18.373 21:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:28:18.631 21:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:28:18.631 21:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:28:18.631 21:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 4108690 00:28:19.197 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:19.197 Nvme0n1 : 3.00 14308.67 55.89 0.00 0.00 0.00 0.00 0.00 00:28:19.197 [2024-11-26T20:09:10.135Z] =================================================================================================================== 00:28:19.197 [2024-11-26T20:09:10.135Z] Total : 14308.67 55.89 0.00 0.00 0.00 0.00 0.00 00:28:19.197 00:28:20.131 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:20.131 Nvme0n1 : 4.00 14509.75 56.68 0.00 0.00 0.00 0.00 0.00 00:28:20.131 [2024-11-26T20:09:11.069Z] =================================================================================================================== 00:28:20.131 [2024-11-26T20:09:11.069Z] Total : 14509.75 56.68 0.00 0.00 0.00 0.00 0.00 00:28:20.131 00:28:21.502 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:28:21.502 Nvme0n1 : 5.00 14478.00 56.55 0.00 0.00 0.00 0.00 0.00 00:28:21.502 [2024-11-26T20:09:12.440Z] =================================================================================================================== 00:28:21.502 [2024-11-26T20:09:12.440Z] Total : 14478.00 56.55 0.00 0.00 0.00 0.00 0.00 00:28:21.502 00:28:22.437 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:22.437 Nvme0n1 : 6.00 14467.50 56.51 0.00 0.00 0.00 0.00 0.00 00:28:22.437 [2024-11-26T20:09:13.375Z] =================================================================================================================== 00:28:22.437 [2024-11-26T20:09:13.375Z] Total : 14467.50 56.51 0.00 0.00 0.00 0.00 0.00 00:28:22.437 00:28:23.369 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:23.369 Nvme0n1 : 7.00 14459.86 56.48 0.00 0.00 0.00 0.00 0.00 00:28:23.369 [2024-11-26T20:09:14.307Z] =================================================================================================================== 00:28:23.369 [2024-11-26T20:09:14.307Z] Total : 14459.86 56.48 0.00 0.00 0.00 0.00 0.00 00:28:23.369 00:28:24.302 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:24.302 Nvme0n1 : 8.00 14470.12 56.52 0.00 0.00 0.00 0.00 0.00 00:28:24.302 [2024-11-26T20:09:15.240Z] =================================================================================================================== 00:28:24.302 [2024-11-26T20:09:15.240Z] Total : 14470.12 56.52 0.00 0.00 0.00 0.00 0.00 00:28:24.302 00:28:25.234 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:25.234 Nvme0n1 : 9.00 14478.11 56.56 0.00 0.00 0.00 0.00 0.00 00:28:25.234 [2024-11-26T20:09:16.172Z] =================================================================================================================== 00:28:25.234 [2024-11-26T20:09:16.172Z] Total : 14478.11 56.56 0.00 0.00 0.00 0.00 0.00 00:28:25.234 
00:28:26.166 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:26.166 Nvme0n1 : 10.00 14471.70 56.53 0.00 0.00 0.00 0.00 0.00 00:28:26.166 [2024-11-26T20:09:17.104Z] =================================================================================================================== 00:28:26.166 [2024-11-26T20:09:17.104Z] Total : 14471.70 56.53 0.00 0.00 0.00 0.00 0.00 00:28:26.166 00:28:26.166 00:28:26.166 Latency(us) 00:28:26.166 [2024-11-26T20:09:17.104Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:26.166 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:26.166 Nvme0n1 : 10.01 14470.49 56.53 0.00 0.00 8840.47 5558.42 18641.35 00:28:26.166 [2024-11-26T20:09:17.104Z] =================================================================================================================== 00:28:26.166 [2024-11-26T20:09:17.104Z] Total : 14470.49 56.53 0.00 0.00 8840.47 5558.42 18641.35 00:28:26.166 { 00:28:26.166 "results": [ 00:28:26.166 { 00:28:26.166 "job": "Nvme0n1", 00:28:26.166 "core_mask": "0x2", 00:28:26.166 "workload": "randwrite", 00:28:26.166 "status": "finished", 00:28:26.166 "queue_depth": 128, 00:28:26.166 "io_size": 4096, 00:28:26.166 "runtime": 10.00968, 00:28:26.166 "iops": 14470.492563198824, 00:28:26.166 "mibps": 56.52536157499541, 00:28:26.166 "io_failed": 0, 00:28:26.166 "io_timeout": 0, 00:28:26.166 "avg_latency_us": 8840.468256115413, 00:28:26.166 "min_latency_us": 5558.423703703704, 00:28:26.166 "max_latency_us": 18641.35111111111 00:28:26.166 } 00:28:26.166 ], 00:28:26.166 "core_count": 1 00:28:26.166 } 00:28:26.166 21:09:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 4108553 00:28:26.166 21:09:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 4108553 ']' 00:28:26.166 21:09:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 4108553 00:28:26.166 21:09:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:28:26.166 21:09:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:26.166 21:09:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4108553 00:28:26.423 21:09:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:26.423 21:09:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:26.423 21:09:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4108553' 00:28:26.423 killing process with pid 4108553 00:28:26.423 21:09:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 4108553 00:28:26.423 Received shutdown signal, test time was about 10.000000 seconds 00:28:26.423 00:28:26.423 Latency(us) 00:28:26.423 [2024-11-26T20:09:17.361Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:26.423 [2024-11-26T20:09:17.361Z] =================================================================================================================== 00:28:26.423 [2024-11-26T20:09:17.361Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:26.423 21:09:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 4108553 00:28:26.681 21:09:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:26.940 21:09:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:27.198 21:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e36b8e31-05fa-4133-ab3b-90117d0f6561 00:28:27.198 21:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:28:27.456 21:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:28:27.456 21:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:28:27.456 21:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:28:27.714 [2024-11-26 21:09:18.577621] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:28:27.714 21:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e36b8e31-05fa-4133-ab3b-90117d0f6561 00:28:27.714 21:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:28:27.714 21:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e36b8e31-05fa-4133-ab3b-90117d0f6561 00:28:27.714 21:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:27.714 21:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:27.714 21:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:27.714 21:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:27.714 21:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:27.714 21:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:27.714 21:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:27.714 21:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:28:27.714 21:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e36b8e31-05fa-4133-ab3b-90117d0f6561 00:28:27.972 request: 00:28:27.972 { 00:28:27.972 "uuid": "e36b8e31-05fa-4133-ab3b-90117d0f6561", 00:28:27.972 "method": 
"bdev_lvol_get_lvstores", 00:28:27.972 "req_id": 1 00:28:27.972 } 00:28:27.972 Got JSON-RPC error response 00:28:27.972 response: 00:28:27.972 { 00:28:27.972 "code": -19, 00:28:27.972 "message": "No such device" 00:28:27.972 } 00:28:27.972 21:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:28:27.972 21:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:27.972 21:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:27.972 21:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:27.972 21:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:28:28.230 aio_bdev 00:28:28.489 21:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 63e774e5-62ba-4d8e-b9f8-cd72fc05819a 00:28:28.489 21:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=63e774e5-62ba-4d8e-b9f8-cd72fc05819a 00:28:28.489 21:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:28:28.489 21:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:28:28.489 21:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:28:28.489 21:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:28:28.489 21:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:28.747 21:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 63e774e5-62ba-4d8e-b9f8-cd72fc05819a -t 2000 00:28:29.005 [ 00:28:29.005 { 00:28:29.005 "name": "63e774e5-62ba-4d8e-b9f8-cd72fc05819a", 00:28:29.005 "aliases": [ 00:28:29.005 "lvs/lvol" 00:28:29.005 ], 00:28:29.005 "product_name": "Logical Volume", 00:28:29.005 "block_size": 4096, 00:28:29.005 "num_blocks": 38912, 00:28:29.005 "uuid": "63e774e5-62ba-4d8e-b9f8-cd72fc05819a", 00:28:29.005 "assigned_rate_limits": { 00:28:29.005 "rw_ios_per_sec": 0, 00:28:29.005 "rw_mbytes_per_sec": 0, 00:28:29.005 "r_mbytes_per_sec": 0, 00:28:29.005 "w_mbytes_per_sec": 0 00:28:29.005 }, 00:28:29.005 "claimed": false, 00:28:29.005 "zoned": false, 00:28:29.005 "supported_io_types": { 00:28:29.005 "read": true, 00:28:29.005 "write": true, 00:28:29.005 "unmap": true, 00:28:29.005 "flush": false, 00:28:29.005 "reset": true, 00:28:29.005 "nvme_admin": false, 00:28:29.005 "nvme_io": false, 00:28:29.005 "nvme_io_md": false, 00:28:29.005 "write_zeroes": true, 00:28:29.005 "zcopy": false, 00:28:29.005 "get_zone_info": false, 00:28:29.005 "zone_management": false, 00:28:29.005 "zone_append": false, 00:28:29.005 "compare": false, 00:28:29.005 "compare_and_write": false, 00:28:29.005 "abort": false, 00:28:29.005 "seek_hole": true, 00:28:29.005 "seek_data": true, 00:28:29.005 "copy": false, 00:28:29.005 "nvme_iov_md": false 00:28:29.005 }, 00:28:29.005 "driver_specific": { 00:28:29.005 "lvol": { 00:28:29.005 "lvol_store_uuid": "e36b8e31-05fa-4133-ab3b-90117d0f6561", 00:28:29.005 "base_bdev": "aio_bdev", 00:28:29.005 
"thin_provision": false, 00:28:29.005 "num_allocated_clusters": 38, 00:28:29.005 "snapshot": false, 00:28:29.005 "clone": false, 00:28:29.005 "esnap_clone": false 00:28:29.005 } 00:28:29.005 } 00:28:29.005 } 00:28:29.005 ] 00:28:29.005 21:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:28:29.005 21:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e36b8e31-05fa-4133-ab3b-90117d0f6561 00:28:29.005 21:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:28:29.263 21:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:28:29.263 21:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e36b8e31-05fa-4133-ab3b-90117d0f6561 00:28:29.263 21:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:28:29.521 21:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:28:29.521 21:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 63e774e5-62ba-4d8e-b9f8-cd72fc05819a 00:28:29.780 21:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e36b8e31-05fa-4133-ab3b-90117d0f6561 
00:28:30.038 21:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:28:30.296 21:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:30.555 00:28:30.555 real 0m18.262s 00:28:30.555 user 0m17.955s 00:28:30.555 sys 0m1.815s 00:28:30.555 21:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:30.555 21:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:28:30.555 ************************************ 00:28:30.555 END TEST lvs_grow_clean 00:28:30.555 ************************************ 00:28:30.555 21:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:28:30.555 21:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:30.555 21:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:30.555 21:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:30.555 ************************************ 00:28:30.555 START TEST lvs_grow_dirty 00:28:30.555 ************************************ 00:28:30.555 21:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:28:30.555 21:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:28:30.555 21:09:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:28:30.555 21:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:28:30.555 21:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:28:30.555 21:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:28:30.555 21:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:28:30.555 21:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:30.555 21:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:30.555 21:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:28:30.814 21:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:28:30.814 21:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:28:31.073 21:09:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=68fdcbb3-c021-4faf-863d-eeaf26995611 00:28:31.073 21:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 68fdcbb3-c021-4faf-863d-eeaf26995611 00:28:31.073 21:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:28:31.332 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:28:31.332 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:28:31.332 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 68fdcbb3-c021-4faf-863d-eeaf26995611 lvol 150 00:28:31.590 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=9317eac4-5c4f-4fcd-b54b-617c4c58457e 00:28:31.590 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:31.590 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:28:31.848 [2024-11-26 21:09:22.737577] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:28:31.848 [2024-11-26 
21:09:22.737722] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:28:31.848 true 00:28:31.848 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 68fdcbb3-c021-4faf-863d-eeaf26995611 00:28:31.849 21:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:28:32.415 21:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:28:32.415 21:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:32.415 21:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9317eac4-5c4f-4fcd-b54b-617c4c58457e 00:28:32.673 21:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:32.932 [2024-11-26 21:09:23.866002] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:33.191 21:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:33.450 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=4111221 00:28:33.450 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:28:33.451 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:33.451 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 4111221 /var/tmp/bdevperf.sock 00:28:33.451 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 4111221 ']' 00:28:33.451 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:33.451 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:33.451 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:33.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:33.451 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:33.451 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:28:33.451 [2024-11-26 21:09:24.255053] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:28:33.451 [2024-11-26 21:09:24.255146] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4111221 ] 00:28:33.451 [2024-11-26 21:09:24.321717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:33.451 [2024-11-26 21:09:24.381215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:33.709 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:33.709 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:28:33.709 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:28:33.967 Nvme0n1 00:28:33.967 21:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:28:34.225 [ 00:28:34.225 { 00:28:34.225 "name": "Nvme0n1", 00:28:34.225 "aliases": [ 00:28:34.225 "9317eac4-5c4f-4fcd-b54b-617c4c58457e" 00:28:34.225 ], 00:28:34.225 "product_name": "NVMe disk", 00:28:34.225 "block_size": 4096, 00:28:34.225 "num_blocks": 38912, 00:28:34.225 "uuid": "9317eac4-5c4f-4fcd-b54b-617c4c58457e", 00:28:34.225 "numa_id": 0, 00:28:34.225 "assigned_rate_limits": { 00:28:34.225 "rw_ios_per_sec": 0, 00:28:34.225 "rw_mbytes_per_sec": 0, 00:28:34.225 "r_mbytes_per_sec": 0, 00:28:34.225 "w_mbytes_per_sec": 0 00:28:34.225 }, 00:28:34.225 "claimed": false, 00:28:34.225 "zoned": false, 
00:28:34.225 "supported_io_types": { 00:28:34.225 "read": true, 00:28:34.225 "write": true, 00:28:34.225 "unmap": true, 00:28:34.225 "flush": true, 00:28:34.225 "reset": true, 00:28:34.225 "nvme_admin": true, 00:28:34.225 "nvme_io": true, 00:28:34.225 "nvme_io_md": false, 00:28:34.225 "write_zeroes": true, 00:28:34.225 "zcopy": false, 00:28:34.225 "get_zone_info": false, 00:28:34.225 "zone_management": false, 00:28:34.225 "zone_append": false, 00:28:34.225 "compare": true, 00:28:34.225 "compare_and_write": true, 00:28:34.225 "abort": true, 00:28:34.225 "seek_hole": false, 00:28:34.225 "seek_data": false, 00:28:34.225 "copy": true, 00:28:34.225 "nvme_iov_md": false 00:28:34.225 }, 00:28:34.225 "memory_domains": [ 00:28:34.225 { 00:28:34.225 "dma_device_id": "system", 00:28:34.225 "dma_device_type": 1 00:28:34.225 } 00:28:34.225 ], 00:28:34.225 "driver_specific": { 00:28:34.225 "nvme": [ 00:28:34.225 { 00:28:34.225 "trid": { 00:28:34.225 "trtype": "TCP", 00:28:34.225 "adrfam": "IPv4", 00:28:34.225 "traddr": "10.0.0.2", 00:28:34.225 "trsvcid": "4420", 00:28:34.225 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:34.225 }, 00:28:34.225 "ctrlr_data": { 00:28:34.225 "cntlid": 1, 00:28:34.225 "vendor_id": "0x8086", 00:28:34.225 "model_number": "SPDK bdev Controller", 00:28:34.225 "serial_number": "SPDK0", 00:28:34.225 "firmware_revision": "25.01", 00:28:34.225 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:34.225 "oacs": { 00:28:34.225 "security": 0, 00:28:34.225 "format": 0, 00:28:34.225 "firmware": 0, 00:28:34.225 "ns_manage": 0 00:28:34.225 }, 00:28:34.225 "multi_ctrlr": true, 00:28:34.225 "ana_reporting": false 00:28:34.225 }, 00:28:34.225 "vs": { 00:28:34.225 "nvme_version": "1.3" 00:28:34.225 }, 00:28:34.225 "ns_data": { 00:28:34.225 "id": 1, 00:28:34.225 "can_share": true 00:28:34.225 } 00:28:34.225 } 00:28:34.225 ], 00:28:34.225 "mp_policy": "active_passive" 00:28:34.225 } 00:28:34.225 } 00:28:34.225 ] 00:28:34.225 21:09:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=4111358 00:28:34.225 21:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:34.225 21:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:28:34.483 Running I/O for 10 seconds... 00:28:35.417 Latency(us) 00:28:35.417 [2024-11-26T20:09:26.355Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:35.417 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:35.417 Nvme0n1 : 1.00 13843.00 54.07 0.00 0.00 0.00 0.00 0.00 00:28:35.417 [2024-11-26T20:09:26.355Z] =================================================================================================================== 00:28:35.417 [2024-11-26T20:09:26.355Z] Total : 13843.00 54.07 0.00 0.00 0.00 0.00 0.00 00:28:35.417 00:28:36.351 21:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 68fdcbb3-c021-4faf-863d-eeaf26995611 00:28:36.351 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:36.351 Nvme0n1 : 2.00 13970.00 54.57 0.00 0.00 0.00 0.00 0.00 00:28:36.351 [2024-11-26T20:09:27.289Z] =================================================================================================================== 00:28:36.351 [2024-11-26T20:09:27.289Z] Total : 13970.00 54.57 0.00 0.00 0.00 0.00 0.00 00:28:36.351 00:28:36.616 true 00:28:36.616 21:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 68fdcbb3-c021-4faf-863d-eeaf26995611 00:28:36.616 21:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:28:36.875 21:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:28:36.875 21:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:28:36.875 21:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 4111358 00:28:37.442 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:37.442 Nvme0n1 : 3.00 14054.67 54.90 0.00 0.00 0.00 0.00 0.00 00:28:37.442 [2024-11-26T20:09:28.380Z] =================================================================================================================== 00:28:37.442 [2024-11-26T20:09:28.380Z] Total : 14054.67 54.90 0.00 0.00 0.00 0.00 0.00 00:28:37.442 00:28:38.376 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:38.376 Nvme0n1 : 4.00 14097.00 55.07 0.00 0.00 0.00 0.00 0.00 00:28:38.376 [2024-11-26T20:09:29.314Z] =================================================================================================================== 00:28:38.376 [2024-11-26T20:09:29.314Z] Total : 14097.00 55.07 0.00 0.00 0.00 0.00 0.00 00:28:38.376 00:28:39.309 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:39.309 Nvme0n1 : 5.00 14097.00 55.07 0.00 0.00 0.00 0.00 0.00 00:28:39.309 [2024-11-26T20:09:30.247Z] =================================================================================================================== 00:28:39.309 [2024-11-26T20:09:30.247Z] Total : 14097.00 55.07 0.00 0.00 0.00 0.00 0.00 00:28:39.309 00:28:40.682 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:28:40.682 Nvme0n1 : 6.00 14150.00 55.27 0.00 0.00 0.00 0.00 0.00 00:28:40.682 [2024-11-26T20:09:31.620Z] =================================================================================================================== 00:28:40.682 [2024-11-26T20:09:31.620Z] Total : 14150.00 55.27 0.00 0.00 0.00 0.00 0.00 00:28:40.682 00:28:41.615 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:41.615 Nvme0n1 : 7.00 14190.14 55.43 0.00 0.00 0.00 0.00 0.00 00:28:41.615 [2024-11-26T20:09:32.553Z] =================================================================================================================== 00:28:41.615 [2024-11-26T20:09:32.553Z] Total : 14190.14 55.43 0.00 0.00 0.00 0.00 0.00 00:28:41.615 00:28:42.549 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:42.549 Nvme0n1 : 8.00 14234.12 55.60 0.00 0.00 0.00 0.00 0.00 00:28:42.549 [2024-11-26T20:09:33.487Z] =================================================================================================================== 00:28:42.549 [2024-11-26T20:09:33.487Z] Total : 14234.12 55.60 0.00 0.00 0.00 0.00 0.00 00:28:42.549 00:28:43.514 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:43.514 Nvme0n1 : 9.00 14268.22 55.74 0.00 0.00 0.00 0.00 0.00 00:28:43.514 [2024-11-26T20:09:34.452Z] =================================================================================================================== 00:28:43.514 [2024-11-26T20:09:34.452Z] Total : 14268.22 55.74 0.00 0.00 0.00 0.00 0.00 00:28:43.514 00:28:44.477 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:44.477 Nvme0n1 : 10.00 14276.50 55.77 0.00 0.00 0.00 0.00 0.00 00:28:44.477 [2024-11-26T20:09:35.415Z] =================================================================================================================== 00:28:44.477 [2024-11-26T20:09:35.415Z] Total : 14276.50 55.77 0.00 0.00 0.00 0.00 0.00 00:28:44.477 00:28:44.477 
00:28:44.477 Latency(us) 00:28:44.477 [2024-11-26T20:09:35.415Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:44.477 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:44.477 Nvme0n1 : 10.01 14281.77 55.79 0.00 0.00 8957.60 4636.07 22233.69 00:28:44.477 [2024-11-26T20:09:35.415Z] =================================================================================================================== 00:28:44.477 [2024-11-26T20:09:35.415Z] Total : 14281.77 55.79 0.00 0.00 8957.60 4636.07 22233.69 00:28:44.477 { 00:28:44.477 "results": [ 00:28:44.477 { 00:28:44.477 "job": "Nvme0n1", 00:28:44.477 "core_mask": "0x2", 00:28:44.477 "workload": "randwrite", 00:28:44.477 "status": "finished", 00:28:44.477 "queue_depth": 128, 00:28:44.477 "io_size": 4096, 00:28:44.477 "runtime": 10.005275, 00:28:44.477 "iops": 14281.766368240753, 00:28:44.477 "mibps": 55.78814987594044, 00:28:44.477 "io_failed": 0, 00:28:44.478 "io_timeout": 0, 00:28:44.478 "avg_latency_us": 8957.59549794187, 00:28:44.478 "min_latency_us": 4636.065185185185, 00:28:44.478 "max_latency_us": 22233.694814814815 00:28:44.478 } 00:28:44.478 ], 00:28:44.478 "core_count": 1 00:28:44.478 } 00:28:44.478 21:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 4111221 00:28:44.478 21:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 4111221 ']' 00:28:44.478 21:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 4111221 00:28:44.478 21:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:28:44.478 21:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:44.478 21:09:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4111221 00:28:44.478 21:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:44.478 21:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:44.478 21:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4111221' 00:28:44.478 killing process with pid 4111221 00:28:44.478 21:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 4111221 00:28:44.478 Received shutdown signal, test time was about 10.000000 seconds 00:28:44.478 00:28:44.478 Latency(us) 00:28:44.478 [2024-11-26T20:09:35.416Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:44.478 [2024-11-26T20:09:35.416Z] =================================================================================================================== 00:28:44.478 [2024-11-26T20:09:35.416Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:44.478 21:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 4111221 00:28:44.737 21:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:44.995 21:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:45.562 21:09:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:28:45.562 21:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 68fdcbb3-c021-4faf-863d-eeaf26995611 00:28:45.820 21:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:28:45.820 21:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:28:45.820 21:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 4108113 00:28:45.820 21:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 4108113 00:28:45.821 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 4108113 Killed "${NVMF_APP[@]}" "$@" 00:28:45.821 21:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:28:45.821 21:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:28:45.821 21:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:45.821 21:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:45.821 21:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:28:45.821 21:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=4112677 00:28:45.821 21:09:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:28:45.821 21:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 4112677 00:28:45.821 21:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 4112677 ']' 00:28:45.821 21:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:45.821 21:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:45.821 21:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:45.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:45.821 21:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:45.821 21:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:28:45.821 [2024-11-26 21:09:36.614548] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:45.821 [2024-11-26 21:09:36.615701] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:28:45.821 [2024-11-26 21:09:36.615780] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:45.821 [2024-11-26 21:09:36.689695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:45.821 [2024-11-26 21:09:36.744914] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:45.821 [2024-11-26 21:09:36.744975] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:45.821 [2024-11-26 21:09:36.744988] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:45.821 [2024-11-26 21:09:36.744999] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:45.821 [2024-11-26 21:09:36.745009] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:45.821 [2024-11-26 21:09:36.745567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:46.079 [2024-11-26 21:09:36.834083] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:46.079 [2024-11-26 21:09:36.834430] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:28:46.079 21:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:46.079 21:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:28:46.079 21:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:46.079 21:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:46.079 21:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:28:46.079 21:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:46.079 21:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:28:46.338 [2024-11-26 21:09:37.144248] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:28:46.338 [2024-11-26 21:09:37.144406] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:28:46.338 [2024-11-26 21:09:37.144464] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:28:46.338 21:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:28:46.338 21:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 9317eac4-5c4f-4fcd-b54b-617c4c58457e 00:28:46.338 21:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=9317eac4-5c4f-4fcd-b54b-617c4c58457e 00:28:46.338 21:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:28:46.338 21:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:28:46.338 21:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:28:46.338 21:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:28:46.338 21:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:46.596 21:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 9317eac4-5c4f-4fcd-b54b-617c4c58457e -t 2000 00:28:46.854 [ 00:28:46.854 { 00:28:46.854 "name": "9317eac4-5c4f-4fcd-b54b-617c4c58457e", 00:28:46.854 "aliases": [ 00:28:46.854 "lvs/lvol" 00:28:46.854 ], 00:28:46.854 "product_name": "Logical Volume", 00:28:46.854 "block_size": 4096, 00:28:46.854 "num_blocks": 38912, 00:28:46.854 "uuid": "9317eac4-5c4f-4fcd-b54b-617c4c58457e", 00:28:46.854 "assigned_rate_limits": { 00:28:46.854 "rw_ios_per_sec": 0, 00:28:46.854 "rw_mbytes_per_sec": 0, 00:28:46.854 "r_mbytes_per_sec": 0, 00:28:46.854 "w_mbytes_per_sec": 0 00:28:46.854 }, 00:28:46.854 "claimed": false, 00:28:46.854 "zoned": false, 00:28:46.854 "supported_io_types": { 00:28:46.854 "read": true, 00:28:46.854 "write": true, 00:28:46.854 "unmap": true, 00:28:46.854 "flush": false, 00:28:46.854 "reset": true, 00:28:46.854 "nvme_admin": false, 00:28:46.854 "nvme_io": false, 00:28:46.854 "nvme_io_md": false, 00:28:46.854 "write_zeroes": true, 
00:28:46.854 "zcopy": false, 00:28:46.854 "get_zone_info": false, 00:28:46.854 "zone_management": false, 00:28:46.854 "zone_append": false, 00:28:46.854 "compare": false, 00:28:46.854 "compare_and_write": false, 00:28:46.854 "abort": false, 00:28:46.854 "seek_hole": true, 00:28:46.854 "seek_data": true, 00:28:46.854 "copy": false, 00:28:46.854 "nvme_iov_md": false 00:28:46.854 }, 00:28:46.854 "driver_specific": { 00:28:46.854 "lvol": { 00:28:46.854 "lvol_store_uuid": "68fdcbb3-c021-4faf-863d-eeaf26995611", 00:28:46.854 "base_bdev": "aio_bdev", 00:28:46.854 "thin_provision": false, 00:28:46.854 "num_allocated_clusters": 38, 00:28:46.854 "snapshot": false, 00:28:46.854 "clone": false, 00:28:46.854 "esnap_clone": false 00:28:46.854 } 00:28:46.854 } 00:28:46.854 } 00:28:46.854 ] 00:28:46.854 21:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:28:46.854 21:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 68fdcbb3-c021-4faf-863d-eeaf26995611 00:28:46.854 21:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:28:47.113 21:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:28:47.113 21:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 68fdcbb3-c021-4faf-863d-eeaf26995611 00:28:47.113 21:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:28:47.371 21:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:28:47.371 21:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:28:47.629 [2024-11-26 21:09:38.530122] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:28:47.629 21:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 68fdcbb3-c021-4faf-863d-eeaf26995611 00:28:47.629 21:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:28:47.629 21:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 68fdcbb3-c021-4faf-863d-eeaf26995611 00:28:47.629 21:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:47.629 21:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:47.630 21:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:47.630 21:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:47.630 21:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:47.630 21:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:47.630 21:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:47.630 21:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:28:47.630 21:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 68fdcbb3-c021-4faf-863d-eeaf26995611 00:28:48.196 request: 00:28:48.196 { 00:28:48.196 "uuid": "68fdcbb3-c021-4faf-863d-eeaf26995611", 00:28:48.196 "method": "bdev_lvol_get_lvstores", 00:28:48.196 "req_id": 1 00:28:48.196 } 00:28:48.196 Got JSON-RPC error response 00:28:48.196 response: 00:28:48.196 { 00:28:48.196 "code": -19, 00:28:48.196 "message": "No such device" 00:28:48.196 } 00:28:48.196 21:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:28:48.196 21:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:48.196 21:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:48.196 21:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:48.196 21:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:28:48.196 aio_bdev 00:28:48.196 21:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 9317eac4-5c4f-4fcd-b54b-617c4c58457e 00:28:48.196 21:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=9317eac4-5c4f-4fcd-b54b-617c4c58457e 00:28:48.196 21:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:28:48.196 21:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:28:48.196 21:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:28:48.196 21:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:28:48.196 21:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:48.761 21:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 9317eac4-5c4f-4fcd-b54b-617c4c58457e -t 2000 00:28:49.018 [ 00:28:49.018 { 00:28:49.018 "name": "9317eac4-5c4f-4fcd-b54b-617c4c58457e", 00:28:49.018 "aliases": [ 00:28:49.018 "lvs/lvol" 00:28:49.018 ], 00:28:49.018 "product_name": "Logical Volume", 00:28:49.018 "block_size": 4096, 00:28:49.018 "num_blocks": 38912, 00:28:49.019 "uuid": "9317eac4-5c4f-4fcd-b54b-617c4c58457e", 00:28:49.019 "assigned_rate_limits": { 00:28:49.019 "rw_ios_per_sec": 0, 00:28:49.019 "rw_mbytes_per_sec": 0, 00:28:49.019 
"r_mbytes_per_sec": 0, 00:28:49.019 "w_mbytes_per_sec": 0 00:28:49.019 }, 00:28:49.019 "claimed": false, 00:28:49.019 "zoned": false, 00:28:49.019 "supported_io_types": { 00:28:49.019 "read": true, 00:28:49.019 "write": true, 00:28:49.019 "unmap": true, 00:28:49.019 "flush": false, 00:28:49.019 "reset": true, 00:28:49.019 "nvme_admin": false, 00:28:49.019 "nvme_io": false, 00:28:49.019 "nvme_io_md": false, 00:28:49.019 "write_zeroes": true, 00:28:49.019 "zcopy": false, 00:28:49.019 "get_zone_info": false, 00:28:49.019 "zone_management": false, 00:28:49.019 "zone_append": false, 00:28:49.019 "compare": false, 00:28:49.019 "compare_and_write": false, 00:28:49.019 "abort": false, 00:28:49.019 "seek_hole": true, 00:28:49.019 "seek_data": true, 00:28:49.019 "copy": false, 00:28:49.019 "nvme_iov_md": false 00:28:49.019 }, 00:28:49.019 "driver_specific": { 00:28:49.019 "lvol": { 00:28:49.019 "lvol_store_uuid": "68fdcbb3-c021-4faf-863d-eeaf26995611", 00:28:49.019 "base_bdev": "aio_bdev", 00:28:49.019 "thin_provision": false, 00:28:49.019 "num_allocated_clusters": 38, 00:28:49.019 "snapshot": false, 00:28:49.019 "clone": false, 00:28:49.019 "esnap_clone": false 00:28:49.019 } 00:28:49.019 } 00:28:49.019 } 00:28:49.019 ] 00:28:49.019 21:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:28:49.019 21:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 68fdcbb3-c021-4faf-863d-eeaf26995611 00:28:49.019 21:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:28:49.276 21:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:28:49.276 21:09:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 68fdcbb3-c021-4faf-863d-eeaf26995611 00:28:49.276 21:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:28:49.535 21:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:28:49.535 21:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 9317eac4-5c4f-4fcd-b54b-617c4c58457e 00:28:49.792 21:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 68fdcbb3-c021-4faf-863d-eeaf26995611 00:28:50.050 21:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:28:50.308 21:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:50.308 00:28:50.308 real 0m19.886s 00:28:50.308 user 0m36.724s 00:28:50.308 sys 0m4.822s 00:28:50.308 21:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:50.308 21:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:28:50.308 ************************************ 00:28:50.308 END TEST lvs_grow_dirty 00:28:50.308 ************************************ 
00:28:50.308 21:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:28:50.308 21:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:28:50.308 21:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:28:50.308 21:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:28:50.308 21:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:28:50.308 21:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:28:50.308 21:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:28:50.308 21:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:28:50.308 21:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:28:50.308 nvmf_trace.0 00:28:50.566 21:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:28:50.566 21:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:28:50.566 21:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:50.566 21:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:28:50.566 21:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:50.566 21:09:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:28:50.566 21:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:50.566 21:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:50.566 rmmod nvme_tcp 00:28:50.566 rmmod nvme_fabrics 00:28:50.566 rmmod nvme_keyring 00:28:50.566 21:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:50.566 21:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:28:50.566 21:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:28:50.566 21:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 4112677 ']' 00:28:50.566 21:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 4112677 00:28:50.566 21:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 4112677 ']' 00:28:50.566 21:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 4112677 00:28:50.566 21:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:28:50.566 21:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:50.566 21:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4112677 00:28:50.566 21:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:50.566 21:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:50.566 
21:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4112677' 00:28:50.566 killing process with pid 4112677 00:28:50.566 21:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 4112677 00:28:50.566 21:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 4112677 00:28:50.824 21:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:50.824 21:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:50.824 21:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:50.824 21:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:28:50.824 21:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:28:50.824 21:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:50.824 21:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:28:50.824 21:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:50.824 21:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:50.824 21:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:50.824 21:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:50.824 21:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:52.726 
21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:52.726 00:28:52.726 real 0m43.581s 00:28:52.726 user 0m56.503s 00:28:52.726 sys 0m8.524s 00:28:52.727 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:52.727 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:52.727 ************************************ 00:28:52.727 END TEST nvmf_lvs_grow 00:28:52.727 ************************************ 00:28:52.986 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:28:52.986 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:52.986 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:52.986 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:52.986 ************************************ 00:28:52.986 START TEST nvmf_bdev_io_wait 00:28:52.986 ************************************ 00:28:52.986 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:28:52.986 * Looking for test storage... 
00:28:52.986 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:52.986 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:52.986 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:28:52.986 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:52.986 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:52.986 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:52.986 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:52.986 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:52.986 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:28:52.986 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:28:52.986 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:28:52.986 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:28:52.986 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:28:52.986 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:28:52.986 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:28:52.986 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:28:52.986 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:28:52.986 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:28:52.986 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:52.986 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:52.986 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:28:52.986 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:28:52.986 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:52.986 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:28:52.986 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:28:52.986 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:28:52.986 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:28:52.986 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:52.986 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:28:52.986 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:28:52.986 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:52.986 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:52.986 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:28:52.986 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:52.986 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:52.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:52.986 --rc genhtml_branch_coverage=1 00:28:52.986 --rc genhtml_function_coverage=1 00:28:52.987 --rc genhtml_legend=1 00:28:52.987 --rc geninfo_all_blocks=1 00:28:52.987 --rc geninfo_unexecuted_blocks=1 00:28:52.987 00:28:52.987 ' 00:28:52.987 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:52.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:52.987 --rc genhtml_branch_coverage=1 00:28:52.987 --rc genhtml_function_coverage=1 00:28:52.987 --rc genhtml_legend=1 00:28:52.987 --rc geninfo_all_blocks=1 00:28:52.987 --rc geninfo_unexecuted_blocks=1 00:28:52.987 00:28:52.987 ' 00:28:52.987 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:52.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:52.987 --rc genhtml_branch_coverage=1 00:28:52.987 --rc genhtml_function_coverage=1 00:28:52.987 --rc genhtml_legend=1 00:28:52.987 --rc geninfo_all_blocks=1 00:28:52.987 --rc geninfo_unexecuted_blocks=1 00:28:52.987 00:28:52.987 ' 00:28:52.987 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:52.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:52.987 --rc genhtml_branch_coverage=1 00:28:52.987 --rc genhtml_function_coverage=1 
00:28:52.987 --rc genhtml_legend=1 00:28:52.987 --rc geninfo_all_blocks=1 00:28:52.987 --rc geninfo_unexecuted_blocks=1 00:28:52.987 00:28:52.987 ' 00:28:52.987 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:52.987 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:28:52.987 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:52.987 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:52.987 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:52.987 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:52.987 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:52.987 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:52.987 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:52.987 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:52.987 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:52.987 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:52.987 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:52.987 21:09:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:52.987 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:52.987 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:52.987 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:52.987 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:52.987 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:52.987 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:28:52.987 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:52.987 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:52.987 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:52.987 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:52.987 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:52.987 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:52.987 21:09:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:28:52.987 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:52.987 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:28:52.987 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:52.987 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:52.987 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:52.987 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:52.987 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:52.987 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:52.987 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:52.987 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:52.987 21:09:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:52.987 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:52.987 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:52.987 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:52.987 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:28:52.988 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:52.988 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:52.988 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:52.988 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:52.988 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:52.988 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:52.988 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:52.988 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:52.988 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:52.988 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:52.988 21:09:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:28:52.988 21:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:55.528 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:55.528 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:28:55.528 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:55.528 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:55.528 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:55.528 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:55.528 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:55.528 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:28:55.528 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:55.528 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:28:55.528 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:28:55.528 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:28:55.528 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:28:55.528 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:28:55.528 21:09:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:28:55.528 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:55.528 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:55.528 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:55.528 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:55.528 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:55.528 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:55.528 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:55.528 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:55.528 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:55.528 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:55.528 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:55.528 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:55.528 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:55.528 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:55.528 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:55.528 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:55.528 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:55.528 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:55.528 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:55.528 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:55.528 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:55.528 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:55.528 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:55.528 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:55.529 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:55.529 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:55.529 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:55.529 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:55.529 Found 
0000:0a:00.1 (0x8086 - 0x159b) 00:28:55.529 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:55.529 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:55.529 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:55.529 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:55.529 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:55.529 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:55.529 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:55.529 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:55.529 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:55.529 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:55.529 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:55.529 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:55.529 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:55.529 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:55.529 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:55.529 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:55.529 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:55.529 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:55.529 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:55.529 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:55.529 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:55.529 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:55.529 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:55.529 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:55.529 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:55.529 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:55.529 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:55.529 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:55.529 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:55.529 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:28:55.529 21:09:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:55.529 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:55.529 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:55.529 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:55.529 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:55.529 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:55.529 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:55.529 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:55.529 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:55.529 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:55.529 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:55.529 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:55.529 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:55.529 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:55.529 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:28:55.529 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:55.529 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:55.529 21:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:55.529 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:55.529 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:55.529 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:55.529 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:55.529 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:55.529 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:55.529 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:55.529 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:55.529 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:55.529 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:28:55.529 00:28:55.529 --- 10.0.0.2 ping statistics --- 00:28:55.529 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:55.529 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:28:55.529 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:55.529 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:55.529 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:28:55.529 00:28:55.529 --- 10.0.0.1 ping statistics --- 00:28:55.529 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:55.529 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:28:55.529 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:55.529 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:28:55.529 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:55.529 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:55.529 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:55.529 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:55.530 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:55.530 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:55.530 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:55.530 21:09:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:55.530 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:55.530 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:55.530 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:55.530 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=4115204 00:28:55.530 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:28:55.530 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 4115204 00:28:55.530 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 4115204 ']' 00:28:55.530 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:55.530 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:55.530 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:55.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:55.530 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:55.530 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:55.530 [2024-11-26 21:09:46.190449] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:55.530 [2024-11-26 21:09:46.191704] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:28:55.530 [2024-11-26 21:09:46.191778] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:55.530 [2024-11-26 21:09:46.268894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:55.530 [2024-11-26 21:09:46.329770] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:55.530 [2024-11-26 21:09:46.329827] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:55.530 [2024-11-26 21:09:46.329867] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:55.530 [2024-11-26 21:09:46.329879] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:55.530 [2024-11-26 21:09:46.329889] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:55.530 [2024-11-26 21:09:46.331670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:55.530 [2024-11-26 21:09:46.331705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:55.530 [2024-11-26 21:09:46.331730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:55.530 [2024-11-26 21:09:46.331735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:55.530 [2024-11-26 21:09:46.332351] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:55.530 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:55.530 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:28:55.530 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:55.530 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:55.530 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:55.530 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:55.530 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:28:55.530 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.530 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:55.530 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.530 21:09:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:28:55.530 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.530 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:55.790 [2024-11-26 21:09:46.528626] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:55.790 [2024-11-26 21:09:46.528874] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:55.790 [2024-11-26 21:09:46.529834] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:55.791 [2024-11-26 21:09:46.530682] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
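The bdevperf processes launched further down read a target config produced by `gen_nvmf_target_json` (the `nvmf/common.sh` heredoc whose expansion appears later in this trace). A hedged Python sketch of the equivalent structure; field values mirror the expanded JSON in the log, the defaults are assumptions:

```python
import json

def gen_nvmf_target_json(subsystem=1, trtype="tcp", traddr="10.0.0.2",
                         trsvcid="4420", hdgst=False, ddgst=False):
    """Build one bdev_nvme_attach_controller entry in the shape the
    nvmf/common.sh heredoc emits (sketch, not the shell implementation)."""
    return {
        "params": {
            "name": f"Nvme{subsystem}",
            "trtype": trtype,
            "traddr": traddr,
            "adrfam": "ipv4",
            "trsvcid": trsvcid,
            "subnqn": f"nqn.2016-06.io.spdk:cnode{subsystem}",
            "hostnqn": f"nqn.2016-06.io.spdk:host{subsystem}",
            "hdgst": hdgst,
            "ddgst": ddgst,
        },
        "method": "bdev_nvme_attach_controller",
    }

# All four bdevperf jobs in this test attach to the same cnode1 listener.
print(json.dumps(gen_nvmf_target_json(), indent=2))
```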
00:28:55.791 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.791 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:55.791 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.791 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:55.791 [2024-11-26 21:09:46.536603] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:55.791 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.791 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:55.791 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.791 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:55.791 Malloc0 00:28:55.791 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.791 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:55.791 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.791 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:55.791 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.791 21:09:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:55.791 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.791 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:55.791 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.791 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:55.791 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.791 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:55.791 [2024-11-26 21:09:46.592776] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:55.791 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.791 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=4115347 00:28:55.791 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=4115349 00:28:55.791 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:28:55.791 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:28:55.791 21:09:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:28:55.791 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:28:55.791 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=4115351 00:28:55.791 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:55.791 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:28:55.791 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:28:55.791 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:55.791 { 00:28:55.791 "params": { 00:28:55.791 "name": "Nvme$subsystem", 00:28:55.791 "trtype": "$TEST_TRANSPORT", 00:28:55.791 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:55.791 "adrfam": "ipv4", 00:28:55.791 "trsvcid": "$NVMF_PORT", 00:28:55.791 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:55.791 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:55.791 "hdgst": ${hdgst:-false}, 00:28:55.791 "ddgst": ${ddgst:-false} 00:28:55.791 }, 00:28:55.791 "method": "bdev_nvme_attach_controller" 00:28:55.791 } 00:28:55.791 EOF 00:28:55.791 )") 00:28:55.791 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:28:55.791 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:28:55.791 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:55.791 21:09:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=4115353 00:28:55.791 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:55.791 { 00:28:55.791 "params": { 00:28:55.791 "name": "Nvme$subsystem", 00:28:55.791 "trtype": "$TEST_TRANSPORT", 00:28:55.791 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:55.791 "adrfam": "ipv4", 00:28:55.791 "trsvcid": "$NVMF_PORT", 00:28:55.791 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:55.791 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:55.791 "hdgst": ${hdgst:-false}, 00:28:55.791 "ddgst": ${ddgst:-false} 00:28:55.791 }, 00:28:55.791 "method": "bdev_nvme_attach_controller" 00:28:55.791 } 00:28:55.791 EOF 00:28:55.791 )") 00:28:55.791 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:28:55.791 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:28:55.791 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:28:55.791 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:28:55.791 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:28:55.791 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:28:55.791 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:28:55.791 21:09:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:55.791 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:28:55.791 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:28:55.791 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:55.791 { 00:28:55.791 "params": { 00:28:55.791 "name": "Nvme$subsystem", 00:28:55.791 "trtype": "$TEST_TRANSPORT", 00:28:55.791 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:55.791 "adrfam": "ipv4", 00:28:55.791 "trsvcid": "$NVMF_PORT", 00:28:55.791 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:55.791 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:55.792 "hdgst": ${hdgst:-false}, 00:28:55.792 "ddgst": ${ddgst:-false} 00:28:55.792 }, 00:28:55.792 "method": "bdev_nvme_attach_controller" 00:28:55.792 } 00:28:55.792 EOF 00:28:55.792 )") 00:28:55.792 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:28:55.792 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:55.792 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:55.792 { 00:28:55.792 "params": { 00:28:55.792 "name": "Nvme$subsystem", 00:28:55.792 "trtype": "$TEST_TRANSPORT", 00:28:55.792 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:55.792 "adrfam": "ipv4", 00:28:55.792 "trsvcid": "$NVMF_PORT", 00:28:55.792 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:55.792 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:55.792 "hdgst": ${hdgst:-false}, 00:28:55.792 "ddgst": ${ddgst:-false} 00:28:55.792 }, 00:28:55.792 "method": "bdev_nvme_attach_controller" 00:28:55.792 } 00:28:55.792 EOF 00:28:55.792 )") 00:28:55.792 
21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:28:55.792 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:28:55.792 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 4115347 00:28:55.792 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:28:55.792 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:28:55.792 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:28:55.792 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:28:55.792 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:28:55.792 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:28:55.792 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:55.792 "params": { 00:28:55.792 "name": "Nvme1", 00:28:55.792 "trtype": "tcp", 00:28:55.792 "traddr": "10.0.0.2", 00:28:55.792 "adrfam": "ipv4", 00:28:55.792 "trsvcid": "4420", 00:28:55.792 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:55.792 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:55.792 "hdgst": false, 00:28:55.792 "ddgst": false 00:28:55.792 }, 00:28:55.792 "method": "bdev_nvme_attach_controller" 00:28:55.792 }' 00:28:55.792 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:28:55.792 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:55.792 "params": { 00:28:55.792 "name": "Nvme1", 00:28:55.792 "trtype": "tcp", 00:28:55.792 "traddr": "10.0.0.2", 00:28:55.792 "adrfam": "ipv4", 00:28:55.792 "trsvcid": "4420", 
00:28:55.792 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:55.792 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:55.792 "hdgst": false, 00:28:55.792 "ddgst": false 00:28:55.792 }, 00:28:55.792 "method": "bdev_nvme_attach_controller" 00:28:55.792 }' 00:28:55.792 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:28:55.792 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:55.792 "params": { 00:28:55.792 "name": "Nvme1", 00:28:55.792 "trtype": "tcp", 00:28:55.792 "traddr": "10.0.0.2", 00:28:55.792 "adrfam": "ipv4", 00:28:55.792 "trsvcid": "4420", 00:28:55.792 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:55.792 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:55.792 "hdgst": false, 00:28:55.792 "ddgst": false 00:28:55.792 }, 00:28:55.792 "method": "bdev_nvme_attach_controller" 00:28:55.792 }' 00:28:55.792 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:28:55.792 21:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:55.792 "params": { 00:28:55.792 "name": "Nvme1", 00:28:55.792 "trtype": "tcp", 00:28:55.792 "traddr": "10.0.0.2", 00:28:55.792 "adrfam": "ipv4", 00:28:55.792 "trsvcid": "4420", 00:28:55.792 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:55.792 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:55.792 "hdgst": false, 00:28:55.792 "ddgst": false 00:28:55.792 }, 00:28:55.792 "method": "bdev_nvme_attach_controller" 00:28:55.792 }' 00:28:55.792 [2024-11-26 21:09:46.645097] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:28:55.792 [2024-11-26 21:09:46.645098] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:28:55.792 [2024-11-26 21:09:46.645098] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:28:55.792 [2024-11-26 21:09:46.645115] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:28:55.792 [2024-11-26 21:09:46.645180] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 
00:28:55.792 [2024-11-26 21:09:46.645180] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 
00:28:55.792 [2024-11-26 21:09:46.645181] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 
00:28:55.792 [2024-11-26 21:09:46.645197] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 
00:28:56.050 [2024-11-26 21:09:46.828071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:56.050 [2024-11-26 21:09:46.882149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:56.050 [2024-11-26 21:09:46.926650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:56.050 [2024-11-26 21:09:46.980117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:28:56.307 [2024-11-26 21:09:47.024691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:56.307 [2024-11-26 21:09:47.075959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:28:56.307 [2024-11-26 21:09:47.091219] app.c: 919:spdk_app_start: *NOTICE*: Total cores
available: 1 00:28:56.307 [2024-11-26 21:09:47.140340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:28:56.307 Running I/O for 1 seconds... 00:28:56.307 Running I/O for 1 seconds... 00:28:56.565 Running I/O for 1 seconds... 00:28:56.565 Running I/O for 1 seconds... 00:28:57.498 5857.00 IOPS, 22.88 MiB/s [2024-11-26T20:09:48.436Z] 132136.00 IOPS, 516.16 MiB/s 00:28:57.498 Latency(us) 00:28:57.498 [2024-11-26T20:09:48.436Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:57.498 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:28:57.498 Nvme1n1 : 1.00 131869.45 515.12 0.00 0.00 965.09 315.54 1990.35 00:28:57.498 [2024-11-26T20:09:48.436Z] =================================================================================================================== 00:28:57.498 [2024-11-26T20:09:48.436Z] Total : 131869.45 515.12 0.00 0.00 965.09 315.54 1990.35 00:28:57.498 00:28:57.498 Latency(us) 00:28:57.498 [2024-11-26T20:09:48.436Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:57.498 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:28:57.498 Nvme1n1 : 1.02 5853.43 22.86 0.00 0.00 21632.47 4830.25 28932.93 00:28:57.498 [2024-11-26T20:09:48.436Z] =================================================================================================================== 00:28:57.498 [2024-11-26T20:09:48.436Z] Total : 5853.43 22.86 0.00 0.00 21632.47 4830.25 28932.93 00:28:57.498 5583.00 IOPS, 21.81 MiB/s [2024-11-26T20:09:48.436Z] 10498.00 IOPS, 41.01 MiB/s 00:28:57.498 Latency(us) 00:28:57.498 [2024-11-26T20:09:48.436Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:57.498 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:28:57.498 Nvme1n1 : 1.01 5671.72 22.16 0.00 0.00 22477.84 6893.42 37671.06 00:28:57.498 [2024-11-26T20:09:48.436Z] 
=================================================================================================================== 00:28:57.498 [2024-11-26T20:09:48.436Z] Total : 5671.72 22.16 0.00 0.00 22477.84 6893.42 37671.06 00:28:57.498 00:28:57.498 Latency(us) 00:28:57.498 [2024-11-26T20:09:48.436Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:57.498 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:28:57.498 Nvme1n1 : 1.01 10575.11 41.31 0.00 0.00 12066.32 2354.44 17087.91 00:28:57.498 [2024-11-26T20:09:48.436Z] =================================================================================================================== 00:28:57.498 [2024-11-26T20:09:48.437Z] Total : 10575.11 41.31 0.00 0.00 12066.32 2354.44 17087.91 00:28:57.755 21:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 4115349 00:28:57.755 21:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 4115351 00:28:57.755 21:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 4115353 00:28:57.755 21:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:57.755 21:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.755 21:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:57.755 21:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.755 21:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:28:57.755 21:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # 
nvmftestfini 00:28:57.755 21:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:57.755 21:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:28:57.755 21:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:57.755 21:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:28:57.755 21:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:57.755 21:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:57.755 rmmod nvme_tcp 00:28:57.755 rmmod nvme_fabrics 00:28:57.755 rmmod nvme_keyring 00:28:57.755 21:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:57.755 21:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:28:57.755 21:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:28:57.755 21:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 4115204 ']' 00:28:57.755 21:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 4115204 00:28:57.755 21:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 4115204 ']' 00:28:57.755 21:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 4115204 00:28:57.755 21:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:28:57.755 21:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:28:57.755 21:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4115204 00:28:57.755 21:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:57.755 21:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:57.755 21:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4115204' 00:28:57.755 killing process with pid 4115204 00:28:57.755 21:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 4115204 00:28:57.755 21:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 4115204 00:28:58.013 21:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:58.013 21:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:58.013 21:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:58.013 21:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:28:58.013 21:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:28:58.013 21:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:58.013 21:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:28:58.013 21:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:58.013 21:09:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:58.013 21:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:58.013 21:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:58.013 21:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:00.547 21:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:00.547 00:29:00.547 real 0m7.181s 00:29:00.547 user 0m13.258s 00:29:00.547 sys 0m4.162s 00:29:00.547 21:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:00.547 21:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:00.547 ************************************ 00:29:00.547 END TEST nvmf_bdev_io_wait 00:29:00.547 ************************************ 00:29:00.547 21:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:29:00.547 21:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:00.547 21:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:00.547 21:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:00.547 ************************************ 00:29:00.547 START TEST nvmf_queue_depth 00:29:00.547 ************************************ 00:29:00.547 21:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:29:00.547 * Looking for test storage... 00:29:00.547 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:00.547 21:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:00.547 21:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:29:00.547 21:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:00.547 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:00.547 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:00.547 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:00.547 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:00.547 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:29:00.547 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:29:00.547 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:29:00.547 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:29:00.547 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:29:00.547 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:29:00.547 21:09:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:29:00.547 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:00.547 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:29:00.547 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:29:00.547 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:00.547 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:00.547 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:29:00.547 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:29:00.547 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:00.547 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:29:00.547 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:29:00.547 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:29:00.547 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:29:00.547 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:00.547 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:29:00.547 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:29:00.547 21:09:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:00.547 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:00.547 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:29:00.547 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:00.547 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:00.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.547 --rc genhtml_branch_coverage=1 00:29:00.547 --rc genhtml_function_coverage=1 00:29:00.547 --rc genhtml_legend=1 00:29:00.548 --rc geninfo_all_blocks=1 00:29:00.548 --rc geninfo_unexecuted_blocks=1 00:29:00.548 00:29:00.548 ' 00:29:00.548 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:00.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.548 --rc genhtml_branch_coverage=1 00:29:00.548 --rc genhtml_function_coverage=1 00:29:00.548 --rc genhtml_legend=1 00:29:00.548 --rc geninfo_all_blocks=1 00:29:00.548 --rc geninfo_unexecuted_blocks=1 00:29:00.548 00:29:00.548 ' 00:29:00.548 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:00.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.548 --rc genhtml_branch_coverage=1 00:29:00.548 --rc genhtml_function_coverage=1 00:29:00.548 --rc genhtml_legend=1 00:29:00.548 --rc geninfo_all_blocks=1 00:29:00.548 --rc geninfo_unexecuted_blocks=1 00:29:00.548 00:29:00.548 ' 00:29:00.548 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:00.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.548 --rc genhtml_branch_coverage=1 00:29:00.548 --rc genhtml_function_coverage=1 00:29:00.548 --rc genhtml_legend=1 00:29:00.548 --rc geninfo_all_blocks=1 00:29:00.548 --rc geninfo_unexecuted_blocks=1 00:29:00.548 00:29:00.548 ' 00:29:00.548 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:00.548 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:29:00.548 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:00.548 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:00.548 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:00.548 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:00.548 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:00.548 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:00.548 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:00.548 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:00.548 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:00.548 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:00.548 21:09:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:00.548 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:00.548 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:00.548 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:00.548 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:00.548 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:00.548 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:00.548 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:29:00.548 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:00.548 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:00.548 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:00.548 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.548 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.548 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.548 21:09:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:29:00.548 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.548 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:29:00.548 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:00.548 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:00.548 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:00.548 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:00.548 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:00.548 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:00.548 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:00.548 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:00.548 21:09:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:00.548 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:00.548 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:29:00.548 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:29:00.548 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:00.548 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:29:00.548 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:00.548 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:00.548 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:00.548 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:00.548 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:00.548 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:00.548 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:00.548 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:00.548 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:00.548 21:09:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:00.548 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:29:00.548 21:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:29:02.452 
21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:02.452 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:02.452 21:09:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:02.452 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:02.452 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:02.452 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:02.452 21:09:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:02.452 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:02.453 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:02.453 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:29:02.453 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:02.453 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:02.453 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:02.453 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:02.453 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:02.453 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:02.453 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:02.453 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:02.453 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:02.453 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:02.453 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:02.453 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:02.453 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:02.453 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:29:02.453 00:29:02.453 --- 10.0.0.2 ping statistics --- 00:29:02.453 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:02.453 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:29:02.453 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:02.453 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:02.453 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:29:02.453 00:29:02.453 --- 10.0.0.1 ping statistics --- 00:29:02.453 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:02.453 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:29:02.453 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:02.453 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:29:02.453 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:02.453 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:02.453 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:02.453 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:02.453 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:02.453 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:02.453 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:02.453 21:09:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:29:02.453 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:02.453 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:02.453 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:02.453 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=4117559 00:29:02.453 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:29:02.453 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 4117559 00:29:02.453 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 4117559 ']' 00:29:02.453 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:02.453 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:02.453 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:02.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:02.453 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:02.453 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:02.453 [2024-11-26 21:09:53.250326] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:02.453 [2024-11-26 21:09:53.251433] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:29:02.453 [2024-11-26 21:09:53.251500] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:02.453 [2024-11-26 21:09:53.326984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:02.453 [2024-11-26 21:09:53.383088] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:02.453 [2024-11-26 21:09:53.383140] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:02.453 [2024-11-26 21:09:53.383169] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:02.453 [2024-11-26 21:09:53.383180] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:02.453 [2024-11-26 21:09:53.383189] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:02.453 [2024-11-26 21:09:53.383826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:02.712 [2024-11-26 21:09:53.473102] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:02.712 [2024-11-26 21:09:53.473406] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:29:02.712 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:02.712 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:29:02.712 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:02.712 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:02.712 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:02.712 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:02.712 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:02.712 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.712 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:02.712 [2024-11-26 21:09:53.520466] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:02.712 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.712 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:02.712 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.712 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:02.712 Malloc0 00:29:02.712 21:09:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.712 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:02.712 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.712 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:02.712 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.712 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:02.712 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.712 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:02.712 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.712 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:02.712 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.712 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:02.713 [2024-11-26 21:09:53.580587] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:02.713 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.713 
21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=4117595 00:29:02.713 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:29:02.713 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:02.713 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 4117595 /var/tmp/bdevperf.sock 00:29:02.713 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 4117595 ']' 00:29:02.713 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:02.713 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:02.713 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:02.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:02.713 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:02.713 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:02.713 [2024-11-26 21:09:53.634088] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:29:02.713 [2024-11-26 21:09:53.634169] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4117595 ] 00:29:02.971 [2024-11-26 21:09:53.709873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:02.971 [2024-11-26 21:09:53.772868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:02.971 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:02.971 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:29:02.971 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:02.971 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.971 21:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:03.229 NVMe0n1 00:29:03.229 21:09:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.229 21:09:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:03.487 Running I/O for 10 seconds... 
00:29:05.354 7812.00 IOPS, 30.52 MiB/s [2024-11-26T20:09:57.668Z] 8087.00 IOPS, 31.59 MiB/s [2024-11-26T20:09:58.236Z] 7979.33 IOPS, 31.17 MiB/s [2024-11-26T20:09:59.609Z] 7986.00 IOPS, 31.20 MiB/s [2024-11-26T20:10:00.542Z] 7992.40 IOPS, 31.22 MiB/s [2024-11-26T20:10:01.476Z] 8022.17 IOPS, 31.34 MiB/s [2024-11-26T20:10:02.410Z] 8047.43 IOPS, 31.44 MiB/s [2024-11-26T20:10:03.343Z] 8061.38 IOPS, 31.49 MiB/s [2024-11-26T20:10:04.275Z] 8073.78 IOPS, 31.54 MiB/s [2024-11-26T20:10:04.533Z] 8068.00 IOPS, 31.52 MiB/s 00:29:13.595 Latency(us) 00:29:13.595 [2024-11-26T20:10:04.533Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:13.595 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:29:13.595 Verification LBA range: start 0x0 length 0x4000 00:29:13.595 NVMe0n1 : 10.11 8084.02 31.58 0.00 0.00 126041.14 24369.68 76118.85 00:29:13.595 [2024-11-26T20:10:04.533Z] =================================================================================================================== 00:29:13.595 [2024-11-26T20:10:04.533Z] Total : 8084.02 31.58 0.00 0.00 126041.14 24369.68 76118.85 00:29:13.595 { 00:29:13.595 "results": [ 00:29:13.595 { 00:29:13.595 "job": "NVMe0n1", 00:29:13.595 "core_mask": "0x1", 00:29:13.595 "workload": "verify", 00:29:13.595 "status": "finished", 00:29:13.595 "verify_range": { 00:29:13.595 "start": 0, 00:29:13.595 "length": 16384 00:29:13.595 }, 00:29:13.595 "queue_depth": 1024, 00:29:13.595 "io_size": 4096, 00:29:13.595 "runtime": 10.105242, 00:29:13.595 "iops": 8084.0221342546765, 00:29:13.595 "mibps": 31.57821146193233, 00:29:13.595 "io_failed": 0, 00:29:13.595 "io_timeout": 0, 00:29:13.595 "avg_latency_us": 126041.14142759278, 00:29:13.595 "min_latency_us": 24369.682962962965, 00:29:13.595 "max_latency_us": 76118.85037037038 00:29:13.595 } 00:29:13.595 ], 00:29:13.595 "core_count": 1 00:29:13.595 } 00:29:13.595 21:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 4117595 00:29:13.595 21:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 4117595 ']' 00:29:13.595 21:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 4117595 00:29:13.595 21:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:29:13.595 21:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:13.595 21:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4117595 00:29:13.595 21:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:13.596 21:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:13.596 21:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4117595' 00:29:13.596 killing process with pid 4117595 00:29:13.596 21:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 4117595 00:29:13.596 Received shutdown signal, test time was about 10.000000 seconds 00:29:13.596 00:29:13.596 Latency(us) 00:29:13.596 [2024-11-26T20:10:04.534Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:13.596 [2024-11-26T20:10:04.534Z] =================================================================================================================== 00:29:13.596 [2024-11-26T20:10:04.534Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:13.596 21:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 4117595 00:29:13.854 21:10:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:29:13.854 21:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:29:13.854 21:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:13.854 21:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:29:13.854 21:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:13.854 21:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:29:13.854 21:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:13.854 21:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:13.854 rmmod nvme_tcp 00:29:13.854 rmmod nvme_fabrics 00:29:13.854 rmmod nvme_keyring 00:29:13.854 21:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:13.854 21:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:29:13.854 21:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:29:13.854 21:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 4117559 ']' 00:29:13.854 21:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 4117559 00:29:13.854 21:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 4117559 ']' 00:29:13.854 21:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 4117559 00:29:13.854 21:10:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:29:13.854 21:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:13.854 21:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4117559 00:29:13.854 21:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:13.854 21:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:13.854 21:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4117559' 00:29:13.854 killing process with pid 4117559 00:29:13.854 21:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 4117559 00:29:13.854 21:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 4117559 00:29:14.114 21:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:14.114 21:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:14.114 21:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:14.114 21:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:29:14.114 21:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:29:14.114 21:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:14.114 21:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 
00:29:14.114 21:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:14.114 21:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:14.114 21:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:14.114 21:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:14.114 21:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:16.648 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:16.648 00:29:16.648 real 0m16.157s 00:29:16.648 user 0m22.604s 00:29:16.648 sys 0m3.202s 00:29:16.648 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:16.648 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:16.648 ************************************ 00:29:16.648 END TEST nvmf_queue_depth 00:29:16.648 ************************************ 00:29:16.648 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:29:16.648 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:16.648 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:16.648 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:16.648 ************************************ 00:29:16.648 START 
TEST nvmf_target_multipath 00:29:16.648 ************************************ 00:29:16.648 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:29:16.648 * Looking for test storage... 00:29:16.648 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:16.648 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:16.648 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:29:16.648 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:16.648 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:16.648 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:16.648 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:16.648 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:16.648 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:29:16.648 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:29:16.648 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:29:16.648 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:29:16.648 21:10:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:29:16.648 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:29:16.648 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:29:16.648 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:16.649 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:29:16.649 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:29:16.649 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:16.649 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:16.649 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:29:16.649 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:29:16.649 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:16.649 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:29:16.649 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:29:16.649 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:29:16.649 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:29:16.649 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:16.649 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:29:16.649 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:29:16.649 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:16.649 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:16.649 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:29:16.649 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:16.649 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:16.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.649 --rc genhtml_branch_coverage=1 00:29:16.649 --rc genhtml_function_coverage=1 00:29:16.649 --rc genhtml_legend=1 00:29:16.649 --rc geninfo_all_blocks=1 00:29:16.649 --rc geninfo_unexecuted_blocks=1 00:29:16.649 00:29:16.649 ' 00:29:16.649 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:16.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.649 --rc genhtml_branch_coverage=1 00:29:16.649 --rc genhtml_function_coverage=1 00:29:16.649 --rc genhtml_legend=1 00:29:16.649 --rc geninfo_all_blocks=1 00:29:16.649 --rc geninfo_unexecuted_blocks=1 00:29:16.649 00:29:16.649 ' 00:29:16.649 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:16.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.649 --rc genhtml_branch_coverage=1 00:29:16.649 --rc genhtml_function_coverage=1 00:29:16.649 --rc genhtml_legend=1 00:29:16.649 --rc geninfo_all_blocks=1 00:29:16.649 --rc geninfo_unexecuted_blocks=1 00:29:16.649 00:29:16.649 ' 00:29:16.649 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:16.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.649 --rc genhtml_branch_coverage=1 00:29:16.649 --rc genhtml_function_coverage=1 00:29:16.649 --rc genhtml_legend=1 00:29:16.649 --rc geninfo_all_blocks=1 00:29:16.649 --rc geninfo_unexecuted_blocks=1 00:29:16.649 00:29:16.649 ' 00:29:16.649 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:16.649 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:29:16.649 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:16.649 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:16.649 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:16.649 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:16.649 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:16.649 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:16.649 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:16.649 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:16.649 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:16.649 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:16.649 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:16.649 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:16.649 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:16.649 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:16.649 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:16.649 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:16.649 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:16.649 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:29:16.649 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:16.649 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:16.649 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:16.649 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.649 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.649 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.649 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:29:16.649 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.649 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:29:16.649 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:16.649 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:16.649 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:16.649 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:16.649 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:16.649 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:16.649 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:16.649 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:16.649 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:16.649 21:10:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:16.649 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:16.649 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:16.649 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:29:16.649 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:16.649 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:29:16.649 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:16.649 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:16.649 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:16.649 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:16.649 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:16.649 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:16.649 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:16.649 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:16.649 21:10:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:16.649 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:16.649 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:29:16.649 21:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:29:18.608 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:18.608 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:29:18.608 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:18.608 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:18.608 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:18.608 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:18.608 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:18.608 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:29:18.608 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:18.608 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:29:18.608 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:29:18.608 21:10:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:29:18.608 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:29:18.608 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:29:18.608 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:29:18.608 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:18.608 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:18.608 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:18.608 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:18.608 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:18.608 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:18.608 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:18.608 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:18.608 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:18.608 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:18.608 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:18.608 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:18.608 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:18.608 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:18.608 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:18.608 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:18.608 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:18.608 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:18.608 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:18.608 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:18.608 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:18.608 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:18.608 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:18.608 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:18.608 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:18.608 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:18.608 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:18.608 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:18.608 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:18.608 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:18.608 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:18.608 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:18.608 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:18.609 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:18.609 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:18.609 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:18.609 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:18.609 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:18.609 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:18.609 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:29:18.609 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:18.609 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:18.609 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:18.609 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:18.609 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:18.609 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:18.609 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:18.609 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:18.609 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:18.609 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:18.609 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:18.609 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:18.609 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:18.609 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:18.609 21:10:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:18.609 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:18.609 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:18.609 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:18.609 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:29:18.609 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:18.609 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:18.609 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:18.609 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:18.609 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:18.609 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:18.609 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:18.609 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:18.609 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:18.609 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:18.609 21:10:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:18.609 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:18.609 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:18.609 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:18.609 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:18.609 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:18.609 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:18.609 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:18.609 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:18.609 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:18.609 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:18.609 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:18.609 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:18.609 21:10:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:18.609 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:18.609 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:18.609 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:18.609 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.297 ms 00:29:18.609 00:29:18.609 --- 10.0.0.2 ping statistics --- 00:29:18.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:18.609 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:29:18.609 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:18.609 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:18.609 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:29:18.609 00:29:18.609 --- 10.0.0.1 ping statistics --- 00:29:18.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:18.609 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:29:18.609 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:18.609 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:29:18.609 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:18.609 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:18.609 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:18.609 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:18.609 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:18.609 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:18.609 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:18.609 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:29:18.609 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:29:18.609 only one NIC for nvmf test 00:29:18.609 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:29:18.609 21:10:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:18.609 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:29:18.609 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:18.609 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:29:18.609 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:18.609 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:18.609 rmmod nvme_tcp 00:29:18.609 rmmod nvme_fabrics 00:29:18.609 rmmod nvme_keyring 00:29:18.609 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:18.610 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:29:18.610 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:29:18.610 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:29:18.610 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:18.610 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:18.610 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:18.610 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:29:18.610 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:29:18.610 21:10:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:18.610 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:29:18.610 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:18.610 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:18.610 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:18.610 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:18.610 21:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:21.144 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:21.144 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:29:21.144 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:29:21.144 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:21.144 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:29:21.144 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:21.144 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:29:21.144 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:29:21.144 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:21.144 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:21.144 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:29:21.144 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:29:21.144 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:29:21.144 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:21.144 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:21.144 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:21.144 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:29:21.144 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:29:21.144 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:21.144 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:29:21.144 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:21.144 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:21.144 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:21.144 
21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:21.144 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:21.144 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:21.144 00:29:21.144 real 0m4.405s 00:29:21.144 user 0m0.910s 00:29:21.144 sys 0m1.496s 00:29:21.144 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:21.144 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:29:21.144 ************************************ 00:29:21.144 END TEST nvmf_target_multipath 00:29:21.144 ************************************ 00:29:21.144 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:29:21.144 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:21.144 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:21.144 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:21.144 ************************************ 00:29:21.144 START TEST nvmf_zcopy 00:29:21.144 ************************************ 00:29:21.144 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:29:21.144 * Looking for test storage... 
00:29:21.144 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:21.144 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:21.144 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:29:21.144 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:21.144 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:21.144 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:21.144 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:21.145 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:21.145 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:29:21.145 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:29:21.145 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:29:21.145 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:29:21.145 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:29:21.145 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:29:21.145 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:29:21.145 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:21.145 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:29:21.145 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:29:21.145 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:21.145 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:21.145 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:29:21.145 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:29:21.145 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:21.145 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:29:21.145 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:29:21.145 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:29:21.145 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:29:21.145 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:21.145 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:29:21.145 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:29:21.145 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:21.145 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:21.145 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:29:21.145 21:10:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:21.145 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:21.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:21.145 --rc genhtml_branch_coverage=1 00:29:21.145 --rc genhtml_function_coverage=1 00:29:21.145 --rc genhtml_legend=1 00:29:21.145 --rc geninfo_all_blocks=1 00:29:21.145 --rc geninfo_unexecuted_blocks=1 00:29:21.145 00:29:21.145 ' 00:29:21.145 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:21.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:21.145 --rc genhtml_branch_coverage=1 00:29:21.145 --rc genhtml_function_coverage=1 00:29:21.145 --rc genhtml_legend=1 00:29:21.145 --rc geninfo_all_blocks=1 00:29:21.145 --rc geninfo_unexecuted_blocks=1 00:29:21.145 00:29:21.145 ' 00:29:21.145 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:21.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:21.145 --rc genhtml_branch_coverage=1 00:29:21.145 --rc genhtml_function_coverage=1 00:29:21.145 --rc genhtml_legend=1 00:29:21.145 --rc geninfo_all_blocks=1 00:29:21.145 --rc geninfo_unexecuted_blocks=1 00:29:21.145 00:29:21.145 ' 00:29:21.145 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:21.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:21.145 --rc genhtml_branch_coverage=1 00:29:21.145 --rc genhtml_function_coverage=1 00:29:21.145 --rc genhtml_legend=1 00:29:21.145 --rc geninfo_all_blocks=1 00:29:21.145 --rc geninfo_unexecuted_blocks=1 00:29:21.145 00:29:21.145 ' 00:29:21.145 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:21.145 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:29:21.145 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:21.145 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:21.145 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:21.145 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:21.145 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:21.145 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:21.145 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:21.145 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:21.145 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:21.145 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:21.145 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:21.145 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:21.145 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:21.145 21:10:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:21.145 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:21.145 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:21.145 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:21.145 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:29:21.145 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:21.145 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:21.145 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:21.145 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.145 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.145 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.145 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:29:21.145 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.145 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:29:21.145 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:21.145 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:21.145 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:21.145 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:21.145 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:21.145 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:21.145 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:21.145 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:21.145 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:21.145 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:21.145 21:10:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:29:21.145 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:21.145 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:21.145 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:21.145 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:21.146 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:21.146 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:21.146 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:21.146 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:21.146 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:21.146 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:21.146 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:29:21.146 21:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:23.047 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:23.047 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:29:23.047 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:23.047 
21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:23.047 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:23.047 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:23.047 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:23.047 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:29:23.047 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:23.047 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:29:23.047 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:29:23.047 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:29:23.047 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:29:23.047 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:29:23.047 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:29:23.047 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:23.047 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:23.047 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:23.047 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:23.047 21:10:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:23.047 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:23.047 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:23.047 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:23.047 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:23.047 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:23.047 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:23.047 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:23.047 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:23.047 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:23.047 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:23.047 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:23.047 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:23.047 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:23.047 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:29:23.047 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:23.047 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:23.047 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:23.047 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:23.047 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:23.047 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:23.047 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:23.047 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:23.047 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:23.047 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:23.047 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:23.047 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:23.047 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:23.047 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:23.047 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:23.047 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:23.047 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:29:23.047 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:23.047 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:23.047 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:23.047 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:23.047 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:23.047 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:23.047 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:23.047 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:23.047 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:23.047 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:23.047 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:23.047 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:23.047 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:23.047 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:23.047 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:23.047 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:29:23.048 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:23.048 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:23.048 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:23.048 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:23.048 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:23.048 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:23.048 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:29:23.048 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:23.048 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:23.048 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:23.048 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:23.048 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:23.048 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:23.048 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:23.048 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:23.048 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:29:23.048 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:23.048 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:23.048 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:23.048 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:23.048 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:23.048 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:23.048 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:23.048 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:23.048 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:23.048 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:23.048 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:23.048 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:23.048 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:23.048 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:23.048 21:10:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:23.048 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:23.048 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:23.048 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:23.048 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.333 ms 00:29:23.048 00:29:23.048 --- 10.0.0.2 ping statistics --- 00:29:23.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:23.048 rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms 00:29:23.048 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:23.048 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:23.048 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:29:23.048 00:29:23.048 --- 10.0.0.1 ping statistics --- 00:29:23.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:23.048 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:29:23.048 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:23.048 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:29:23.048 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:23.048 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:23.048 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:23.048 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:23.048 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:23.048 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:23.048 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:23.048 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:29:23.048 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:23.048 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:23.048 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:23.048 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
nvmfpid=4122672 00:29:23.048 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:29:23.048 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 4122672 00:29:23.048 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 4122672 ']' 00:29:23.048 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:23.048 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:23.048 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:23.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:23.048 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:23.048 21:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:23.048 [2024-11-26 21:10:13.847284] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:23.048 [2024-11-26 21:10:13.848577] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:29:23.048 [2024-11-26 21:10:13.848646] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:23.048 [2024-11-26 21:10:13.936006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:23.307 [2024-11-26 21:10:13.997098] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:23.307 [2024-11-26 21:10:13.997159] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:23.307 [2024-11-26 21:10:13.997186] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:23.307 [2024-11-26 21:10:13.997199] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:23.307 [2024-11-26 21:10:13.997210] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:23.307 [2024-11-26 21:10:13.997859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:23.307 [2024-11-26 21:10:14.092278] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:23.307 [2024-11-26 21:10:14.092638] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:29:23.307 21:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:23.307 21:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:29:23.307 21:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:23.307 21:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:23.307 21:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:23.307 21:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:23.307 21:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:29:23.307 21:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:29:23.307 21:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.307 21:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:23.307 [2024-11-26 21:10:14.142545] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:23.307 21:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.307 21:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:23.307 21:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.307 21:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:23.307 
21:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.307 21:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:23.307 21:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.307 21:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:23.307 [2024-11-26 21:10:14.158760] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:23.307 21:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.307 21:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:23.307 21:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.307 21:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:23.307 21:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.307 21:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:29:23.307 21:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.307 21:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:23.308 malloc0 00:29:23.308 21:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.308 21:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:29:23.308 21:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.308 21:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:23.308 21:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.308 21:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:29:23.308 21:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:29:23.308 21:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:29:23.308 21:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:29:23.308 21:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:23.308 21:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:23.308 { 00:29:23.308 "params": { 00:29:23.308 "name": "Nvme$subsystem", 00:29:23.308 "trtype": "$TEST_TRANSPORT", 00:29:23.308 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:23.308 "adrfam": "ipv4", 00:29:23.308 "trsvcid": "$NVMF_PORT", 00:29:23.308 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:23.308 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:23.308 "hdgst": ${hdgst:-false}, 00:29:23.308 "ddgst": ${ddgst:-false} 00:29:23.308 }, 00:29:23.308 "method": "bdev_nvme_attach_controller" 00:29:23.308 } 00:29:23.308 EOF 00:29:23.308 )") 00:29:23.308 21:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:29:23.308 21:10:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:29:23.308 21:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:29:23.308 21:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:23.308 "params": { 00:29:23.308 "name": "Nvme1", 00:29:23.308 "trtype": "tcp", 00:29:23.308 "traddr": "10.0.0.2", 00:29:23.308 "adrfam": "ipv4", 00:29:23.308 "trsvcid": "4420", 00:29:23.308 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:23.308 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:23.308 "hdgst": false, 00:29:23.308 "ddgst": false 00:29:23.308 }, 00:29:23.308 "method": "bdev_nvme_attach_controller" 00:29:23.308 }' 00:29:23.308 [2024-11-26 21:10:14.236145] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:29:23.308 [2024-11-26 21:10:14.236238] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4122797 ] 00:29:23.566 [2024-11-26 21:10:14.307432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:23.566 [2024-11-26 21:10:14.371268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:23.825 Running I/O for 10 seconds... 
00:29:25.698 5290.00 IOPS, 41.33 MiB/s [2024-11-26T20:10:17.572Z] 5297.50 IOPS, 41.39 MiB/s [2024-11-26T20:10:18.947Z] 5332.33 IOPS, 41.66 MiB/s [2024-11-26T20:10:19.883Z] 5339.25 IOPS, 41.71 MiB/s [2024-11-26T20:10:20.818Z] 5336.80 IOPS, 41.69 MiB/s [2024-11-26T20:10:21.752Z] 5343.83 IOPS, 41.75 MiB/s [2024-11-26T20:10:22.687Z] 5340.43 IOPS, 41.72 MiB/s [2024-11-26T20:10:23.621Z] 5346.88 IOPS, 41.77 MiB/s [2024-11-26T20:10:24.997Z] 5346.44 IOPS, 41.77 MiB/s [2024-11-26T20:10:24.997Z] 5351.60 IOPS, 41.81 MiB/s 00:29:34.059 Latency(us) 00:29:34.059 [2024-11-26T20:10:24.997Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:34.059 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:29:34.059 Verification LBA range: start 0x0 length 0x1000 00:29:34.059 Nvme1n1 : 10.06 5330.93 41.65 0.00 0.00 23859.64 3592.34 46991.74 00:29:34.059 [2024-11-26T20:10:24.997Z] =================================================================================================================== 00:29:34.059 [2024-11-26T20:10:24.997Z] Total : 5330.93 41.65 0.00 0.00 23859.64 3592.34 46991.74 00:29:34.059 21:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=4123981 00:29:34.059 21:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:29:34.059 21:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:34.059 21:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:29:34.059 21:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:29:34.059 21:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:29:34.059 21:10:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:29:34.059 21:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:34.059 21:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:34.059 { 00:29:34.059 "params": { 00:29:34.059 "name": "Nvme$subsystem", 00:29:34.059 "trtype": "$TEST_TRANSPORT", 00:29:34.059 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:34.059 "adrfam": "ipv4", 00:29:34.059 "trsvcid": "$NVMF_PORT", 00:29:34.059 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:34.059 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:34.059 "hdgst": ${hdgst:-false}, 00:29:34.059 "ddgst": ${ddgst:-false} 00:29:34.059 }, 00:29:34.059 "method": "bdev_nvme_attach_controller" 00:29:34.059 } 00:29:34.059 EOF 00:29:34.059 )") 00:29:34.059 21:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:29:34.059 21:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
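The template being assembled above follows the pattern visible in the traced `nvmf/common.sh` lines: a `<<-EOF` heredoc with unexpanded `$subsystem` / `$TEST_TRANSPORT` placeholders is captured into a config array, then expanded and printed as the JSON that bdevperf receives on `/dev/fd/63`. A minimal standalone sketch of that expansion step is below; the variable names mirror the log's template, but this is an illustrative reconstruction, not the actual SPDK `gen_nvmf_target_json` implementation.

```shell
#!/usr/bin/env bash
# Sketch of the heredoc-based config expansion seen in the trace.
# Assumption: these values stand in for what the test environment exports.
subsystem=1
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

# The heredoc is evaluated here, so "$subsystem" etc. expand immediately,
# producing the concrete JSON shown later in the log (Nvme1, 10.0.0.2, 4420).
config=$(cat <<-EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)

# In the test this is piped through jq and handed to bdevperf via
# --json /dev/fd/63 (process substitution); here we just print it.
printf '%s\n' "$config"
```

The `${hdgst:-false}` / `${ddgst:-false}` defaults match the trace, which is why the expanded JSON printed by the test shows `"hdgst": false, "ddgst": false` when neither digest variable is set.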
00:29:34.059 [2024-11-26 21:10:24.858455] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.059 [2024-11-26 21:10:24.858500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.059 21:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:29:34.059 21:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:34.059 "params": { 00:29:34.059 "name": "Nvme1", 00:29:34.059 "trtype": "tcp", 00:29:34.059 "traddr": "10.0.0.2", 00:29:34.059 "adrfam": "ipv4", 00:29:34.059 "trsvcid": "4420", 00:29:34.059 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:34.059 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:34.059 "hdgst": false, 00:29:34.059 "ddgst": false 00:29:34.059 }, 00:29:34.059 "method": "bdev_nvme_attach_controller" 00:29:34.059 }' 00:29:34.059 [2024-11-26 21:10:24.866396] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.059 [2024-11-26 21:10:24.866423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.059 [2024-11-26 21:10:24.874385] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.059 [2024-11-26 21:10:24.874407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.059 [2024-11-26 21:10:24.882385] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.059 [2024-11-26 21:10:24.882407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.059 [2024-11-26 21:10:24.890380] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.059 [2024-11-26 21:10:24.890401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.059 [2024-11-26 21:10:24.898385] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:29:34.059 [2024-11-26 21:10:24.898408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.059 [2024-11-26 21:10:24.900126] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:29:34.059 [2024-11-26 21:10:24.900202] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4123981 ] 00:29:34.059 [2024-11-26 21:10:24.906384] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.059 [2024-11-26 21:10:24.906406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.059 [2024-11-26 21:10:24.914383] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.059 [2024-11-26 21:10:24.914405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.059 [2024-11-26 21:10:24.922383] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.059 [2024-11-26 21:10:24.922405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.059 [2024-11-26 21:10:24.930384] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.059 [2024-11-26 21:10:24.930407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.059 [2024-11-26 21:10:24.938393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.059 [2024-11-26 21:10:24.938417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.059 [2024-11-26 21:10:24.946392] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.059 [2024-11-26 21:10:24.946415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:29:34.059 [2024-11-26 21:10:24.954393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.059 [2024-11-26 21:10:24.954416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.059 [2024-11-26 21:10:24.962394] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.059 [2024-11-26 21:10:24.962418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.059 [2024-11-26 21:10:24.970393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.059 [2024-11-26 21:10:24.970417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.059 [2024-11-26 21:10:24.974204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:34.059 [2024-11-26 21:10:24.978395] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.059 [2024-11-26 21:10:24.978419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.059 [2024-11-26 21:10:24.986427] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.059 [2024-11-26 21:10:24.986465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.059 [2024-11-26 21:10:24.994404] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.059 [2024-11-26 21:10:24.994434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.318 [2024-11-26 21:10:25.002396] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.318 [2024-11-26 21:10:25.002421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.318 [2024-11-26 21:10:25.010393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.318 [2024-11-26 21:10:25.010418] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.318 [2024-11-26 21:10:25.018393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.318 [2024-11-26 21:10:25.018418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.318 [2024-11-26 21:10:25.026394] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.318 [2024-11-26 21:10:25.026418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.318 [2024-11-26 21:10:25.034392] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.318 [2024-11-26 21:10:25.034416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.318 [2024-11-26 21:10:25.037924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:34.318 [2024-11-26 21:10:25.042394] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.318 [2024-11-26 21:10:25.042418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.318 [2024-11-26 21:10:25.050393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.318 [2024-11-26 21:10:25.050417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.318 [2024-11-26 21:10:25.058426] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.318 [2024-11-26 21:10:25.058464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.318 [2024-11-26 21:10:25.066423] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.318 [2024-11-26 21:10:25.066461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.318 [2024-11-26 21:10:25.074423] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:29:34.318 [2024-11-26 21:10:25.074463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.318 [2024-11-26 21:10:25.082426] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.318 [2024-11-26 21:10:25.082466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.318 [2024-11-26 21:10:25.090436] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.318 [2024-11-26 21:10:25.090479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.318 [2024-11-26 21:10:25.098422] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.318 [2024-11-26 21:10:25.098460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.318 [2024-11-26 21:10:25.106396] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.318 [2024-11-26 21:10:25.106422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.318 [2024-11-26 21:10:25.114428] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.318 [2024-11-26 21:10:25.114466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.318 [2024-11-26 21:10:25.122424] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.318 [2024-11-26 21:10:25.122463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.318 [2024-11-26 21:10:25.130413] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.318 [2024-11-26 21:10:25.130446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.318 [2024-11-26 21:10:25.138394] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.318 [2024-11-26 
21:10:25.138417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.318 [2024-11-26 21:10:25.146393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.318 [2024-11-26 21:10:25.146416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.318 [2024-11-26 21:10:25.154411] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.318 [2024-11-26 21:10:25.154442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.318 [2024-11-26 21:10:25.162400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.318 [2024-11-26 21:10:25.162428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.318 [2024-11-26 21:10:25.170448] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.318 [2024-11-26 21:10:25.170476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.318 [2024-11-26 21:10:25.178402] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.318 [2024-11-26 21:10:25.178430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.318 [2024-11-26 21:10:25.186395] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.318 [2024-11-26 21:10:25.186420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.318 [2024-11-26 21:10:25.194393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.318 [2024-11-26 21:10:25.194418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.318 [2024-11-26 21:10:25.202393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.318 [2024-11-26 21:10:25.202417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:29:34.318 [2024-11-26 21:10:25.210393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.318 [2024-11-26 21:10:25.210418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.318 [2024-11-26 21:10:25.218400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.318 [2024-11-26 21:10:25.218427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.318 [2024-11-26 21:10:25.226399] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.318 [2024-11-26 21:10:25.226426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.318 [2024-11-26 21:10:25.234396] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.318 [2024-11-26 21:10:25.234422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.318 [2024-11-26 21:10:25.242392] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.319 [2024-11-26 21:10:25.242417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.319 [2024-11-26 21:10:25.250393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.319 [2024-11-26 21:10:25.250417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.577 [2024-11-26 21:10:25.258397] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.577 [2024-11-26 21:10:25.258423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.577 [2024-11-26 21:10:25.266394] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.577 [2024-11-26 21:10:25.266418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.577 
[2024-11-26 21:10:25.274400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.577 [2024-11-26 21:10:25.274427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.577 [2024-11-26 21:10:25.282395] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.577 [2024-11-26 21:10:25.282420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.577 [2024-11-26 21:10:25.290393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.577 [2024-11-26 21:10:25.290418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.577 [2024-11-26 21:10:25.298393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.577 [2024-11-26 21:10:25.298416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.577 [2024-11-26 21:10:25.306393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.577 [2024-11-26 21:10:25.306416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.577 [2024-11-26 21:10:25.314397] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.577 [2024-11-26 21:10:25.314423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.577 [2024-11-26 21:10:25.322396] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.577 [2024-11-26 21:10:25.322423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.577 [2024-11-26 21:10:25.330394] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.577 [2024-11-26 21:10:25.330418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.577 [2024-11-26 21:10:25.338393] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.577 [2024-11-26 21:10:25.338418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.577 [2024-11-26 21:10:25.346392] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.577 [2024-11-26 21:10:25.346416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.577 [2024-11-26 21:10:25.354393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.577 [2024-11-26 21:10:25.354417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.577 [2024-11-26 21:10:25.362394] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.577 [2024-11-26 21:10:25.362420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.577 [2024-11-26 21:10:25.370403] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.577 [2024-11-26 21:10:25.370439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.577 [2024-11-26 21:10:25.378399] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.578 [2024-11-26 21:10:25.378428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.578 Running I/O for 5 seconds... 
00:29:34.578 [2024-11-26 21:10:25.395772] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.578 [2024-11-26 21:10:25.395803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.578 [2024-11-26 21:10:25.410048] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.578 [2024-11-26 21:10:25.410093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.578 [2024-11-26 21:10:25.420550] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.578 [2024-11-26 21:10:25.420581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.578 [2024-11-26 21:10:25.433442] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.578 [2024-11-26 21:10:25.433473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.578 [2024-11-26 21:10:25.445158] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.578 [2024-11-26 21:10:25.445189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.578 [2024-11-26 21:10:25.460594] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.578 [2024-11-26 21:10:25.460627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.578 [2024-11-26 21:10:25.476154] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.578 [2024-11-26 21:10:25.476185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.578 [2024-11-26 21:10:25.486499] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.578 [2024-11-26 21:10:25.486531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.578 [2024-11-26 21:10:25.499183] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.578 [2024-11-26 21:10:25.499215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.578 [2024-11-26 21:10:25.510237] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.578 [2024-11-26 21:10:25.510267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.836 [2024-11-26 21:10:25.522782] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.836 [2024-11-26 21:10:25.522813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.836 [2024-11-26 21:10:25.534123] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.836 [2024-11-26 21:10:25.534154] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.836 [2024-11-26 21:10:25.546028] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.836 [2024-11-26 21:10:25.546059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.836 [2024-11-26 21:10:25.558107] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.836 [2024-11-26 21:10:25.558138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.836 [2024-11-26 21:10:25.569605] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.836 [2024-11-26 21:10:25.569635] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.836 [2024-11-26 21:10:25.581921] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.836 [2024-11-26 21:10:25.581950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.837 [2024-11-26 21:10:25.593646] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:29:34.837 [2024-11-26 21:10:25.593675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.837 [2024-11-26 21:10:25.605302] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.837 [2024-11-26 21:10:25.605342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.837 [2024-11-26 21:10:25.618911] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.837 [2024-11-26 21:10:25.618938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.837 [2024-11-26 21:10:25.629427] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.837 [2024-11-26 21:10:25.629459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.837 [2024-11-26 21:10:25.644508] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.837 [2024-11-26 21:10:25.644538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.837 [2024-11-26 21:10:25.655436] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.837 [2024-11-26 21:10:25.655467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.837 [2024-11-26 21:10:25.668171] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.837 [2024-11-26 21:10:25.668202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.837 [2024-11-26 21:10:25.684418] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.837 [2024-11-26 21:10:25.684448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.837 [2024-11-26 21:10:25.701113] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.837 
[2024-11-26 21:10:25.701144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.837 [2024-11-26 21:10:25.715901] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.837 [2024-11-26 21:10:25.715928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.837 [2024-11-26 21:10:25.725891] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.837 [2024-11-26 21:10:25.725919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.837 [2024-11-26 21:10:25.738593] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.837 [2024-11-26 21:10:25.738623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.837 [2024-11-26 21:10:25.750763] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.837 [2024-11-26 21:10:25.750805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:34.837 [2024-11-26 21:10:25.762213] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:34.837 [2024-11-26 21:10:25.762243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:35.095 [2024-11-26 21:10:25.776045] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:35.095 [2024-11-26 21:10:25.776076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:35.095 [2024-11-26 21:10:25.786061] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:35.095 [2024-11-26 21:10:25.786092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:35.095 [2024-11-26 21:10:25.798611] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:35.095 [2024-11-26 21:10:25.798642] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:35.095 [2024-11-26 21:10:25.809550] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:35.095 [2024-11-26 21:10:25.809581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:35.095 [2024-11-26 21:10:25.820658] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:35.095 [2024-11-26 21:10:25.820698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:35.095 [2024-11-26 21:10:25.835867] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:35.095 [2024-11-26 21:10:25.835894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:35.095 [2024-11-26 21:10:25.845639] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:35.095 [2024-11-26 21:10:25.845678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:35.095 [2024-11-26 21:10:25.858406] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:35.095 [2024-11-26 21:10:25.858436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:35.095 [2024-11-26 21:10:25.869759] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:35.095 [2024-11-26 21:10:25.869785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:35.095 [2024-11-26 21:10:25.881307] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:35.095 [2024-11-26 21:10:25.881338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:35.095 [2024-11-26 21:10:25.894698] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:35.095 [2024-11-26 21:10:25.894738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:29:35.096 [2024-11-26 21:10:25.905020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:35.096 [2024-11-26 21:10:25.905066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-message error pair (subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use, followed by nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace) repeats roughly every 10-15 ms from 21:10:25.905 through 21:10:28.021; identical repeats elided. Interleaved throughput samples kept below. ...]
00:29:35.614 10861.00 IOPS, 84.85 MiB/s [2024-11-26T20:10:26.552Z]
00:29:36.653 10833.50 IOPS, 84.64 MiB/s [2024-11-26T20:10:27.591Z]
00:29:37.171 [2024-11-26 21:10:28.034246] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.171 [2024-11-26 21:10:28.034278]
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.171 [2024-11-26 21:10:28.046281] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.171 [2024-11-26 21:10:28.046313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.171 [2024-11-26 21:10:28.058083] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.171 [2024-11-26 21:10:28.058115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.171 [2024-11-26 21:10:28.069643] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.171 [2024-11-26 21:10:28.069673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.171 [2024-11-26 21:10:28.080673] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.171 [2024-11-26 21:10:28.080714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.171 [2024-11-26 21:10:28.095560] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.171 [2024-11-26 21:10:28.095590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.171 [2024-11-26 21:10:28.105935] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.171 [2024-11-26 21:10:28.105981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.429 [2024-11-26 21:10:28.118989] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.429 [2024-11-26 21:10:28.119034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.429 [2024-11-26 21:10:28.130308] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.429 [2024-11-26 21:10:28.130339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:29:37.429 [2024-11-26 21:10:28.142776] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.429 [2024-11-26 21:10:28.142804] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.429 [2024-11-26 21:10:28.154501] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.429 [2024-11-26 21:10:28.154532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.429 [2024-11-26 21:10:28.166424] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.429 [2024-11-26 21:10:28.166454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.429 [2024-11-26 21:10:28.178104] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.430 [2024-11-26 21:10:28.178151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.430 [2024-11-26 21:10:28.189775] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.430 [2024-11-26 21:10:28.189802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.430 [2024-11-26 21:10:28.201280] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.430 [2024-11-26 21:10:28.201312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.430 [2024-11-26 21:10:28.213512] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.430 [2024-11-26 21:10:28.213542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.430 [2024-11-26 21:10:28.226700] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.430 [2024-11-26 21:10:28.226744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.430 [2024-11-26 21:10:28.237061] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.430 [2024-11-26 21:10:28.237091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.430 [2024-11-26 21:10:28.254628] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.430 [2024-11-26 21:10:28.254659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.430 [2024-11-26 21:10:28.265207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.430 [2024-11-26 21:10:28.265237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.430 [2024-11-26 21:10:28.279918] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.430 [2024-11-26 21:10:28.279947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.430 [2024-11-26 21:10:28.296636] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.430 [2024-11-26 21:10:28.296667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.430 [2024-11-26 21:10:28.307671] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.430 [2024-11-26 21:10:28.307713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.430 [2024-11-26 21:10:28.320839] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.430 [2024-11-26 21:10:28.320864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.430 [2024-11-26 21:10:28.335020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.430 [2024-11-26 21:10:28.335051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.430 [2024-11-26 21:10:28.345008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:29:37.430 [2024-11-26 21:10:28.345052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.430 [2024-11-26 21:10:28.360907] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.430 [2024-11-26 21:10:28.360935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.689 [2024-11-26 21:10:28.376405] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.689 [2024-11-26 21:10:28.376435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.689 10809.33 IOPS, 84.45 MiB/s [2024-11-26T20:10:28.627Z] [2024-11-26 21:10:28.392184] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.689 [2024-11-26 21:10:28.392217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.689 [2024-11-26 21:10:28.408470] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.689 [2024-11-26 21:10:28.408501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.689 [2024-11-26 21:10:28.419133] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.689 [2024-11-26 21:10:28.419164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.689 [2024-11-26 21:10:28.431865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.689 [2024-11-26 21:10:28.431901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.689 [2024-11-26 21:10:28.448482] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.689 [2024-11-26 21:10:28.448514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.689 [2024-11-26 21:10:28.464833] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:29:37.689 [2024-11-26 21:10:28.464860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.689 [2024-11-26 21:10:28.480832] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.689 [2024-11-26 21:10:28.480859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.689 [2024-11-26 21:10:28.496598] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.689 [2024-11-26 21:10:28.496629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.689 [2024-11-26 21:10:28.511173] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.689 [2024-11-26 21:10:28.511207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.689 [2024-11-26 21:10:28.521446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.689 [2024-11-26 21:10:28.521476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.689 [2024-11-26 21:10:28.534236] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.689 [2024-11-26 21:10:28.534266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.689 [2024-11-26 21:10:28.546769] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.689 [2024-11-26 21:10:28.546796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.689 [2024-11-26 21:10:28.558574] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.689 [2024-11-26 21:10:28.558605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.689 [2024-11-26 21:10:28.571246] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.689 
[2024-11-26 21:10:28.571277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.689 [2024-11-26 21:10:28.588092] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.689 [2024-11-26 21:10:28.588123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.689 [2024-11-26 21:10:28.604135] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.689 [2024-11-26 21:10:28.604165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.689 [2024-11-26 21:10:28.620824] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.689 [2024-11-26 21:10:28.620852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.948 [2024-11-26 21:10:28.635519] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.948 [2024-11-26 21:10:28.635550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.948 [2024-11-26 21:10:28.646563] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.948 [2024-11-26 21:10:28.646594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.948 [2024-11-26 21:10:28.659528] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.948 [2024-11-26 21:10:28.659558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.948 [2024-11-26 21:10:28.677569] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.948 [2024-11-26 21:10:28.677599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.948 [2024-11-26 21:10:28.688102] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.948 [2024-11-26 21:10:28.688132] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.948 [2024-11-26 21:10:28.703266] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.948 [2024-11-26 21:10:28.703310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.948 [2024-11-26 21:10:28.713982] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.948 [2024-11-26 21:10:28.714026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.948 [2024-11-26 21:10:28.728529] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.948 [2024-11-26 21:10:28.728559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.948 [2024-11-26 21:10:28.739485] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.948 [2024-11-26 21:10:28.739515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.948 [2024-11-26 21:10:28.755991] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.948 [2024-11-26 21:10:28.756037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.948 [2024-11-26 21:10:28.771887] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.948 [2024-11-26 21:10:28.771914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.948 [2024-11-26 21:10:28.789269] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.948 [2024-11-26 21:10:28.789299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.948 [2024-11-26 21:10:28.800007] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.948 [2024-11-26 21:10:28.800034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:29:37.948 [2024-11-26 21:10:28.816062] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.948 [2024-11-26 21:10:28.816105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.948 [2024-11-26 21:10:28.832270] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.949 [2024-11-26 21:10:28.832302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.949 [2024-11-26 21:10:28.849275] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.949 [2024-11-26 21:10:28.849306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.949 [2024-11-26 21:10:28.859630] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.949 [2024-11-26 21:10:28.859656] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.949 [2024-11-26 21:10:28.877012] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.949 [2024-11-26 21:10:28.877042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.207 [2024-11-26 21:10:28.891419] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.207 [2024-11-26 21:10:28.891450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.207 [2024-11-26 21:10:28.902043] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.207 [2024-11-26 21:10:28.902073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.207 [2024-11-26 21:10:28.915139] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.207 [2024-11-26 21:10:28.915171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.207 [2024-11-26 21:10:28.932809] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.207 [2024-11-26 21:10:28.932836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.207 [2024-11-26 21:10:28.948443] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.207 [2024-11-26 21:10:28.948473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.207 [2024-11-26 21:10:28.962979] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.207 [2024-11-26 21:10:28.963024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.207 [2024-11-26 21:10:28.973584] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.207 [2024-11-26 21:10:28.973615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.207 [2024-11-26 21:10:28.987061] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.207 [2024-11-26 21:10:28.987091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.207 [2024-11-26 21:10:29.003940] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.207 [2024-11-26 21:10:29.003981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.207 [2024-11-26 21:10:29.021263] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.207 [2024-11-26 21:10:29.021294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.207 [2024-11-26 21:10:29.031798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.207 [2024-11-26 21:10:29.031825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.207 [2024-11-26 21:10:29.049340] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:29:38.207 [2024-11-26 21:10:29.049370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.207 [2024-11-26 21:10:29.059955] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.207 [2024-11-26 21:10:29.059997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.207 [2024-11-26 21:10:29.075437] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.207 [2024-11-26 21:10:29.075469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.207 [2024-11-26 21:10:29.085857] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.207 [2024-11-26 21:10:29.085883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.207 [2024-11-26 21:10:29.100646] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.207 [2024-11-26 21:10:29.100676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.207 [2024-11-26 21:10:29.114236] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.207 [2024-11-26 21:10:29.114267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.207 [2024-11-26 21:10:29.124220] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.207 [2024-11-26 21:10:29.124250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.207 [2024-11-26 21:10:29.141875] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.207 [2024-11-26 21:10:29.141903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.469 [2024-11-26 21:10:29.151848] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.469 
[2024-11-26 21:10:29.151876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.469 [2024-11-26 21:10:29.168375] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.469 [2024-11-26 21:10:29.168405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.469 [2024-11-26 21:10:29.184322] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.469 [2024-11-26 21:10:29.184353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.469 [2024-11-26 21:10:29.201271] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.469 [2024-11-26 21:10:29.201302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.469 [2024-11-26 21:10:29.211671] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.469 [2024-11-26 21:10:29.211713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.469 [2024-11-26 21:10:29.228777] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.469 [2024-11-26 21:10:29.228805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.469 [2024-11-26 21:10:29.239941] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.469 [2024-11-26 21:10:29.239984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.469 [2024-11-26 21:10:29.257316] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.469 [2024-11-26 21:10:29.257347] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.469 [2024-11-26 21:10:29.268453] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.469 [2024-11-26 21:10:29.268483] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.469 [2024-11-26 21:10:29.282755] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.469 [2024-11-26 21:10:29.282782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.469 [2024-11-26 21:10:29.292893] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.469 [2024-11-26 21:10:29.292920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.469 [2024-11-26 21:10:29.307965] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.469 [2024-11-26 21:10:29.308006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.469 [2024-11-26 21:10:29.324946] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.469 [2024-11-26 21:10:29.324987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.469 [2024-11-26 21:10:29.335525] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.470 [2024-11-26 21:10:29.335555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.470 [2024-11-26 21:10:29.352999] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.470 [2024-11-26 21:10:29.353040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.470 [2024-11-26 21:10:29.363576] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.470 [2024-11-26 21:10:29.363607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.470 [2024-11-26 21:10:29.380898] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.470 [2024-11-26 21:10:29.380927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:29:38.470 10786.75 IOPS, 84.27 MiB/s [2024-11-26T20:10:29.408Z] [2024-11-26 21:10:29.396377] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.470 [2024-11-26 21:10:29.396408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.732 [2024-11-26 21:10:29.412901] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.732 [2024-11-26 21:10:29.412928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.732 [2024-11-26 21:10:29.425989] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.732 [2024-11-26 21:10:29.426034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.732 [2024-11-26 21:10:29.436914] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.732 [2024-11-26 21:10:29.436941] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.732 [2024-11-26 21:10:29.450203] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.732 [2024-11-26 21:10:29.450233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.732 [2024-11-26 21:10:29.462223] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.732 [2024-11-26 21:10:29.462254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.732 [2024-11-26 21:10:29.473580] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.732 [2024-11-26 21:10:29.473609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.732 [2024-11-26 21:10:29.485771] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.732 [2024-11-26 21:10:29.485798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:29:38.732 [2024-11-26 21:10:29.497443] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.732 [2024-11-26 21:10:29.497474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.732 [2024-11-26 21:10:29.508954] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.732 [2024-11-26 21:10:29.508999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.732 [2024-11-26 21:10:29.523330] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.732 [2024-11-26 21:10:29.523360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.732 [2024-11-26 21:10:29.534054] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.732 [2024-11-26 21:10:29.534084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.732 [2024-11-26 21:10:29.546949] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.732 [2024-11-26 21:10:29.546977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.732 [2024-11-26 21:10:29.564023] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.732 [2024-11-26 21:10:29.564055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.732 [2024-11-26 21:10:29.581022] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.732 [2024-11-26 21:10:29.581053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.732 [2024-11-26 21:10:29.596592] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.732 [2024-11-26 21:10:29.596623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.732 [2024-11-26 21:10:29.613263] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.732 [2024-11-26 21:10:29.613293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.732 [2024-11-26 21:10:29.627409] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.732 [2024-11-26 21:10:29.627439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.732 [2024-11-26 21:10:29.638329] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.732 [2024-11-26 21:10:29.638359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.732 [2024-11-26 21:10:29.651357] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.732 [2024-11-26 21:10:29.651388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.732 [2024-11-26 21:10:29.668223] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.732 [2024-11-26 21:10:29.668256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.991 [2024-11-26 21:10:29.685459] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.991 [2024-11-26 21:10:29.685489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.991 [2024-11-26 21:10:29.696144] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.991 [2024-11-26 21:10:29.696174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.991 [2024-11-26 21:10:29.711372] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.991 [2024-11-26 21:10:29.711403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.991 [2024-11-26 21:10:29.721806] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:29:38.991 [2024-11-26 21:10:29.721833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.991 [2024-11-26 21:10:29.734988] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.991 [2024-11-26 21:10:29.735033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.991 [2024-11-26 21:10:29.751074] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.991 [2024-11-26 21:10:29.751114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.991 [2024-11-26 21:10:29.760837] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.991 [2024-11-26 21:10:29.760865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.991 [2024-11-26 21:10:29.776830] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.991 [2024-11-26 21:10:29.776857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.991 [2024-11-26 21:10:29.789782] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.991 [2024-11-26 21:10:29.789810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.991 [2024-11-26 21:10:29.800110] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.991 [2024-11-26 21:10:29.800158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.991 [2024-11-26 21:10:29.813097] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.991 [2024-11-26 21:10:29.813128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.991 [2024-11-26 21:10:29.826945] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.991 
[2024-11-26 21:10:29.826973] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.991 [2024-11-26 21:10:29.837057] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.991 [2024-11-26 21:10:29.837089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.991 [2024-11-26 21:10:29.851639] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.991 [2024-11-26 21:10:29.851669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.991 [2024-11-26 21:10:29.861352] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.991 [2024-11-26 21:10:29.861382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.991 [2024-11-26 21:10:29.876165] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.991 [2024-11-26 21:10:29.876195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.991 [2024-11-26 21:10:29.892778] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.991 [2024-11-26 21:10:29.892819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.991 [2024-11-26 21:10:29.908438] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.991 [2024-11-26 21:10:29.908468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.991 [2024-11-26 21:10:29.919256] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.991 [2024-11-26 21:10:29.919286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:39.250 [2024-11-26 21:10:29.932496] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:39.250 [2024-11-26 21:10:29.932527] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:39.250 [2024-11-26 21:10:29.949081] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:39.250 [2024-11-26 21:10:29.949112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:39.250 [2024-11-26 21:10:29.959524] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:39.250 [2024-11-26 21:10:29.959554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:39.250 [2024-11-26 21:10:29.975486] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:39.250 [2024-11-26 21:10:29.975517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:39.250 [2024-11-26 21:10:29.985804] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:39.250 [2024-11-26 21:10:29.985847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:39.250 [2024-11-26 21:10:29.998744] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:39.250 [2024-11-26 21:10:29.998779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:39.250 [2024-11-26 21:10:30.012856] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:39.250 [2024-11-26 21:10:30.012891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:39.250 [2024-11-26 21:10:30.027061] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:39.250 [2024-11-26 21:10:30.027107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:39.250 [2024-11-26 21:10:30.038416] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:39.250 [2024-11-26 21:10:30.038446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:29:39.250 [2024-11-26 21:10:30.051519] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:39.251 [2024-11-26 21:10:30.051550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:39.251 [2024-11-26 21:10:30.067878] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:39.251 [2024-11-26 21:10:30.067907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:39.251 [2024-11-26 21:10:30.078266] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:39.251 [2024-11-26 21:10:30.078297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:39.251 [2024-11-26 21:10:30.091138] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:39.251 [2024-11-26 21:10:30.091169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:39.251 [2024-11-26 21:10:30.102818] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:39.251 [2024-11-26 21:10:30.102845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:39.251 [2024-11-26 21:10:30.115072] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:39.251 [2024-11-26 21:10:30.115102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:39.251 [2024-11-26 21:10:30.127417] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:39.251 [2024-11-26 21:10:30.127447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:39.251 [2024-11-26 21:10:30.145029] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:39.251 [2024-11-26 21:10:30.145074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:39.251 [2024-11-26 21:10:30.160180] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:39.251 [2024-11-26 21:10:30.160210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:39.251 [2024-11-26 21:10:30.170711] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:39.251 [2024-11-26 21:10:30.170754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:39.251 [2024-11-26 21:10:30.183316] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:39.251 [2024-11-26 21:10:30.183346] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:39.509 [2024-11-26 21:10:30.193615] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:39.509 [2024-11-26 21:10:30.193645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:39.509 [2024-11-26 21:10:30.205859] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:39.509 [2024-11-26 21:10:30.205887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:39.509 [2024-11-26 21:10:30.216446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:39.509 [2024-11-26 21:10:30.216476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:39.509 [2024-11-26 21:10:30.231221] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:39.509 [2024-11-26 21:10:30.231251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:39.509 [2024-11-26 21:10:30.241821] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:39.509 [2024-11-26 21:10:30.241864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:39.509 [2024-11-26 21:10:30.254325] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:29:39.509 [2024-11-26 21:10:30.254355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:39.509 [2024-11-26 21:10:30.266110] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:39.509 [2024-11-26 21:10:30.266141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:39.509 [2024-11-26 21:10:30.277839] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:39.509 [2024-11-26 21:10:30.277866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:39.509 [2024-11-26 21:10:30.289781] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:39.509 [2024-11-26 21:10:30.289807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:39.509 [2024-11-26 21:10:30.301262] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:39.509 [2024-11-26 21:10:30.301292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:39.509 [2024-11-26 21:10:30.312170] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:39.509 [2024-11-26 21:10:30.312201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:39.509 [2024-11-26 21:10:30.326515] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:39.509 [2024-11-26 21:10:30.326545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:39.509 [2024-11-26 21:10:30.336951] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:39.509 [2024-11-26 21:10:30.336997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:39.509 [2024-11-26 21:10:30.350040] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:39.509 
[2024-11-26 21:10:30.350071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:39.509 [2024-11-26 21:10:30.362141] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:39.509 [2024-11-26 21:10:30.362172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:39.509 [2024-11-26 21:10:30.373713] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:39.509 [2024-11-26 21:10:30.373758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:39.509 [2024-11-26 21:10:30.386565] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:39.509 [2024-11-26 21:10:30.386596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:39.509 10775.20 IOPS, 84.18 MiB/s [2024-11-26T20:10:30.447Z] [2024-11-26 21:10:30.398271] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:39.510 [2024-11-26 21:10:30.398304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:39.510 [2024-11-26 21:10:30.405841] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:39.510 [2024-11-26 21:10:30.405869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:39.510 00:29:39.510 Latency(us) 00:29:39.510 [2024-11-26T20:10:30.448Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:39.510 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:29:39.510 Nvme1n1 : 5.01 10774.32 84.17 0.00 0.00 11863.51 3021.94 19418.07 00:29:39.510 [2024-11-26T20:10:30.448Z] =================================================================================================================== 00:29:39.510 [2024-11-26T20:10:30.448Z] Total : 10774.32 84.17 0.00 0.00 11863.51 3021.94 19418.07 00:29:39.510 [2024-11-26 21:10:30.410390] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:39.510 [2024-11-26 21:10:30.410416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:39.510 [2024-11-26 21:10:30.418437] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:39.510 [2024-11-26 21:10:30.418466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:39.510 [2024-11-26 21:10:30.426396] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:39.510 [2024-11-26 21:10:30.426423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:39.510 [2024-11-26 21:10:30.434421] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:39.510 [2024-11-26 21:10:30.434465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:39.510 [2024-11-26 21:10:30.442443] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:39.510 [2024-11-26 21:10:30.442496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:39.768 [2024-11-26 21:10:30.450446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:39.768 [2024-11-26 21:10:30.450499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:39.768 [2024-11-26 21:10:30.458448] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:39.768 [2024-11-26 21:10:30.458501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:39.768 [2024-11-26 21:10:30.466434] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:39.768 [2024-11-26 21:10:30.466479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:39.768 [2024-11-26 21:10:30.474430] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:29:39.768 [2024-11-26 21:10:30.474477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:39.768 [2024-11-26 21:10:30.482447] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:39.768 [2024-11-26 21:10:30.482493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:39.768 [2024-11-26 21:10:30.490444] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:39.768 [2024-11-26 21:10:30.490489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:39.768 [2024-11-26 21:10:30.498442] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:39.768 [2024-11-26 21:10:30.498490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:39.768 [2024-11-26 21:10:30.506446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:39.768 [2024-11-26 21:10:30.506502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:39.768 [2024-11-26 21:10:30.514450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:39.768 [2024-11-26 21:10:30.514504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:39.768 [2024-11-26 21:10:30.522440] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:39.768 [2024-11-26 21:10:30.522486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:39.768 [2024-11-26 21:10:30.530438] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:39.768 [2024-11-26 21:10:30.530485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:39.768 [2024-11-26 21:10:30.538440] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:39.769 
[2024-11-26 21:10:30.538483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:39.769 [2024-11-26 21:10:30.546441] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:39.769 [2024-11-26 21:10:30.546490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:39.769 [2024-11-26 21:10:30.554419] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:39.769 [2024-11-26 21:10:30.554458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:39.769 [2024-11-26 21:10:30.562394] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:39.769 [2024-11-26 21:10:30.562418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:39.769 [2024-11-26 21:10:30.570396] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:39.769 [2024-11-26 21:10:30.570422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:39.769 [2024-11-26 21:10:30.578394] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:39.769 [2024-11-26 21:10:30.578419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:39.769 [2024-11-26 21:10:30.586393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:39.769 [2024-11-26 21:10:30.586418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:39.769 [2024-11-26 21:10:30.594427] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:39.769 [2024-11-26 21:10:30.594471] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:39.769 [2024-11-26 21:10:30.602433] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:39.769 [2024-11-26 21:10:30.602500] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:39.769 [2024-11-26 21:10:30.610434] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:39.769 [2024-11-26 21:10:30.610482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:39.769 [2024-11-26 21:10:30.618394] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:39.769 [2024-11-26 21:10:30.618419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:39.769 [2024-11-26 21:10:30.626393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:39.769 [2024-11-26 21:10:30.626418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:39.769 [2024-11-26 21:10:30.634394] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:39.769 [2024-11-26 21:10:30.634419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:39.769 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (4123981) - No such process 00:29:39.769 21:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 4123981 00:29:39.769 21:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:39.769 21:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.769 21:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:39.769 21:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:39.769 21:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 
1000000 -n 1000000 00:29:39.769 21:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.769 21:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:39.769 delay0 00:29:39.769 21:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:39.769 21:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:29:39.769 21:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.769 21:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:39.769 21:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:39.769 21:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:29:40.027 [2024-11-26 21:10:30.765852] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:29:48.233 Initializing NVMe Controllers 00:29:48.233 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:48.233 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:48.233 Initialization complete. Launching workers. 
00:29:48.233 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 234, failed: 23340 00:29:48.233 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 23434, failed to submit 140 00:29:48.233 success 23352, unsuccessful 82, failed 0 00:29:48.233 21:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:29:48.233 21:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:29:48.233 21:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:48.233 21:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:29:48.233 21:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:48.233 21:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:29:48.233 21:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:48.233 21:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:48.233 rmmod nvme_tcp 00:29:48.233 rmmod nvme_fabrics 00:29:48.233 rmmod nvme_keyring 00:29:48.233 21:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:48.233 21:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:29:48.233 21:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:29:48.233 21:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 4122672 ']' 00:29:48.233 21:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 4122672 00:29:48.233 21:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
common/autotest_common.sh@954 -- # '[' -z 4122672 ']' 00:29:48.233 21:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 4122672 00:29:48.233 21:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:29:48.233 21:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:48.233 21:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4122672 00:29:48.233 21:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:48.233 21:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:48.233 21:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4122672' 00:29:48.233 killing process with pid 4122672 00:29:48.233 21:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 4122672 00:29:48.233 21:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 4122672 00:29:48.233 21:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:48.233 21:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:48.233 21:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:48.233 21:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:29:48.233 21:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:29:48.233 21:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:29:48.233 21:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:29:48.233 21:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:48.233 21:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:48.233 21:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:48.233 21:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:48.233 21:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:49.613 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:49.613 00:29:49.613 real 0m28.688s 00:29:49.613 user 0m40.622s 00:29:49.613 sys 0m10.382s 00:29:49.613 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:49.613 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:49.613 ************************************ 00:29:49.613 END TEST nvmf_zcopy 00:29:49.613 ************************************ 00:29:49.613 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:29:49.613 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:49.613 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:49.613 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:49.613 
************************************ 00:29:49.613 START TEST nvmf_nmic 00:29:49.613 ************************************ 00:29:49.613 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:29:49.613 * Looking for test storage... 00:29:49.613 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:49.613 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:49.613 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:29:49.613 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:49.613 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:49.613 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:49.613 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:49.613 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:49.613 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:29:49.613 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:29:49.613 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:29:49.613 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:29:49.613 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:29:49.613 21:10:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:29:49.613 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:29:49.613 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:49.613 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:29:49.613 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:29:49.613 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:49.613 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:49.613 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:29:49.613 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:29:49.613 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:49.613 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:29:49.613 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:29:49.613 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:29:49.613 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:29:49.613 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:49.613 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:29:49.613 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:29:49.613 21:10:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:49.613 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:49.613 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:29:49.613 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:49.613 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:49.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:49.613 --rc genhtml_branch_coverage=1 00:29:49.613 --rc genhtml_function_coverage=1 00:29:49.613 --rc genhtml_legend=1 00:29:49.613 --rc geninfo_all_blocks=1 00:29:49.613 --rc geninfo_unexecuted_blocks=1 00:29:49.613 00:29:49.613 ' 00:29:49.613 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:49.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:49.613 --rc genhtml_branch_coverage=1 00:29:49.613 --rc genhtml_function_coverage=1 00:29:49.613 --rc genhtml_legend=1 00:29:49.613 --rc geninfo_all_blocks=1 00:29:49.613 --rc geninfo_unexecuted_blocks=1 00:29:49.613 00:29:49.613 ' 00:29:49.613 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:49.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:49.613 --rc genhtml_branch_coverage=1 00:29:49.613 --rc genhtml_function_coverage=1 00:29:49.613 --rc genhtml_legend=1 00:29:49.613 --rc geninfo_all_blocks=1 00:29:49.613 --rc geninfo_unexecuted_blocks=1 00:29:49.613 00:29:49.613 ' 00:29:49.613 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:49.613 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:49.613 --rc genhtml_branch_coverage=1 00:29:49.613 --rc genhtml_function_coverage=1 00:29:49.613 --rc genhtml_legend=1 00:29:49.613 --rc geninfo_all_blocks=1 00:29:49.613 --rc geninfo_unexecuted_blocks=1 00:29:49.613 00:29:49.613 ' 00:29:49.613 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:49.614 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:29:49.614 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:49.614 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:49.614 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:49.614 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:49.614 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:49.614 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:49.614 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:49.614 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:49.614 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:49.614 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:49.614 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:49.614 21:10:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:49.614 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:49.614 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:49.614 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:49.614 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:49.614 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:49.614 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:29:49.614 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:49.614 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:49.614 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:49.614 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.614 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.614 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.614 21:10:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:29:49.614 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.614 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:29:49.614 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:49.614 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:49.614 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:49.614 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:49.614 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:49.614 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:49.614 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:49.614 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:49.614 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 
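The trace above shows `nvmf/common.sh` assembling the target's command line: `build_nvmf_app_args` appends flags to a bash array only when the matching test toggle is set (here, the `'[' 1 -eq 1 ']'` branch adds `--interrupt-mode`). A minimal sketch of that pattern, with illustrative variable values standing in for the real test configuration:

```shell
# Sketch of conditional argument-array assembly as traced above.
# SPDK_TEST_NVME_INTERRUPT and the initial values are assumptions for
# illustration; the real values come from the autotest configuration.
SPDK_TEST_NVME_INTERRUPT=1
NVMF_APP_SHM_ID=0

NVMF_APP=(nvmf_tgt)
# Always pass the shared-memory id and the 0xFFFF trace mask, as in the log.
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
# Append --interrupt-mode only when the interrupt-mode toggle is enabled.
if [ "$SPDK_TEST_NVME_INTERRUPT" -eq 1 ]; then
    NVMF_APP+=(--interrupt-mode)
fi

printf '%s\n' "${NVMF_APP[*]}"
```

Keeping the arguments in an array (rather than a flat string) preserves word boundaries when the command is finally executed.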
00:29:49.614 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:49.614 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:49.614 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:49.614 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:29:49.614 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:49.614 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:49.614 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:49.614 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:49.614 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:49.614 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:49.614 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:49.614 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:49.614 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:49.614 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:49.614 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:29:49.614 21:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:52.148 21:10:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:52.148 21:10:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:52.148 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:52.148 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:52.148 21:10:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:52.148 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:52.148 21:10:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:52.148 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:52.148 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:52.149 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:52.149 21:10:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:52.149 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:52.149 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:52.149 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:52.149 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:52.149 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:52.149 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:29:52.149 00:29:52.149 --- 10.0.0.2 ping statistics --- 00:29:52.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:52.149 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:29:52.149 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:52.149 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:52.149 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:29:52.149 00:29:52.149 --- 10.0.0.1 ping statistics --- 00:29:52.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:52.149 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:29:52.149 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:52.149 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:29:52.149 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:52.149 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:52.149 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:52.149 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:52.149 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:52.149 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:52.149 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:52.149 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:29:52.149 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:52.149 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:52.149 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:52.149 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=4127470 
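The two `ping -c 1` runs above (host to namespace, namespace to host) gate the rest of the test on basic connectivity, each reporting `0% packet loss`. A hedged sketch of parsing that check: the `parse_loss` helper and the canned statistics block are illustrative, assuming the iputils `ping` output format seen in the log, so the sketch runs without the test namespaces.

```shell
# Extract the packet-loss percentage from iputils-style ping statistics.
# Field 6 of the summary line is e.g. "0%"; strip the percent sign.
parse_loss() {
    awk '/packet loss/ { sub(/%/, "", $6); print $6 }'
}

# Canned sample mirroring the statistics printed in the log above.
sample='--- 10.0.0.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms'

loss="$(printf '%s\n' "$sample" | parse_loss)"
if [ "$loss" -eq 0 ]; then
    echo "connectivity OK"
fi
```

In a live run the input would come from `ping -c 1 10.0.0.2` (or the `ip netns exec` variant) instead of the canned sample.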
00:29:52.149 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:29:52.149 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 4127470 00:29:52.149 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 4127470 ']' 00:29:52.149 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:52.149 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:52.149 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:52.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:52.149 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:52.149 21:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:52.149 [2024-11-26 21:10:42.854281] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:52.149 [2024-11-26 21:10:42.855485] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:29:52.149 [2024-11-26 21:10:42.855549] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:52.149 [2024-11-26 21:10:42.933399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:52.149 [2024-11-26 21:10:42.994157] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:52.149 [2024-11-26 21:10:42.994243] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:52.149 [2024-11-26 21:10:42.994257] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:52.149 [2024-11-26 21:10:42.994268] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:52.149 [2024-11-26 21:10:42.994277] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:52.149 [2024-11-26 21:10:42.996030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:52.149 [2024-11-26 21:10:42.996057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:52.149 [2024-11-26 21:10:42.996179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:52.149 [2024-11-26 21:10:42.996182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:52.407 [2024-11-26 21:10:43.095315] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:52.407 [2024-11-26 21:10:43.095499] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:52.407 [2024-11-26 21:10:43.095798] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:29:52.407 [2024-11-26 21:10:43.096466] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:52.407 [2024-11-26 21:10:43.096754] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:29:52.407 21:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:52.407 21:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:29:52.407 21:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:52.407 21:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:52.407 21:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:52.407 21:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:52.407 21:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:52.407 21:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.407 21:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:52.407 [2024-11-26 21:10:43.152881] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:52.407 21:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.407 21:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:52.407 21:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:29:52.407 21:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:52.407 Malloc0 00:29:52.407 21:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.407 21:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:29:52.407 21:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.407 21:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:52.407 21:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.407 21:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:52.407 21:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.407 21:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:52.408 21:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.408 21:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:52.408 21:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.408 21:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:52.408 [2024-11-26 21:10:43.229085] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:52.408 21:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.408 21:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:29:52.408 test case1: single bdev can't be used in multiple subsystems 00:29:52.408 21:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:29:52.408 21:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.408 21:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:52.408 21:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.408 21:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:52.408 21:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.408 21:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:52.408 21:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.408 21:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:29:52.408 21:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:29:52.408 21:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.408 21:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:52.408 [2024-11-26 21:10:43.252799] bdev.c:8323:bdev_open: *ERROR*: bdev Malloc0 
already claimed: type exclusive_write by module NVMe-oF Target 00:29:52.408 [2024-11-26 21:10:43.252831] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:29:52.408 [2024-11-26 21:10:43.252845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.408 request: 00:29:52.408 { 00:29:52.408 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:29:52.408 "namespace": { 00:29:52.408 "bdev_name": "Malloc0", 00:29:52.408 "no_auto_visible": false 00:29:52.408 }, 00:29:52.408 "method": "nvmf_subsystem_add_ns", 00:29:52.408 "req_id": 1 00:29:52.408 } 00:29:52.408 Got JSON-RPC error response 00:29:52.408 response: 00:29:52.408 { 00:29:52.408 "code": -32602, 00:29:52.408 "message": "Invalid parameters" 00:29:52.408 } 00:29:52.408 21:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:52.408 21:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:29:52.408 21:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:29:52.408 21:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:29:52.408 Adding namespace failed - expected result. 
00:29:52.408 21:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:29:52.408 test case2: host connect to nvmf target in multiple paths 00:29:52.408 21:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:52.408 21:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.408 21:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:52.408 [2024-11-26 21:10:43.260870] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:52.408 21:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.408 21:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:29:52.667 21:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:29:52.925 21:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:29:52.925 21:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:29:52.925 21:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:29:52.925 21:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:29:52.925 21:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:29:55.451 21:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:29:55.451 21:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:29:55.451 21:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:29:55.451 21:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:29:55.451 21:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:29:55.451 21:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:29:55.451 21:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:29:55.451 [global] 00:29:55.451 thread=1 00:29:55.451 invalidate=1 00:29:55.451 rw=write 00:29:55.451 time_based=1 00:29:55.451 runtime=1 00:29:55.451 ioengine=libaio 00:29:55.451 direct=1 00:29:55.451 bs=4096 00:29:55.451 iodepth=1 00:29:55.451 norandommap=0 00:29:55.451 numjobs=1 00:29:55.451 00:29:55.451 verify_dump=1 00:29:55.451 verify_backlog=512 00:29:55.451 verify_state_save=0 00:29:55.451 do_verify=1 00:29:55.451 verify=crc32c-intel 00:29:55.451 [job0] 00:29:55.451 filename=/dev/nvme0n1 00:29:55.451 Could not set queue depth (nvme0n1) 00:29:55.451 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:55.451 fio-3.35 00:29:55.451 Starting 1 thread 00:29:56.385 00:29:56.385 job0: (groupid=0, jobs=1): err= 0: pid=4127884: Tue Nov 26 
21:10:47 2024 00:29:56.385 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:29:56.385 slat (nsec): min=6536, max=47839, avg=12791.40, stdev=4917.84 00:29:56.385 clat (usec): min=259, max=502, avg=310.21, stdev=21.22 00:29:56.385 lat (usec): min=268, max=532, avg=323.00, stdev=24.25 00:29:56.385 clat percentiles (usec): 00:29:56.385 | 1.00th=[ 269], 5.00th=[ 277], 10.00th=[ 281], 20.00th=[ 289], 00:29:56.385 | 30.00th=[ 302], 40.00th=[ 310], 50.00th=[ 314], 60.00th=[ 318], 00:29:56.385 | 70.00th=[ 322], 80.00th=[ 326], 90.00th=[ 330], 95.00th=[ 338], 00:29:56.385 | 99.00th=[ 363], 99.50th=[ 371], 99.90th=[ 502], 99.95th=[ 502], 00:29:56.385 | 99.99th=[ 502] 00:29:56.385 write: IOPS=2037, BW=8152KiB/s (8347kB/s)(8160KiB/1001msec); 0 zone resets 00:29:56.385 slat (nsec): min=8468, max=67589, avg=19387.41, stdev=5641.99 00:29:56.385 clat (usec): min=168, max=636, avg=219.44, stdev=32.73 00:29:56.385 lat (usec): min=179, max=657, avg=238.82, stdev=34.11 00:29:56.385 clat percentiles (usec): 00:29:56.385 | 1.00th=[ 176], 5.00th=[ 196], 10.00th=[ 198], 20.00th=[ 200], 00:29:56.385 | 30.00th=[ 202], 40.00th=[ 204], 50.00th=[ 208], 60.00th=[ 212], 00:29:56.385 | 70.00th=[ 223], 80.00th=[ 239], 90.00th=[ 247], 95.00th=[ 269], 00:29:56.385 | 99.00th=[ 363], 99.50th=[ 367], 99.90th=[ 412], 99.95th=[ 453], 00:29:56.385 | 99.99th=[ 635] 00:29:56.385 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=1 00:29:56.385 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:29:56.385 lat (usec) : 250=51.85%, 500=48.07%, 750=0.08% 00:29:56.385 cpu : usr=4.80%, sys=7.90%, ctx=3576, majf=0, minf=1 00:29:56.385 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:56.385 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.385 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.385 issued rwts: total=1536,2040,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:29:56.385 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:56.385 00:29:56.385 Run status group 0 (all jobs): 00:29:56.385 READ: bw=6138KiB/s (6285kB/s), 6138KiB/s-6138KiB/s (6285kB/s-6285kB/s), io=6144KiB (6291kB), run=1001-1001msec 00:29:56.385 WRITE: bw=8152KiB/s (8347kB/s), 8152KiB/s-8152KiB/s (8347kB/s-8347kB/s), io=8160KiB (8356kB), run=1001-1001msec 00:29:56.385 00:29:56.385 Disk stats (read/write): 00:29:56.385 nvme0n1: ios=1586/1596, merge=0/0, ticks=492/339, in_queue=831, util=91.58% 00:29:56.385 21:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:29:56.385 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:29:56.385 21:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:29:56.385 21:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:29:56.385 21:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:29:56.385 21:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:56.643 21:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:29:56.643 21:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:56.643 21:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:29:56.643 21:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:29:56.643 21:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:29:56.643 21:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:29:56.643 21:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:29:56.643 21:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:56.643 21:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:29:56.643 21:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:56.643 21:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:56.643 rmmod nvme_tcp 00:29:56.643 rmmod nvme_fabrics 00:29:56.643 rmmod nvme_keyring 00:29:56.643 21:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:56.643 21:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:29:56.643 21:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:29:56.643 21:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 4127470 ']' 00:29:56.643 21:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 4127470 00:29:56.643 21:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 4127470 ']' 00:29:56.643 21:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 4127470 00:29:56.643 21:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:29:56.643 21:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:56.643 21:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4127470 00:29:56.643 21:10:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:56.643 21:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:56.643 21:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4127470' 00:29:56.643 killing process with pid 4127470 00:29:56.643 21:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 4127470 00:29:56.643 21:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 4127470 00:29:56.902 21:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:56.902 21:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:56.902 21:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:56.902 21:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:29:56.902 21:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:29:56.902 21:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:29:56.902 21:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:56.902 21:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:56.902 21:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:56.902 21:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:56.902 21:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:56.902 21:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:59.437 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:59.437 00:29:59.437 real 0m9.456s 00:29:59.437 user 0m17.443s 00:29:59.437 sys 0m3.597s 00:29:59.437 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:59.437 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:59.437 ************************************ 00:29:59.437 END TEST nvmf_nmic 00:29:59.437 ************************************ 00:29:59.437 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:29:59.437 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:59.437 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:59.437 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:59.437 ************************************ 00:29:59.437 START TEST nvmf_fio_target 00:29:59.437 ************************************ 00:29:59.437 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:29:59.437 * Looking for test storage... 
00:29:59.437 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:59.437 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:59.437 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:29:59.437 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:59.437 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:59.437 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:59.437 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:59.437 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:59.437 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:29:59.437 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:29:59.437 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:29:59.437 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:29:59.437 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:29:59.437 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:29:59.437 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:29:59.437 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:29:59.437 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:29:59.437 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:29:59.437 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:59.437 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:59.437 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:29:59.437 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:29:59.437 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:59.437 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:29:59.437 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:29:59.437 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:29:59.437 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:29:59.437 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:59.437 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:29:59.437 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:29:59.437 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:59.437 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:59.437 
21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:29:59.437 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:59.437 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:59.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:59.437 --rc genhtml_branch_coverage=1 00:29:59.437 --rc genhtml_function_coverage=1 00:29:59.437 --rc genhtml_legend=1 00:29:59.437 --rc geninfo_all_blocks=1 00:29:59.437 --rc geninfo_unexecuted_blocks=1 00:29:59.437 00:29:59.437 ' 00:29:59.437 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:59.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:59.437 --rc genhtml_branch_coverage=1 00:29:59.437 --rc genhtml_function_coverage=1 00:29:59.437 --rc genhtml_legend=1 00:29:59.437 --rc geninfo_all_blocks=1 00:29:59.437 --rc geninfo_unexecuted_blocks=1 00:29:59.437 00:29:59.437 ' 00:29:59.437 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:59.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:59.437 --rc genhtml_branch_coverage=1 00:29:59.437 --rc genhtml_function_coverage=1 00:29:59.437 --rc genhtml_legend=1 00:29:59.437 --rc geninfo_all_blocks=1 00:29:59.437 --rc geninfo_unexecuted_blocks=1 00:29:59.437 00:29:59.438 ' 00:29:59.438 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:59.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:59.438 --rc genhtml_branch_coverage=1 00:29:59.438 --rc genhtml_function_coverage=1 00:29:59.438 --rc genhtml_legend=1 00:29:59.438 --rc geninfo_all_blocks=1 
00:29:59.438 --rc geninfo_unexecuted_blocks=1 00:29:59.438 00:29:59.438 ' 00:29:59.438 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:59.438 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:29:59.438 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:59.438 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:59.438 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:59.438 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:59.438 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:59.438 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:59.438 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:59.438 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:59.438 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:59.438 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:59.438 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:59.438 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:59.438 
21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:59.438 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:59.438 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:59.438 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:59.438 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:59.438 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:29:59.438 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:59.438 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:59.438 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:59.438 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.438 21:10:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.438 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.438 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:29:59.438 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.438 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:29:59.438 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:59.438 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:59.438 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:59.438 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:59.438 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:59.438 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:59.438 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:59.438 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:59.438 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:59.438 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:59.438 
21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:59.438 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:59.438 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:59.438 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:29:59.438 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:59.438 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:59.438 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:59.438 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:59.438 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:59.438 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:59.438 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:59.438 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:59.438 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:59.438 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:59.438 21:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:29:59.438 21:10:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:01.341 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:01.341 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:30:01.341 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:01.341 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:01.341 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:01.341 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:01.341 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:01.341 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:30:01.341 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:01.341 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:30:01.341 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:30:01.341 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:30:01.341 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:30:01.341 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:30:01.341 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:30:01.341 21:10:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:01.341 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:01.341 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:01.341 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:01.341 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:01.341 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:01.341 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:01.341 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:01.341 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:01.341 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:01.341 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:01.341 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:01.341 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:01.341 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:01.341 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:01.341 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:01.341 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:01.341 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:01.341 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:01.341 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:01.341 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:01.341 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:01.341 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:01.341 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:01.341 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:01.341 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:01.341 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:01.341 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:01.341 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:01.341 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:01.341 
21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:01.341 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:01.341 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:01.341 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:01.341 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:01.341 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:01.341 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:01.341 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:01.341 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:01.341 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:01.342 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:01.342 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:01.342 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:01.342 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:01.342 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:01.342 Found net 
devices under 0000:0a:00.0: cvl_0_0 00:30:01.342 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:01.342 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:01.342 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:01.342 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:01.342 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:01.342 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:01.342 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:01.342 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:01.342 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:01.342 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:01.342 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:01.342 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:01.342 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:30:01.342 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:01.342 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:01.342 21:10:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:01.342 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:01.342 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:01.342 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:01.342 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:01.342 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:01.342 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:01.342 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:01.342 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:01.342 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:01.342 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:01.342 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:01.342 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:01.342 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:01.342 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:30:01.342 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:01.342 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:01.342 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:01.342 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:01.342 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:01.342 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:01.342 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:01.342 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:01.342 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:01.342 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:01.342 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:30:01.342 00:30:01.342 --- 10.0.0.2 ping statistics --- 00:30:01.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:01.342 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:30:01.342 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:01.342 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:01.342 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:30:01.342 00:30:01.342 --- 10.0.0.1 ping statistics --- 00:30:01.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:01.342 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:30:01.342 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:01.342 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:30:01.342 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:01.342 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:01.342 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:01.342 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:01.342 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:01.342 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:01.342 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:01.342 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:30:01.342 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:01.342 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:01.342 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:01.342 21:10:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=4130074 00:30:01.342 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:30:01.342 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 4130074 00:30:01.342 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 4130074 ']' 00:30:01.342 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:01.342 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:01.342 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:01.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:01.342 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:01.342 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:01.602 [2024-11-26 21:10:52.324486] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:01.602 [2024-11-26 21:10:52.325649] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:30:01.602 [2024-11-26 21:10:52.325764] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:01.602 [2024-11-26 21:10:52.399000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:01.602 [2024-11-26 21:10:52.456600] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:01.602 [2024-11-26 21:10:52.456659] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:01.602 [2024-11-26 21:10:52.456681] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:01.602 [2024-11-26 21:10:52.456715] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:01.602 [2024-11-26 21:10:52.456726] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:01.602 [2024-11-26 21:10:52.458298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:01.602 [2024-11-26 21:10:52.458347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:01.602 [2024-11-26 21:10:52.458404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:01.602 [2024-11-26 21:10:52.458407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:01.861 [2024-11-26 21:10:52.546959] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:01.861 [2024-11-26 21:10:52.547130] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:01.861 [2024-11-26 21:10:52.547427] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:30:01.861 [2024-11-26 21:10:52.548048] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:01.861 [2024-11-26 21:10:52.548319] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:30:01.861 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:01.861 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:30:01.861 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:01.861 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:01.861 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:01.861 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:01.861 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:02.120 [2024-11-26 21:10:52.847125] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:02.120 21:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:02.378 21:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:30:02.378 21:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 
00:30:02.637 21:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:30:02.637 21:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:02.895 21:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:30:02.895 21:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:03.154 21:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:30:03.154 21:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:30:03.719 21:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:03.719 21:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:30:03.719 21:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:04.286 21:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:30:04.286 21:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:04.286 21:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:30:04.286 21:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:30:04.852 21:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:30:04.852 21:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:30:04.852 21:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:05.418 21:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:30:05.418 21:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:30:05.418 21:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:05.983 [2024-11-26 21:10:56.623295] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:05.983 21:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:30:05.983 21:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:30:06.549 21:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:30:06.549 21:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:30:06.549 21:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:30:06.549 21:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:30:06.549 21:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:30:06.549 21:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:30:06.549 21:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:30:09.075 21:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:30:09.075 21:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:30:09.075 21:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:30:09.075 21:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:30:09.075 21:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:30:09.075 21:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1212 -- # return 0 00:30:09.075 21:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:30:09.075 [global] 00:30:09.075 thread=1 00:30:09.075 invalidate=1 00:30:09.075 rw=write 00:30:09.075 time_based=1 00:30:09.075 runtime=1 00:30:09.075 ioengine=libaio 00:30:09.075 direct=1 00:30:09.075 bs=4096 00:30:09.075 iodepth=1 00:30:09.075 norandommap=0 00:30:09.075 numjobs=1 00:30:09.075 00:30:09.075 verify_dump=1 00:30:09.075 verify_backlog=512 00:30:09.075 verify_state_save=0 00:30:09.075 do_verify=1 00:30:09.075 verify=crc32c-intel 00:30:09.075 [job0] 00:30:09.075 filename=/dev/nvme0n1 00:30:09.075 [job1] 00:30:09.076 filename=/dev/nvme0n2 00:30:09.076 [job2] 00:30:09.076 filename=/dev/nvme0n3 00:30:09.076 [job3] 00:30:09.076 filename=/dev/nvme0n4 00:30:09.076 Could not set queue depth (nvme0n1) 00:30:09.076 Could not set queue depth (nvme0n2) 00:30:09.076 Could not set queue depth (nvme0n3) 00:30:09.076 Could not set queue depth (nvme0n4) 00:30:09.076 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:09.076 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:09.076 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:09.076 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:09.076 fio-3.35 00:30:09.076 Starting 4 threads 00:30:10.011 00:30:10.011 job0: (groupid=0, jobs=1): err= 0: pid=4131026: Tue Nov 26 21:11:00 2024 00:30:10.011 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:30:10.011 slat (nsec): min=5898, max=51813, avg=10473.49, stdev=5684.10 00:30:10.011 clat (usec): min=256, max=41069, avg=370.10, stdev=1041.77 00:30:10.011 lat (usec): min=263, 
max=41078, avg=380.58, stdev=1041.70 00:30:10.011 clat percentiles (usec): 00:30:10.011 | 1.00th=[ 265], 5.00th=[ 273], 10.00th=[ 281], 20.00th=[ 289], 00:30:10.011 | 30.00th=[ 302], 40.00th=[ 310], 50.00th=[ 314], 60.00th=[ 322], 00:30:10.011 | 70.00th=[ 330], 80.00th=[ 437], 90.00th=[ 453], 95.00th=[ 494], 00:30:10.011 | 99.00th=[ 586], 99.50th=[ 619], 99.90th=[ 635], 99.95th=[41157], 00:30:10.011 | 99.99th=[41157] 00:30:10.011 write: IOPS=1768, BW=7073KiB/s (7243kB/s)(7080KiB/1001msec); 0 zone resets 00:30:10.011 slat (nsec): min=7461, max=78201, avg=11492.94, stdev=5541.31 00:30:10.011 clat (usec): min=154, max=1344, avg=216.95, stdev=63.51 00:30:10.011 lat (usec): min=162, max=1354, avg=228.45, stdev=65.13 00:30:10.011 clat percentiles (usec): 00:30:10.011 | 1.00th=[ 163], 5.00th=[ 167], 10.00th=[ 172], 20.00th=[ 178], 00:30:10.011 | 30.00th=[ 182], 40.00th=[ 188], 50.00th=[ 198], 60.00th=[ 215], 00:30:10.011 | 70.00th=[ 231], 80.00th=[ 245], 90.00th=[ 277], 95.00th=[ 318], 00:30:10.011 | 99.00th=[ 408], 99.50th=[ 465], 99.90th=[ 1074], 99.95th=[ 1352], 00:30:10.011 | 99.99th=[ 1352] 00:30:10.011 bw ( KiB/s): min= 8192, max= 8192, per=46.05%, avg=8192.00, stdev= 0.00, samples=1 00:30:10.011 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:30:10.011 lat (usec) : 250=44.07%, 500=53.75%, 750=2.06%, 1000=0.03% 00:30:10.011 lat (msec) : 2=0.06%, 50=0.03% 00:30:10.011 cpu : usr=3.00%, sys=4.60%, ctx=3308, majf=0, minf=2 00:30:10.011 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:10.011 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.011 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.011 issued rwts: total=1536,1770,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:10.011 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:10.011 job1: (groupid=0, jobs=1): err= 0: pid=4131027: Tue Nov 26 21:11:00 2024 00:30:10.011 read: IOPS=20, BW=81.1KiB/s 
(83.0kB/s)(84.0KiB/1036msec) 00:30:10.011 slat (nsec): min=12867, max=35364, avg=18949.57, stdev=8650.89 00:30:10.011 clat (usec): min=40863, max=42370, avg=41233.90, stdev=487.62 00:30:10.011 lat (usec): min=40898, max=42390, avg=41252.85, stdev=489.95 00:30:10.011 clat percentiles (usec): 00:30:10.011 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:30:10.011 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:30:10.011 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:30:10.011 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:30:10.011 | 99.99th=[42206] 00:30:10.011 write: IOPS=494, BW=1977KiB/s (2024kB/s)(2048KiB/1036msec); 0 zone resets 00:30:10.011 slat (usec): min=8, max=16569, avg=50.88, stdev=734.19 00:30:10.011 clat (usec): min=186, max=1084, avg=276.48, stdev=101.75 00:30:10.011 lat (usec): min=199, max=16965, avg=327.36, stdev=746.15 00:30:10.011 clat percentiles (usec): 00:30:10.011 | 1.00th=[ 200], 5.00th=[ 215], 10.00th=[ 223], 20.00th=[ 229], 00:30:10.011 | 30.00th=[ 237], 40.00th=[ 247], 50.00th=[ 255], 60.00th=[ 265], 00:30:10.011 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 334], 95.00th=[ 367], 00:30:10.011 | 99.00th=[ 816], 99.50th=[ 996], 99.90th=[ 1090], 99.95th=[ 1090], 00:30:10.011 | 99.99th=[ 1090] 00:30:10.011 bw ( KiB/s): min= 4096, max= 4096, per=23.03%, avg=4096.00, stdev= 0.00, samples=1 00:30:10.011 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:30:10.011 lat (usec) : 250=42.96%, 500=50.84%, 750=0.56%, 1000=1.31% 00:30:10.011 lat (msec) : 2=0.38%, 50=3.94% 00:30:10.011 cpu : usr=0.58%, sys=0.77%, ctx=538, majf=0, minf=1 00:30:10.011 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:10.011 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.011 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.011 issued rwts: total=21,512,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:30:10.011 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:10.011 job2: (groupid=0, jobs=1): err= 0: pid=4131028: Tue Nov 26 21:11:00 2024 00:30:10.011 read: IOPS=514, BW=2057KiB/s (2106kB/s)(2104KiB/1023msec) 00:30:10.011 slat (nsec): min=7488, max=45551, avg=15744.38, stdev=5415.46 00:30:10.011 clat (usec): min=271, max=41157, avg=1413.90, stdev=6551.42 00:30:10.011 lat (usec): min=279, max=41166, avg=1429.64, stdev=6551.07 00:30:10.011 clat percentiles (usec): 00:30:10.011 | 1.00th=[ 281], 5.00th=[ 293], 10.00th=[ 302], 20.00th=[ 310], 00:30:10.011 | 30.00th=[ 318], 40.00th=[ 326], 50.00th=[ 330], 60.00th=[ 338], 00:30:10.011 | 70.00th=[ 343], 80.00th=[ 351], 90.00th=[ 363], 95.00th=[ 388], 00:30:10.011 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:30:10.011 | 99.99th=[41157] 00:30:10.011 write: IOPS=1000, BW=4004KiB/s (4100kB/s)(4096KiB/1023msec); 0 zone resets 00:30:10.011 slat (nsec): min=8108, max=57567, avg=15662.61, stdev=6700.09 00:30:10.011 clat (usec): min=184, max=3779, avg=241.86, stdev=126.54 00:30:10.011 lat (usec): min=192, max=3793, avg=257.52, stdev=127.16 00:30:10.011 clat percentiles (usec): 00:30:10.011 | 1.00th=[ 190], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 208], 00:30:10.011 | 30.00th=[ 215], 40.00th=[ 223], 50.00th=[ 231], 60.00th=[ 237], 00:30:10.011 | 70.00th=[ 243], 80.00th=[ 255], 90.00th=[ 281], 95.00th=[ 310], 00:30:10.011 | 99.00th=[ 388], 99.50th=[ 490], 99.90th=[ 1401], 99.95th=[ 3785], 00:30:10.012 | 99.99th=[ 3785] 00:30:10.012 bw ( KiB/s): min= 8192, max= 8192, per=46.05%, avg=8192.00, stdev= 0.00, samples=1 00:30:10.012 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:30:10.012 lat (usec) : 250=50.58%, 500=48.13%, 750=0.13% 00:30:10.012 lat (msec) : 2=0.19%, 4=0.06%, 50=0.90% 00:30:10.012 cpu : usr=1.76%, sys=2.94%, ctx=1551, majf=0, minf=1 00:30:10.012 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:30:10.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.012 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.012 issued rwts: total=526,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:10.012 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:10.012 job3: (groupid=0, jobs=1): err= 0: pid=4131029: Tue Nov 26 21:11:00 2024 00:30:10.012 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:30:10.012 slat (nsec): min=6129, max=34698, avg=7338.42, stdev=2460.97 00:30:10.012 clat (usec): min=255, max=42198, avg=682.10, stdev=4005.77 00:30:10.012 lat (usec): min=261, max=42206, avg=689.44, stdev=4006.89 00:30:10.012 clat percentiles (usec): 00:30:10.012 | 1.00th=[ 260], 5.00th=[ 265], 10.00th=[ 265], 20.00th=[ 269], 00:30:10.012 | 30.00th=[ 269], 40.00th=[ 273], 50.00th=[ 273], 60.00th=[ 277], 00:30:10.012 | 70.00th=[ 281], 80.00th=[ 285], 90.00th=[ 314], 95.00th=[ 347], 00:30:10.012 | 99.00th=[ 742], 99.50th=[41157], 99.90th=[41157], 99.95th=[42206], 00:30:10.012 | 99.99th=[42206] 00:30:10.012 write: IOPS=1299, BW=5199KiB/s (5324kB/s)(5204KiB/1001msec); 0 zone resets 00:30:10.012 slat (nsec): min=7678, max=35574, avg=10670.24, stdev=4142.39 00:30:10.012 clat (usec): min=175, max=1517, avg=210.48, stdev=56.81 00:30:10.012 lat (usec): min=185, max=1553, avg=221.15, stdev=58.02 00:30:10.012 clat percentiles (usec): 00:30:10.012 | 1.00th=[ 182], 5.00th=[ 184], 10.00th=[ 186], 20.00th=[ 190], 00:30:10.012 | 30.00th=[ 194], 40.00th=[ 198], 50.00th=[ 202], 60.00th=[ 206], 00:30:10.012 | 70.00th=[ 215], 80.00th=[ 225], 90.00th=[ 239], 95.00th=[ 247], 00:30:10.012 | 99.00th=[ 302], 99.50th=[ 371], 99.90th=[ 1106], 99.95th=[ 1516], 00:30:10.012 | 99.99th=[ 1516] 00:30:10.012 bw ( KiB/s): min= 4096, max= 4096, per=23.03%, avg=4096.00, stdev= 0.00, samples=1 00:30:10.012 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:30:10.012 lat (usec) : 250=53.72%, 500=45.08%, 750=0.60%, 
1000=0.09% 00:30:10.012 lat (msec) : 2=0.09%, 50=0.43% 00:30:10.012 cpu : usr=1.80%, sys=2.50%, ctx=2326, majf=0, minf=1 00:30:10.012 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:10.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.012 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.012 issued rwts: total=1024,1301,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:10.012 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:10.012 00:30:10.012 Run status group 0 (all jobs): 00:30:10.012 READ: bw=11.7MiB/s (12.3MB/s), 81.1KiB/s-6138KiB/s (83.0kB/s-6285kB/s), io=12.1MiB (12.7MB), run=1001-1036msec 00:30:10.012 WRITE: bw=17.4MiB/s (18.2MB/s), 1977KiB/s-7073KiB/s (2024kB/s-7243kB/s), io=18.0MiB (18.9MB), run=1001-1036msec 00:30:10.012 00:30:10.012 Disk stats (read/write): 00:30:10.012 nvme0n1: ios=1310/1536, merge=0/0, ticks=632/300, in_queue=932, util=85.17% 00:30:10.012 nvme0n2: ios=70/512, merge=0/0, ticks=781/140, in_queue=921, util=91.55% 00:30:10.012 nvme0n3: ios=578/1024, merge=0/0, ticks=777/235, in_queue=1012, util=93.08% 00:30:10.012 nvme0n4: ios=748/1024, merge=0/0, ticks=1092/206, in_queue=1298, util=94.29% 00:30:10.012 21:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:30:10.012 [global] 00:30:10.012 thread=1 00:30:10.012 invalidate=1 00:30:10.012 rw=randwrite 00:30:10.012 time_based=1 00:30:10.012 runtime=1 00:30:10.012 ioengine=libaio 00:30:10.012 direct=1 00:30:10.012 bs=4096 00:30:10.012 iodepth=1 00:30:10.012 norandommap=0 00:30:10.012 numjobs=1 00:30:10.012 00:30:10.012 verify_dump=1 00:30:10.012 verify_backlog=512 00:30:10.012 verify_state_save=0 00:30:10.012 do_verify=1 00:30:10.012 verify=crc32c-intel 00:30:10.012 [job0] 00:30:10.012 filename=/dev/nvme0n1 00:30:10.012 [job1] 00:30:10.012 
filename=/dev/nvme0n2 00:30:10.012 [job2] 00:30:10.012 filename=/dev/nvme0n3 00:30:10.012 [job3] 00:30:10.012 filename=/dev/nvme0n4 00:30:10.270 Could not set queue depth (nvme0n1) 00:30:10.270 Could not set queue depth (nvme0n2) 00:30:10.270 Could not set queue depth (nvme0n3) 00:30:10.270 Could not set queue depth (nvme0n4) 00:30:10.270 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:10.270 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:10.270 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:10.270 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:10.270 fio-3.35 00:30:10.270 Starting 4 threads 00:30:11.642 00:30:11.642 job0: (groupid=0, jobs=1): err= 0: pid=4131372: Tue Nov 26 21:11:02 2024 00:30:11.642 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:30:11.642 slat (nsec): min=5712, max=66098, avg=13431.83, stdev=7562.81 00:30:11.642 clat (usec): min=244, max=1695, avg=317.37, stdev=76.30 00:30:11.642 lat (usec): min=252, max=1702, avg=330.81, stdev=78.90 00:30:11.642 clat percentiles (usec): 00:30:11.642 | 1.00th=[ 249], 5.00th=[ 255], 10.00th=[ 265], 20.00th=[ 277], 00:30:11.642 | 30.00th=[ 285], 40.00th=[ 293], 50.00th=[ 297], 60.00th=[ 306], 00:30:11.642 | 70.00th=[ 310], 80.00th=[ 326], 90.00th=[ 445], 95.00th=[ 482], 00:30:11.642 | 99.00th=[ 529], 99.50th=[ 537], 99.90th=[ 1020], 99.95th=[ 1696], 00:30:11.642 | 99.99th=[ 1696] 00:30:11.642 write: IOPS=1916, BW=7664KiB/s (7848kB/s)(7672KiB/1001msec); 0 zone resets 00:30:11.642 slat (nsec): min=7310, max=71217, avg=16663.71, stdev=7487.65 00:30:11.642 clat (usec): min=163, max=949, avg=231.60, stdev=38.67 00:30:11.642 lat (usec): min=171, max=959, avg=248.26, stdev=39.98 00:30:11.642 clat percentiles (usec): 00:30:11.642 | 1.00th=[ 174], 
5.00th=[ 188], 10.00th=[ 198], 20.00th=[ 208], 00:30:11.642 | 30.00th=[ 215], 40.00th=[ 219], 50.00th=[ 223], 60.00th=[ 229], 00:30:11.642 | 70.00th=[ 237], 80.00th=[ 249], 90.00th=[ 285], 95.00th=[ 297], 00:30:11.642 | 99.00th=[ 347], 99.50th=[ 400], 99.90th=[ 449], 99.95th=[ 947], 00:30:11.642 | 99.99th=[ 947] 00:30:11.642 bw ( KiB/s): min= 8192, max= 8192, per=47.38%, avg=8192.00, stdev= 0.00, samples=1 00:30:11.642 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:30:11.642 lat (usec) : 250=45.48%, 500=53.24%, 750=1.16%, 1000=0.06% 00:30:11.642 lat (msec) : 2=0.06% 00:30:11.642 cpu : usr=4.30%, sys=6.70%, ctx=3457, majf=0, minf=1 00:30:11.642 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:11.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.642 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.642 issued rwts: total=1536,1918,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.642 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:11.642 job1: (groupid=0, jobs=1): err= 0: pid=4131373: Tue Nov 26 21:11:02 2024 00:30:11.642 read: IOPS=36, BW=146KiB/s (150kB/s)(148KiB/1011msec) 00:30:11.642 slat (nsec): min=7450, max=39166, avg=19501.16, stdev=8406.08 00:30:11.642 clat (usec): min=344, max=41366, avg=23452.61, stdev=20334.36 00:30:11.642 lat (usec): min=365, max=41398, avg=23472.11, stdev=20334.43 00:30:11.642 clat percentiles (usec): 00:30:11.642 | 1.00th=[ 347], 5.00th=[ 359], 10.00th=[ 392], 20.00th=[ 441], 00:30:11.642 | 30.00th=[ 494], 40.00th=[ 578], 50.00th=[40633], 60.00th=[41157], 00:30:11.642 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:30:11.642 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:30:11.642 | 99.99th=[41157] 00:30:11.642 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:30:11.642 slat (nsec): min=7310, max=69600, avg=15419.23, stdev=8715.31 
00:30:11.642 clat (usec): min=181, max=508, avg=256.40, stdev=46.88 00:30:11.642 lat (usec): min=189, max=528, avg=271.82, stdev=49.61 00:30:11.642 clat percentiles (usec): 00:30:11.642 | 1.00th=[ 196], 5.00th=[ 208], 10.00th=[ 219], 20.00th=[ 229], 00:30:11.642 | 30.00th=[ 237], 40.00th=[ 241], 50.00th=[ 247], 60.00th=[ 253], 00:30:11.642 | 70.00th=[ 260], 80.00th=[ 269], 90.00th=[ 297], 95.00th=[ 367], 00:30:11.642 | 99.00th=[ 449], 99.50th=[ 490], 99.90th=[ 510], 99.95th=[ 510], 00:30:11.642 | 99.99th=[ 510] 00:30:11.642 bw ( KiB/s): min= 4096, max= 4096, per=23.69%, avg=4096.00, stdev= 0.00, samples=1 00:30:11.642 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:30:11.642 lat (usec) : 250=51.91%, 500=43.35%, 750=0.73%, 1000=0.18% 00:30:11.642 lat (msec) : 50=3.83% 00:30:11.642 cpu : usr=0.69%, sys=0.99%, ctx=549, majf=0, minf=1 00:30:11.642 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:11.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.642 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.642 issued rwts: total=37,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.642 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:11.642 job2: (groupid=0, jobs=1): err= 0: pid=4131374: Tue Nov 26 21:11:02 2024 00:30:11.642 read: IOPS=1060, BW=4242KiB/s (4344kB/s)(4276KiB/1008msec) 00:30:11.642 slat (nsec): min=4811, max=74249, avg=18731.71, stdev=11905.03 00:30:11.642 clat (usec): min=287, max=41945, avg=542.82, stdev=2800.14 00:30:11.642 lat (usec): min=292, max=41959, avg=561.55, stdev=2799.98 00:30:11.642 clat percentiles (usec): 00:30:11.642 | 1.00th=[ 297], 5.00th=[ 302], 10.00th=[ 306], 20.00th=[ 314], 00:30:11.642 | 30.00th=[ 322], 40.00th=[ 330], 50.00th=[ 338], 60.00th=[ 355], 00:30:11.642 | 70.00th=[ 379], 80.00th=[ 388], 90.00th=[ 404], 95.00th=[ 433], 00:30:11.642 | 99.00th=[ 502], 99.50th=[ 840], 99.90th=[41681], 99.95th=[42206], 
00:30:11.642 | 99.99th=[42206] 00:30:11.642 write: IOPS=1523, BW=6095KiB/s (6242kB/s)(6144KiB/1008msec); 0 zone resets 00:30:11.642 slat (nsec): min=6633, max=60478, avg=15795.03, stdev=7727.32 00:30:11.642 clat (usec): min=190, max=4132, avg=241.55, stdev=105.82 00:30:11.642 lat (usec): min=199, max=4156, avg=257.34, stdev=106.55 00:30:11.642 clat percentiles (usec): 00:30:11.642 | 1.00th=[ 200], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 215], 00:30:11.642 | 30.00th=[ 221], 40.00th=[ 225], 50.00th=[ 231], 60.00th=[ 237], 00:30:11.643 | 70.00th=[ 245], 80.00th=[ 255], 90.00th=[ 269], 95.00th=[ 302], 00:30:11.643 | 99.00th=[ 404], 99.50th=[ 420], 99.90th=[ 486], 99.95th=[ 4146], 00:30:11.643 | 99.99th=[ 4146] 00:30:11.643 bw ( KiB/s): min= 4096, max= 8192, per=35.54%, avg=6144.00, stdev=2896.31, samples=2 00:30:11.643 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:30:11.643 lat (usec) : 250=44.72%, 500=54.82%, 750=0.19%, 1000=0.04% 00:30:11.643 lat (msec) : 10=0.04%, 50=0.19% 00:30:11.643 cpu : usr=1.79%, sys=5.06%, ctx=2606, majf=0, minf=1 00:30:11.643 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:11.643 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.643 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.643 issued rwts: total=1069,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.643 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:11.643 job3: (groupid=0, jobs=1): err= 0: pid=4131375: Tue Nov 26 21:11:02 2024 00:30:11.643 read: IOPS=21, BW=84.9KiB/s (87.0kB/s)(88.0KiB/1036msec) 00:30:11.643 slat (nsec): min=8023, max=34936, avg=14899.86, stdev=4988.99 00:30:11.643 clat (usec): min=40903, max=41404, avg=40998.99, stdev=96.57 00:30:11.643 lat (usec): min=40938, max=41412, avg=41013.89, stdev=94.22 00:30:11.643 clat percentiles (usec): 00:30:11.643 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:30:11.643 | 
30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:30:11.643 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:30:11.643 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:30:11.643 | 99.99th=[41157] 00:30:11.643 write: IOPS=494, BW=1977KiB/s (2024kB/s)(2048KiB/1036msec); 0 zone resets 00:30:11.643 slat (nsec): min=7932, max=63476, avg=16628.90, stdev=8313.41 00:30:11.643 clat (usec): min=191, max=397, avg=238.08, stdev=28.36 00:30:11.643 lat (usec): min=200, max=420, avg=254.71, stdev=30.27 00:30:11.643 clat percentiles (usec): 00:30:11.643 | 1.00th=[ 194], 5.00th=[ 202], 10.00th=[ 206], 20.00th=[ 215], 00:30:11.643 | 30.00th=[ 223], 40.00th=[ 229], 50.00th=[ 233], 60.00th=[ 239], 00:30:11.643 | 70.00th=[ 245], 80.00th=[ 258], 90.00th=[ 281], 95.00th=[ 293], 00:30:11.643 | 99.00th=[ 314], 99.50th=[ 330], 99.90th=[ 396], 99.95th=[ 396], 00:30:11.643 | 99.99th=[ 396] 00:30:11.643 bw ( KiB/s): min= 4096, max= 4096, per=23.69%, avg=4096.00, stdev= 0.00, samples=1 00:30:11.643 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:30:11.643 lat (usec) : 250=69.29%, 500=26.59% 00:30:11.643 lat (msec) : 50=4.12% 00:30:11.643 cpu : usr=0.58%, sys=0.97%, ctx=535, majf=0, minf=1 00:30:11.643 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:11.643 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.643 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.643 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.643 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:11.643 00:30:11.643 Run status group 0 (all jobs): 00:30:11.643 READ: bw=10.0MiB/s (10.5MB/s), 84.9KiB/s-6138KiB/s (87.0kB/s-6285kB/s), io=10.4MiB (10.9MB), run=1001-1036msec 00:30:11.643 WRITE: bw=16.9MiB/s (17.7MB/s), 1977KiB/s-7664KiB/s (2024kB/s-7848kB/s), io=17.5MiB (18.3MB), run=1001-1036msec 00:30:11.643 
00:30:11.643 Disk stats (read/write): 00:30:11.643 nvme0n1: ios=1406/1536, merge=0/0, ticks=1252/330, in_queue=1582, util=85.97% 00:30:11.643 nvme0n2: ios=83/512, merge=0/0, ticks=770/127, in_queue=897, util=91.28% 00:30:11.643 nvme0n3: ios=1088/1536, merge=0/0, ticks=1300/344, in_queue=1644, util=93.56% 00:30:11.643 nvme0n4: ios=42/512, merge=0/0, ticks=1610/106, in_queue=1716, util=94.14% 00:30:11.643 21:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:30:11.643 [global] 00:30:11.643 thread=1 00:30:11.643 invalidate=1 00:30:11.643 rw=write 00:30:11.643 time_based=1 00:30:11.643 runtime=1 00:30:11.643 ioengine=libaio 00:30:11.643 direct=1 00:30:11.643 bs=4096 00:30:11.643 iodepth=128 00:30:11.643 norandommap=0 00:30:11.643 numjobs=1 00:30:11.643 00:30:11.643 verify_dump=1 00:30:11.643 verify_backlog=512 00:30:11.643 verify_state_save=0 00:30:11.643 do_verify=1 00:30:11.643 verify=crc32c-intel 00:30:11.643 [job0] 00:30:11.643 filename=/dev/nvme0n1 00:30:11.643 [job1] 00:30:11.643 filename=/dev/nvme0n2 00:30:11.643 [job2] 00:30:11.643 filename=/dev/nvme0n3 00:30:11.643 [job3] 00:30:11.643 filename=/dev/nvme0n4 00:30:11.643 Could not set queue depth (nvme0n1) 00:30:11.643 Could not set queue depth (nvme0n2) 00:30:11.643 Could not set queue depth (nvme0n3) 00:30:11.643 Could not set queue depth (nvme0n4) 00:30:11.901 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:11.901 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:11.901 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:11.901 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:11.901 fio-3.35 00:30:11.901 Starting 4 
threads 00:30:13.275 00:30:13.275 job0: (groupid=0, jobs=1): err= 0: pid=4131607: Tue Nov 26 21:11:03 2024 00:30:13.275 read: IOPS=1713, BW=6855KiB/s (7019kB/s)(6944KiB/1013msec) 00:30:13.275 slat (usec): min=3, max=30554, avg=324.18, stdev=2143.46 00:30:13.275 clat (usec): min=1462, max=93767, avg=42295.24, stdev=26217.19 00:30:13.275 lat (usec): min=9134, max=96729, avg=42619.41, stdev=26405.42 00:30:13.275 clat percentiles (usec): 00:30:13.275 | 1.00th=[10028], 5.00th=[11207], 10.00th=[11338], 20.00th=[11863], 00:30:13.275 | 30.00th=[13698], 40.00th=[22938], 50.00th=[48497], 60.00th=[53740], 00:30:13.275 | 70.00th=[63701], 80.00th=[67634], 90.00th=[76022], 95.00th=[82314], 00:30:13.275 | 99.00th=[89654], 99.50th=[91751], 99.90th=[93848], 99.95th=[93848], 00:30:13.275 | 99.99th=[93848] 00:30:13.275 write: IOPS=2021, BW=8087KiB/s (8281kB/s)(8192KiB/1013msec); 0 zone resets 00:30:13.275 slat (usec): min=4, max=57614, avg=207.03, stdev=1721.58 00:30:13.275 clat (usec): min=8443, max=76305, avg=20382.79, stdev=11732.56 00:30:13.275 lat (msec): min=8, max=109, avg=20.59, stdev=11.97 00:30:13.275 clat percentiles (usec): 00:30:13.275 | 1.00th=[ 9241], 5.00th=[11076], 10.00th=[11600], 20.00th=[11863], 00:30:13.275 | 30.00th=[11994], 40.00th=[12125], 50.00th=[22152], 60.00th=[23462], 00:30:13.275 | 70.00th=[23987], 80.00th=[24249], 90.00th=[28443], 95.00th=[36439], 00:30:13.275 | 99.00th=[73925], 99.50th=[74974], 99.90th=[76022], 99.95th=[76022], 00:30:13.275 | 99.99th=[76022] 00:30:13.275 bw ( KiB/s): min= 8192, max= 8192, per=12.46%, avg=8192.00, stdev= 0.00, samples=2 00:30:13.275 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:30:13.275 lat (msec) : 2=0.03%, 10=1.16%, 20=42.39%, 50=33.83%, 100=22.60% 00:30:13.275 cpu : usr=3.26%, sys=4.35%, ctx=221, majf=0, minf=1 00:30:13.275 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.3% 00:30:13.275 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:13.275 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:13.275 issued rwts: total=1736,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:13.275 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:13.275 job1: (groupid=0, jobs=1): err= 0: pid=4131610: Tue Nov 26 21:11:03 2024 00:30:13.275 read: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec) 00:30:13.275 slat (usec): min=2, max=14067, avg=88.21, stdev=570.99 00:30:13.275 clat (usec): min=6046, max=25151, avg=12300.00, stdev=2393.86 00:30:13.275 lat (usec): min=6083, max=25158, avg=12388.21, stdev=2416.50 00:30:13.275 clat percentiles (usec): 00:30:13.275 | 1.00th=[ 8225], 5.00th=[ 8979], 10.00th=[ 9503], 20.00th=[10290], 00:30:13.275 | 30.00th=[11076], 40.00th=[11863], 50.00th=[12387], 60.00th=[12649], 00:30:13.275 | 70.00th=[13173], 80.00th=[13566], 90.00th=[14615], 95.00th=[15533], 00:30:13.275 | 99.00th=[21365], 99.50th=[23987], 99.90th=[25035], 99.95th=[25035], 00:30:13.275 | 99.99th=[25035] 00:30:13.275 write: IOPS=5411, BW=21.1MiB/s (22.2MB/s)(21.2MiB/1005msec); 0 zone resets 00:30:13.275 slat (usec): min=2, max=12094, avg=86.53, stdev=556.33 00:30:13.275 clat (usec): min=1000, max=22135, avg=11884.59, stdev=1709.77 00:30:13.275 lat (usec): min=1019, max=23619, avg=11971.12, stdev=1786.08 00:30:13.275 clat percentiles (usec): 00:30:13.275 | 1.00th=[ 7046], 5.00th=[ 9634], 10.00th=[10159], 20.00th=[10814], 00:30:13.275 | 30.00th=[11076], 40.00th=[11338], 50.00th=[11600], 60.00th=[11994], 00:30:13.275 | 70.00th=[12649], 80.00th=[13304], 90.00th=[13829], 95.00th=[14746], 00:30:13.275 | 99.00th=[16450], 99.50th=[17433], 99.90th=[19792], 99.95th=[21103], 00:30:13.275 | 99.99th=[22152] 00:30:13.275 bw ( KiB/s): min=20808, max=21688, per=32.31%, avg=21248.00, stdev=622.25, samples=2 00:30:13.275 iops : min= 5202, max= 5422, avg=5312.00, stdev=155.56, samples=2 00:30:13.275 lat (msec) : 2=0.09%, 10=11.20%, 20=87.62%, 50=1.09% 00:30:13.275 cpu : usr=7.56%, sys=10.85%, ctx=329, 
majf=0, minf=1 00:30:13.275 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:30:13.275 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:13.275 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:13.275 issued rwts: total=5120,5439,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:13.275 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:13.275 job2: (groupid=0, jobs=1): err= 0: pid=4131611: Tue Nov 26 21:11:03 2024 00:30:13.275 read: IOPS=3666, BW=14.3MiB/s (15.0MB/s)(14.5MiB/1012msec) 00:30:13.275 slat (usec): min=3, max=13889, avg=130.07, stdev=867.97 00:30:13.275 clat (usec): min=1703, max=42706, avg=16321.01, stdev=4841.34 00:30:13.275 lat (usec): min=4935, max=42714, avg=16451.08, stdev=4887.73 00:30:13.275 clat percentiles (usec): 00:30:13.275 | 1.00th=[ 6128], 5.00th=[10683], 10.00th=[13173], 20.00th=[13566], 00:30:13.275 | 30.00th=[14222], 40.00th=[15008], 50.00th=[15664], 60.00th=[15926], 00:30:13.275 | 70.00th=[16319], 80.00th=[18220], 90.00th=[21365], 95.00th=[26084], 00:30:13.275 | 99.00th=[36963], 99.50th=[39584], 99.90th=[42730], 99.95th=[42730], 00:30:13.275 | 99.99th=[42730] 00:30:13.275 write: IOPS=4047, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1012msec); 0 zone resets 00:30:13.275 slat (usec): min=3, max=12790, avg=117.21, stdev=670.05 00:30:13.275 clat (usec): min=4462, max=42719, avg=16477.63, stdev=5093.37 00:30:13.275 lat (usec): min=4480, max=42727, avg=16594.84, stdev=5134.66 00:30:13.275 clat percentiles (usec): 00:30:13.275 | 1.00th=[ 7504], 5.00th=[ 8586], 10.00th=[10945], 20.00th=[11994], 00:30:13.275 | 30.00th=[12911], 40.00th=[14091], 50.00th=[15664], 60.00th=[16712], 00:30:13.275 | 70.00th=[19530], 80.00th=[22152], 90.00th=[23987], 95.00th=[24249], 00:30:13.275 | 99.00th=[27657], 99.50th=[28967], 99.90th=[30802], 99.95th=[38011], 00:30:13.275 | 99.99th=[42730] 00:30:13.275 bw ( KiB/s): min=16344, max=16408, per=24.90%, avg=16376.00, stdev=45.25, samples=2 
00:30:13.275 iops : min= 4086, max= 4102, avg=4094.00, stdev=11.31, samples=2 00:30:13.275 lat (msec) : 2=0.01%, 10=5.71%, 20=73.78%, 50=20.50% 00:30:13.275 cpu : usr=5.64%, sys=9.50%, ctx=378, majf=0, minf=1 00:30:13.275 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:30:13.275 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:13.275 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:13.275 issued rwts: total=3710,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:13.275 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:13.275 job3: (groupid=0, jobs=1): err= 0: pid=4131612: Tue Nov 26 21:11:03 2024 00:30:13.275 read: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec) 00:30:13.275 slat (usec): min=3, max=3337, avg=98.57, stdev=418.79 00:30:13.275 clat (usec): min=9780, max=17052, avg=13098.18, stdev=1233.37 00:30:13.275 lat (usec): min=10109, max=17060, avg=13196.75, stdev=1219.78 00:30:13.275 clat percentiles (usec): 00:30:13.275 | 1.00th=[10552], 5.00th=[11207], 10.00th=[11600], 20.00th=[11994], 00:30:13.275 | 30.00th=[12387], 40.00th=[12780], 50.00th=[13042], 60.00th=[13304], 00:30:13.275 | 70.00th=[13566], 80.00th=[14091], 90.00th=[14746], 95.00th=[15401], 00:30:13.275 | 99.00th=[16712], 99.50th=[16909], 99.90th=[17171], 99.95th=[17171], 00:30:13.275 | 99.99th=[17171] 00:30:13.275 write: IOPS=5062, BW=19.8MiB/s (20.7MB/s)(19.8MiB/1002msec); 0 zone resets 00:30:13.275 slat (usec): min=3, max=19230, avg=95.02, stdev=478.77 00:30:13.275 clat (usec): min=1354, max=40505, avg=13039.59, stdev=3873.76 00:30:13.275 lat (usec): min=1365, max=40522, avg=13134.61, stdev=3891.17 00:30:13.275 clat percentiles (usec): 00:30:13.275 | 1.00th=[ 5014], 5.00th=[ 9765], 10.00th=[10814], 20.00th=[11469], 00:30:13.275 | 30.00th=[12125], 40.00th=[12387], 50.00th=[12518], 60.00th=[12780], 00:30:13.275 | 70.00th=[13173], 80.00th=[13960], 90.00th=[14746], 95.00th=[16450], 00:30:13.275 | 
99.00th=[34866], 99.50th=[34866], 99.90th=[38536], 99.95th=[38536], 00:30:13.276 | 99.99th=[40633] 00:30:13.276 bw ( KiB/s): min=19712, max=19856, per=30.08%, avg=19784.00, stdev=101.82, samples=2 00:30:13.276 iops : min= 4928, max= 4964, avg=4946.00, stdev=25.46, samples=2 00:30:13.276 lat (msec) : 2=0.14%, 10=2.57%, 20=95.32%, 50=1.96% 00:30:13.276 cpu : usr=8.49%, sys=12.69%, ctx=666, majf=0, minf=1 00:30:13.276 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:30:13.276 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:13.276 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:13.276 issued rwts: total=4608,5073,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:13.276 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:13.276 00:30:13.276 Run status group 0 (all jobs): 00:30:13.276 READ: bw=58.5MiB/s (61.4MB/s), 6855KiB/s-19.9MiB/s (7019kB/s-20.9MB/s), io=59.3MiB (62.2MB), run=1002-1013msec 00:30:13.276 WRITE: bw=64.2MiB/s (67.3MB/s), 8087KiB/s-21.1MiB/s (8281kB/s-22.2MB/s), io=65.1MiB (68.2MB), run=1002-1013msec 00:30:13.276 00:30:13.276 Disk stats (read/write): 00:30:13.276 nvme0n1: ios=1589/1551, merge=0/0, ticks=22208/7736, in_queue=29944, util=91.78% 00:30:13.276 nvme0n2: ios=4396/4608, merge=0/0, ticks=28693/28149, in_queue=56842, util=91.17% 00:30:13.276 nvme0n3: ios=3193/3584, merge=0/0, ticks=39513/43488, in_queue=83001, util=95.42% 00:30:13.276 nvme0n4: ios=4094/4096, merge=0/0, ticks=13884/14373, in_queue=28257, util=98.53% 00:30:13.276 21:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:30:13.276 [global] 00:30:13.276 thread=1 00:30:13.276 invalidate=1 00:30:13.276 rw=randwrite 00:30:13.276 time_based=1 00:30:13.276 runtime=1 00:30:13.276 ioengine=libaio 00:30:13.276 direct=1 00:30:13.276 bs=4096 00:30:13.276 
iodepth=128 00:30:13.276 norandommap=0 00:30:13.276 numjobs=1 00:30:13.276 00:30:13.276 verify_dump=1 00:30:13.276 verify_backlog=512 00:30:13.276 verify_state_save=0 00:30:13.276 do_verify=1 00:30:13.276 verify=crc32c-intel 00:30:13.276 [job0] 00:30:13.276 filename=/dev/nvme0n1 00:30:13.276 [job1] 00:30:13.276 filename=/dev/nvme0n2 00:30:13.276 [job2] 00:30:13.276 filename=/dev/nvme0n3 00:30:13.276 [job3] 00:30:13.276 filename=/dev/nvme0n4 00:30:13.276 Could not set queue depth (nvme0n1) 00:30:13.276 Could not set queue depth (nvme0n2) 00:30:13.276 Could not set queue depth (nvme0n3) 00:30:13.276 Could not set queue depth (nvme0n4) 00:30:13.276 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:13.276 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:13.276 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:13.276 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:13.276 fio-3.35 00:30:13.276 Starting 4 threads 00:30:14.646 00:30:14.646 job0: (groupid=0, jobs=1): err= 0: pid=4131835: Tue Nov 26 21:11:05 2024 00:30:14.646 read: IOPS=2015, BW=8063KiB/s (8257kB/s)(8192KiB/1016msec) 00:30:14.646 slat (usec): min=3, max=18673, avg=177.64, stdev=1286.97 00:30:14.646 clat (usec): min=7990, max=78654, avg=22021.55, stdev=10470.70 00:30:14.646 lat (usec): min=7994, max=78660, avg=22199.20, stdev=10572.18 00:30:14.646 clat percentiles (usec): 00:30:14.646 | 1.00th=[ 8029], 5.00th=[11469], 10.00th=[12256], 20.00th=[15008], 00:30:14.646 | 30.00th=[17957], 40.00th=[19268], 50.00th=[20317], 60.00th=[21365], 00:30:14.646 | 70.00th=[23462], 80.00th=[25822], 90.00th=[33162], 95.00th=[36439], 00:30:14.646 | 99.00th=[72877], 99.50th=[73925], 99.90th=[78119], 99.95th=[78119], 00:30:14.646 | 99.99th=[79168] 00:30:14.646 write: 
IOPS=2202, BW=8811KiB/s (9022kB/s)(8952KiB/1016msec); 0 zone resets 00:30:14.646 slat (usec): min=4, max=26901, avg=278.43, stdev=1669.05 00:30:14.646 clat (msec): min=4, max=102, avg=37.39, stdev=20.20 00:30:14.646 lat (msec): min=4, max=102, avg=37.66, stdev=20.32 00:30:14.646 clat percentiles (msec): 00:30:14.647 | 1.00th=[ 9], 5.00th=[ 13], 10.00th=[ 18], 20.00th=[ 21], 00:30:14.647 | 30.00th=[ 23], 40.00th=[ 27], 50.00th=[ 30], 60.00th=[ 35], 00:30:14.647 | 70.00th=[ 51], 80.00th=[ 57], 90.00th=[ 64], 95.00th=[ 78], 00:30:14.647 | 99.00th=[ 99], 99.50th=[ 101], 99.90th=[ 103], 99.95th=[ 103], 00:30:14.647 | 99.99th=[ 103] 00:30:14.647 bw ( KiB/s): min= 6336, max=10544, per=14.91%, avg=8440.00, stdev=2975.51, samples=2 00:30:14.647 iops : min= 1584, max= 2636, avg=2110.00, stdev=743.88, samples=2 00:30:14.647 lat (msec) : 10=2.40%, 20=29.70%, 50=50.70%, 100=16.85%, 250=0.35% 00:30:14.647 cpu : usr=1.67%, sys=4.53%, ctx=199, majf=0, minf=1 00:30:14.647 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:30:14.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:14.647 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:14.647 issued rwts: total=2048,2238,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:14.647 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:14.647 job1: (groupid=0, jobs=1): err= 0: pid=4131836: Tue Nov 26 21:11:05 2024 00:30:14.647 read: IOPS=4345, BW=17.0MiB/s (17.8MB/s)(17.0MiB/1002msec) 00:30:14.647 slat (usec): min=2, max=21941, avg=117.74, stdev=1007.03 00:30:14.647 clat (usec): min=688, max=61839, avg=15074.08, stdev=9893.39 00:30:14.647 lat (usec): min=3056, max=61854, avg=15191.83, stdev=9979.48 00:30:14.647 clat percentiles (usec): 00:30:14.647 | 1.00th=[ 5735], 5.00th=[ 7635], 10.00th=[ 8455], 20.00th=[ 9110], 00:30:14.647 | 30.00th=[ 9634], 40.00th=[10290], 50.00th=[10945], 60.00th=[11731], 00:30:14.647 | 70.00th=[12780], 80.00th=[20317], 
90.00th=[31851], 95.00th=[37487], 00:30:14.647 | 99.00th=[44303], 99.50th=[54789], 99.90th=[54789], 99.95th=[54789], 00:30:14.647 | 99.99th=[61604] 00:30:14.647 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:30:14.647 slat (usec): min=3, max=38106, avg=85.38, stdev=828.64 00:30:14.647 clat (usec): min=465, max=49550, avg=13345.24, stdev=8797.31 00:30:14.647 lat (usec): min=498, max=65833, avg=13430.62, stdev=8844.76 00:30:14.647 clat percentiles (usec): 00:30:14.647 | 1.00th=[ 1975], 5.00th=[ 7111], 10.00th=[ 8094], 20.00th=[ 9241], 00:30:14.647 | 30.00th=[ 9634], 40.00th=[10159], 50.00th=[10290], 60.00th=[10814], 00:30:14.647 | 70.00th=[11207], 80.00th=[13829], 90.00th=[26870], 95.00th=[31589], 00:30:14.647 | 99.00th=[48497], 99.50th=[48497], 99.90th=[48497], 99.95th=[48497], 00:30:14.647 | 99.99th=[49546] 00:30:14.647 bw ( KiB/s): min=17024, max=19840, per=32.56%, avg=18432.00, stdev=1991.21, samples=2 00:30:14.647 iops : min= 4256, max= 4960, avg=4608.00, stdev=497.80, samples=2 00:30:14.647 lat (usec) : 500=0.03%, 750=0.04%, 1000=0.02% 00:30:14.647 lat (msec) : 2=0.55%, 4=1.75%, 10=34.33%, 20=45.12%, 50=17.78% 00:30:14.647 lat (msec) : 100=0.37% 00:30:14.647 cpu : usr=3.20%, sys=6.19%, ctx=332, majf=0, minf=1 00:30:14.647 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:30:14.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:14.647 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:14.647 issued rwts: total=4354,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:14.647 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:14.647 job2: (groupid=0, jobs=1): err= 0: pid=4131837: Tue Nov 26 21:11:05 2024 00:30:14.647 read: IOPS=3044, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1009msec) 00:30:14.647 slat (usec): min=2, max=27470, avg=127.61, stdev=1198.04 00:30:14.647 clat (usec): min=2989, max=74807, avg=20433.45, stdev=11731.82 00:30:14.647 lat (usec): 
min=2993, max=74816, avg=20561.06, stdev=11790.40 00:30:14.647 clat percentiles (usec): 00:30:14.647 | 1.00th=[ 5080], 5.00th=[ 7898], 10.00th=[ 9896], 20.00th=[11600], 00:30:14.647 | 30.00th=[14615], 40.00th=[16057], 50.00th=[18220], 60.00th=[19268], 00:30:14.647 | 70.00th=[21103], 80.00th=[25297], 90.00th=[37487], 95.00th=[43254], 00:30:14.647 | 99.00th=[59507], 99.50th=[61080], 99.90th=[74974], 99.95th=[74974], 00:30:14.647 | 99.99th=[74974] 00:30:14.647 write: IOPS=3406, BW=13.3MiB/s (14.0MB/s)(13.4MiB/1009msec); 0 zone resets 00:30:14.647 slat (usec): min=3, max=27094, avg=124.61, stdev=1229.99 00:30:14.647 clat (usec): min=1037, max=53933, avg=19009.76, stdev=10331.33 00:30:14.647 lat (usec): min=1055, max=53977, avg=19134.37, stdev=10409.80 00:30:14.647 clat percentiles (usec): 00:30:14.647 | 1.00th=[ 2008], 5.00th=[ 5342], 10.00th=[ 6915], 20.00th=[11600], 00:30:14.647 | 30.00th=[12125], 40.00th=[13698], 50.00th=[16188], 60.00th=[18744], 00:30:14.647 | 70.00th=[23462], 80.00th=[27657], 90.00th=[33424], 95.00th=[43779], 00:30:14.647 | 99.00th=[45351], 99.50th=[45351], 99.90th=[49546], 99.95th=[50594], 00:30:14.647 | 99.99th=[53740] 00:30:14.647 bw ( KiB/s): min= 9448, max=17024, per=23.38%, avg=13236.00, stdev=5357.04, samples=2 00:30:14.647 iops : min= 2362, max= 4256, avg=3309.00, stdev=1339.26, samples=2 00:30:14.647 lat (msec) : 2=0.41%, 4=0.34%, 10=10.42%, 20=52.33%, 50=34.74% 00:30:14.647 lat (msec) : 100=1.77% 00:30:14.647 cpu : usr=1.88%, sys=3.37%, ctx=177, majf=0, minf=1 00:30:14.647 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:30:14.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:14.647 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:14.647 issued rwts: total=3072,3437,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:14.647 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:14.647 job3: (groupid=0, jobs=1): err= 0: pid=4131838: Tue Nov 26 
21:11:05 2024 00:30:14.647 read: IOPS=3890, BW=15.2MiB/s (15.9MB/s)(15.4MiB/1011msec) 00:30:14.647 slat (usec): min=2, max=18003, avg=108.91, stdev=912.64 00:30:14.647 clat (usec): min=2468, max=56563, avg=14512.13, stdev=7597.91 00:30:14.647 lat (usec): min=5322, max=66080, avg=14621.03, stdev=7658.41 00:30:14.647 clat percentiles (usec): 00:30:14.647 | 1.00th=[ 5866], 5.00th=[ 7832], 10.00th=[ 8455], 20.00th=[ 9372], 00:30:14.647 | 30.00th=[10552], 40.00th=[11338], 50.00th=[12387], 60.00th=[14222], 00:30:14.647 | 70.00th=[15664], 80.00th=[17695], 90.00th=[20841], 95.00th=[31065], 00:30:14.647 | 99.00th=[43779], 99.50th=[56361], 99.90th=[56361], 99.95th=[56361], 00:30:14.647 | 99.99th=[56361] 00:30:14.647 write: IOPS=4051, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1011msec); 0 zone resets 00:30:14.647 slat (usec): min=3, max=36080, avg=118.32, stdev=1201.36 00:30:14.647 clat (usec): min=966, max=77466, avg=17375.89, stdev=12468.53 00:30:14.647 lat (usec): min=995, max=77511, avg=17494.21, stdev=12539.84 00:30:14.647 clat percentiles (usec): 00:30:14.647 | 1.00th=[ 3228], 5.00th=[ 5997], 10.00th=[ 7242], 20.00th=[ 8979], 00:30:14.647 | 30.00th=[11076], 40.00th=[11731], 50.00th=[12780], 60.00th=[15270], 00:30:14.647 | 70.00th=[17171], 80.00th=[22676], 90.00th=[35390], 95.00th=[42206], 00:30:14.647 | 99.00th=[71828], 99.50th=[71828], 99.90th=[71828], 99.95th=[71828], 00:30:14.647 | 99.99th=[77071] 00:30:14.647 bw ( KiB/s): min=16384, max=16384, per=28.94%, avg=16384.00, stdev= 0.00, samples=2 00:30:14.647 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:30:14.647 lat (usec) : 1000=0.01% 00:30:14.647 lat (msec) : 2=0.22%, 4=0.46%, 10=25.08%, 20=55.56%, 50=16.53% 00:30:14.647 lat (msec) : 100=2.13% 00:30:14.647 cpu : usr=2.48%, sys=4.26%, ctx=231, majf=0, minf=1 00:30:14.647 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:30:14.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:14.647 complete : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:14.647 issued rwts: total=3933,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:14.647 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:14.647 00:30:14.647 Run status group 0 (all jobs): 00:30:14.647 READ: bw=51.5MiB/s (54.1MB/s), 8063KiB/s-17.0MiB/s (8257kB/s-17.8MB/s), io=52.4MiB (54.9MB), run=1002-1016msec 00:30:14.647 WRITE: bw=55.3MiB/s (58.0MB/s), 8811KiB/s-18.0MiB/s (9022kB/s-18.8MB/s), io=56.2MiB (58.9MB), run=1002-1016msec 00:30:14.647 00:30:14.647 Disk stats (read/write): 00:30:14.647 nvme0n1: ios=1579/1878, merge=0/0, ticks=35722/67991, in_queue=103713, util=97.29% 00:30:14.647 nvme0n2: ios=3634/4009, merge=0/0, ticks=38414/34238, in_queue=72652, util=95.33% 00:30:14.647 nvme0n3: ios=2606/2855, merge=0/0, ticks=42299/44418, in_queue=86717, util=96.36% 00:30:14.647 nvme0n4: ios=3211/3584, merge=0/0, ticks=35351/39195, in_queue=74546, util=98.32% 00:30:14.647 21:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:30:14.647 21:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=4131973 00:30:14.647 21:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:30:14.647 21:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:30:14.647 [global] 00:30:14.647 thread=1 00:30:14.647 invalidate=1 00:30:14.647 rw=read 00:30:14.647 time_based=1 00:30:14.647 runtime=10 00:30:14.647 ioengine=libaio 00:30:14.647 direct=1 00:30:14.647 bs=4096 00:30:14.647 iodepth=1 00:30:14.647 norandommap=1 00:30:14.647 numjobs=1 00:30:14.647 00:30:14.647 [job0] 00:30:14.647 filename=/dev/nvme0n1 00:30:14.647 [job1] 00:30:14.647 filename=/dev/nvme0n2 00:30:14.647 [job2] 00:30:14.647 filename=/dev/nvme0n3 00:30:14.647 [job3] 00:30:14.647 
filename=/dev/nvme0n4 00:30:14.647 Could not set queue depth (nvme0n1) 00:30:14.647 Could not set queue depth (nvme0n2) 00:30:14.647 Could not set queue depth (nvme0n3) 00:30:14.647 Could not set queue depth (nvme0n4) 00:30:14.647 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:14.647 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:14.647 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:14.647 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:14.647 fio-3.35 00:30:14.647 Starting 4 threads 00:30:17.922 21:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:30:17.922 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=3362816, buflen=4096 00:30:17.922 fio: pid=4132070, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:30:17.922 21:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:30:18.179 21:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:18.179 21:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:30:18.179 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=36696064, buflen=4096 00:30:18.179 fio: pid=4132069, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:30:18.482 21:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target 
-- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:18.482 21:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:30:18.482 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=20361216, buflen=4096 00:30:18.482 fio: pid=4132067, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:30:18.756 21:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:18.756 21:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:30:18.756 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=16519168, buflen=4096 00:30:18.756 fio: pid=4132068, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:30:18.756 00:30:18.756 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=4132067: Tue Nov 26 21:11:09 2024 00:30:18.756 read: IOPS=1395, BW=5581KiB/s (5715kB/s)(19.4MiB/3563msec) 00:30:18.756 slat (usec): min=5, max=29148, avg=25.05, stdev=548.42 00:30:18.756 clat (usec): min=233, max=42109, avg=683.98, stdev=3927.08 00:30:18.756 lat (usec): min=239, max=42117, avg=709.02, stdev=3964.23 00:30:18.756 clat percentiles (usec): 00:30:18.756 | 1.00th=[ 243], 5.00th=[ 269], 10.00th=[ 273], 20.00th=[ 281], 00:30:18.756 | 30.00th=[ 289], 40.00th=[ 297], 50.00th=[ 306], 60.00th=[ 310], 00:30:18.757 | 70.00th=[ 318], 80.00th=[ 326], 90.00th=[ 334], 95.00th=[ 351], 00:30:18.757 | 99.00th=[ 578], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:30:18.757 | 99.99th=[42206] 00:30:18.757 bw ( KiB/s): min= 1192, max=11288, per=30.91%, 
avg=6021.33, stdev=3683.63, samples=6 00:30:18.757 iops : min= 298, max= 2822, avg=1505.33, stdev=920.91, samples=6 00:30:18.757 lat (usec) : 250=1.91%, 500=96.86%, 750=0.28% 00:30:18.757 lat (msec) : 50=0.93% 00:30:18.757 cpu : usr=0.95%, sys=2.30%, ctx=4980, majf=0, minf=1 00:30:18.757 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:18.757 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:18.757 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:18.757 issued rwts: total=4972,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:18.757 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:18.757 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=4132068: Tue Nov 26 21:11:09 2024 00:30:18.757 read: IOPS=1045, BW=4183KiB/s (4283kB/s)(15.8MiB/3857msec) 00:30:18.757 slat (usec): min=4, max=17245, avg=28.13, stdev=472.78 00:30:18.757 clat (usec): min=245, max=42125, avg=919.02, stdev=4696.65 00:30:18.757 lat (usec): min=250, max=58360, avg=947.16, stdev=4756.04 00:30:18.757 clat percentiles (usec): 00:30:18.757 | 1.00th=[ 251], 5.00th=[ 258], 10.00th=[ 262], 20.00th=[ 269], 00:30:18.757 | 30.00th=[ 277], 40.00th=[ 285], 50.00th=[ 310], 60.00th=[ 383], 00:30:18.757 | 70.00th=[ 465], 80.00th=[ 506], 90.00th=[ 586], 95.00th=[ 627], 00:30:18.757 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:30:18.757 | 99.99th=[42206] 00:30:18.757 bw ( KiB/s): min= 88, max= 7306, per=22.55%, avg=4393.43, stdev=2381.85, samples=7 00:30:18.757 iops : min= 22, max= 1826, avg=1098.29, stdev=595.36, samples=7 00:30:18.757 lat (usec) : 250=0.52%, 500=78.31%, 750=19.71%, 1000=0.10% 00:30:18.757 lat (msec) : 10=0.02%, 50=1.31% 00:30:18.757 cpu : usr=0.65%, sys=1.56%, ctx=4039, majf=0, minf=1 00:30:18.757 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:18.757 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:18.757 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:18.757 issued rwts: total=4034,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:18.757 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:18.757 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=4132069: Tue Nov 26 21:11:09 2024 00:30:18.757 read: IOPS=2762, BW=10.8MiB/s (11.3MB/s)(35.0MiB/3244msec) 00:30:18.757 slat (usec): min=4, max=892, avg=11.34, stdev=10.97 00:30:18.757 clat (usec): min=252, max=42354, avg=344.58, stdev=745.47 00:30:18.757 lat (usec): min=259, max=42372, avg=355.92, stdev=750.87 00:30:18.757 clat percentiles (usec): 00:30:18.757 | 1.00th=[ 265], 5.00th=[ 273], 10.00th=[ 277], 20.00th=[ 285], 00:30:18.757 | 30.00th=[ 293], 40.00th=[ 302], 50.00th=[ 306], 60.00th=[ 314], 00:30:18.757 | 70.00th=[ 326], 80.00th=[ 351], 90.00th=[ 465], 95.00th=[ 502], 00:30:18.757 | 99.00th=[ 553], 99.50th=[ 578], 99.90th=[ 652], 99.95th=[ 2147], 00:30:18.757 | 99.99th=[42206] 00:30:18.757 bw ( KiB/s): min=10320, max=13464, per=58.64%, avg=11424.00, stdev=1170.49, samples=6 00:30:18.757 iops : min= 2580, max= 3366, avg=2856.00, stdev=292.62, samples=6 00:30:18.757 lat (usec) : 500=95.00%, 750=4.92% 00:30:18.757 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, 50=0.03% 00:30:18.757 cpu : usr=2.04%, sys=4.84%, ctx=8962, majf=0, minf=2 00:30:18.757 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:18.757 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:18.757 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:18.757 issued rwts: total=8960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:18.757 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:18.757 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=4132070: Tue Nov 26 21:11:09 2024 
00:30:18.757 read: IOPS=280, BW=1119KiB/s (1146kB/s)(3284KiB/2934msec) 00:30:18.757 slat (nsec): min=5075, max=44488, avg=13051.18, stdev=6756.83 00:30:18.757 clat (usec): min=304, max=41614, avg=3545.06, stdev=10831.61 00:30:18.757 lat (usec): min=310, max=41639, avg=3558.11, stdev=10834.70 00:30:18.757 clat percentiles (usec): 00:30:18.757 | 1.00th=[ 310], 5.00th=[ 318], 10.00th=[ 330], 20.00th=[ 347], 00:30:18.757 | 30.00th=[ 367], 40.00th=[ 388], 50.00th=[ 396], 60.00th=[ 408], 00:30:18.757 | 70.00th=[ 429], 80.00th=[ 482], 90.00th=[ 611], 95.00th=[41157], 00:30:18.757 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:30:18.757 | 99.99th=[41681] 00:30:18.757 bw ( KiB/s): min= 96, max= 624, per=1.06%, avg=206.40, stdev=233.47, samples=5 00:30:18.757 iops : min= 24, max= 156, avg=51.60, stdev=58.37, samples=5 00:30:18.757 lat (usec) : 500=82.12%, 750=9.98% 00:30:18.757 lat (msec) : 50=7.79% 00:30:18.757 cpu : usr=0.14%, sys=0.41%, ctx=824, majf=0, minf=2 00:30:18.757 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:18.757 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:18.757 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:18.757 issued rwts: total=822,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:18.757 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:18.757 00:30:18.757 Run status group 0 (all jobs): 00:30:18.757 READ: bw=19.0MiB/s (19.9MB/s), 1119KiB/s-10.8MiB/s (1146kB/s-11.3MB/s), io=73.4MiB (76.9MB), run=2934-3857msec 00:30:18.757 00:30:18.757 Disk stats (read/write): 00:30:18.757 nvme0n1: ios=5010/0, merge=0/0, ticks=3303/0, in_queue=3303, util=97.57% 00:30:18.757 nvme0n2: ios=4034/0, merge=0/0, ticks=3657/0, in_queue=3657, util=95.10% 00:30:18.757 nvme0n3: ios=8635/0, merge=0/0, ticks=2852/0, in_queue=2852, util=96.82% 00:30:18.757 nvme0n4: ios=703/0, merge=0/0, ticks=4160/0, in_queue=4160, util=99.05% 00:30:19.015 21:11:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:19.015 21:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:30:19.273 21:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:19.273 21:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:30:19.532 21:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:19.532 21:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:30:19.789 21:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:19.789 21:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:30:20.047 21:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:30:20.047 21:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 4131973 00:30:20.047 21:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:30:20.047 21:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:30:20.305 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:30:20.305 21:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:30:20.305 21:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:30:20.305 21:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:30:20.306 21:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:20.306 21:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:30:20.306 21:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:20.306 21:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:30:20.306 21:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:30:20.306 21:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:30:20.306 nvmf hotplug test: fio failed as expected 00:30:20.306 21:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:20.563 21:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:30:20.563 21:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:30:20.563 21:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f 
./local-job2-2-verify.state 00:30:20.563 21:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:30:20.563 21:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:30:20.563 21:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:20.563 21:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:30:20.563 21:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:20.563 21:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:30:20.563 21:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:20.563 21:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:20.563 rmmod nvme_tcp 00:30:20.563 rmmod nvme_fabrics 00:30:20.563 rmmod nvme_keyring 00:30:20.563 21:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:20.563 21:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:30:20.563 21:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:30:20.563 21:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 4130074 ']' 00:30:20.564 21:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 4130074 00:30:20.564 21:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 4130074 ']' 00:30:20.564 21:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 4130074 00:30:20.564 21:11:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:30:20.564 21:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:20.564 21:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4130074 00:30:20.822 21:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:20.822 21:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:20.822 21:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4130074' 00:30:20.822 killing process with pid 4130074 00:30:20.822 21:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 4130074 00:30:20.822 21:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 4130074 00:30:20.822 21:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:20.822 21:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:20.822 21:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:20.822 21:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:30:20.822 21:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:30:20.822 21:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:20.822 21:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:30:20.822 
21:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:20.822 21:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:20.822 21:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:20.822 21:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:20.822 21:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:23.360 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:23.360 00:30:23.360 real 0m23.949s 00:30:23.360 user 1m6.378s 00:30:23.360 sys 0m10.933s 00:30:23.360 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:23.360 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:23.360 ************************************ 00:30:23.360 END TEST nvmf_fio_target 00:30:23.360 ************************************ 00:30:23.360 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:30:23.360 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:23.361 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:23.361 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:23.361 ************************************ 00:30:23.361 START TEST nvmf_bdevio 00:30:23.361 
************************************ 00:30:23.361 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:30:23.361 * Looking for test storage... 00:30:23.361 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:23.361 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:23.361 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:30:23.361 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:23.361 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:23.361 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:23.361 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:23.361 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:23.361 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:30:23.361 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:30:23.361 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:30:23.361 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:30:23.361 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:30:23.361 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 
00:30:23.361 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:30:23.361 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:23.361 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:30:23.361 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:30:23.361 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:23.361 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:23.361 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:30:23.361 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:30:23.361 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:23.361 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:30:23.361 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:30:23.361 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:30:23.361 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:30:23.361 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:23.361 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:30:23.361 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:30:23.361 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:23.361 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:23.361 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:30:23.361 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:23.361 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:23.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:23.361 --rc genhtml_branch_coverage=1 00:30:23.361 --rc genhtml_function_coverage=1 00:30:23.361 --rc genhtml_legend=1 00:30:23.361 --rc geninfo_all_blocks=1 00:30:23.361 --rc geninfo_unexecuted_blocks=1 00:30:23.361 00:30:23.361 ' 00:30:23.361 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:23.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:23.361 --rc genhtml_branch_coverage=1 00:30:23.361 --rc genhtml_function_coverage=1 00:30:23.361 --rc genhtml_legend=1 00:30:23.361 --rc geninfo_all_blocks=1 00:30:23.361 --rc geninfo_unexecuted_blocks=1 00:30:23.361 00:30:23.361 ' 00:30:23.361 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:23.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:23.361 --rc genhtml_branch_coverage=1 00:30:23.361 --rc genhtml_function_coverage=1 00:30:23.361 --rc genhtml_legend=1 00:30:23.361 --rc geninfo_all_blocks=1 00:30:23.361 --rc geninfo_unexecuted_blocks=1 00:30:23.361 00:30:23.361 ' 00:30:23.361 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:23.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:30:23.361 --rc genhtml_branch_coverage=1 00:30:23.361 --rc genhtml_function_coverage=1 00:30:23.361 --rc genhtml_legend=1 00:30:23.361 --rc geninfo_all_blocks=1 00:30:23.361 --rc geninfo_unexecuted_blocks=1 00:30:23.361 00:30:23.361 ' 00:30:23.361 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:23.361 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:30:23.361 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:23.361 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:23.361 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:23.361 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:23.361 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:23.361 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:23.361 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:23.361 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:23.361 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:23.361 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:23.361 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:23.361 21:11:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:23.361 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:23.361 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:23.361 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:23.361 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:23.361 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:23.361 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:30:23.361 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:23.361 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:23.361 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:23.361 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.361 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.361 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.361 21:11:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:30:23.361 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.361 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:30:23.362 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:23.362 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:23.362 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:23.362 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:23.362 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:23.362 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:23.362 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:23.362 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:23.362 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 
-- # '[' 0 -eq 1 ']' 00:30:23.362 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:23.362 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:23.362 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:23.362 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:30:23.362 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:23.362 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:23.362 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:23.362 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:23.362 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:23.362 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:23.362 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:23.362 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:23.362 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:23.362 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:23.362 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:30:23.362 21:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@10 -- # set +x 00:30:25.266 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:25.266 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:30:25.266 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:25.266 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:25.266 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:25.266 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:25.266 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:25.266 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:30:25.266 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:25.266 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:30:25.266 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:30:25.266 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:30:25.266 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:30:25.266 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:30:25.266 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:30:25.266 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:25.266 21:11:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:25.266 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:25.266 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:25.266 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:25.266 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:25.266 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:25.266 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:25.266 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:25.266 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:25.266 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:25.266 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:25.266 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:25.266 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:25.266 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:25.266 21:11:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:25.266 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:25.266 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:25.266 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:25.266 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:25.266 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:25.266 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:25.266 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:25.266 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:25.266 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:25.266 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:25.266 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:25.266 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:25.266 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:25.266 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:25.266 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:25.266 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:30:25.266 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:25.266 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:25.266 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:25.266 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:25.266 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:25.266 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:25.266 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:25.266 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:25.266 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:25.266 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:25.266 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:25.266 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:25.266 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:25.266 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:25.266 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:25.266 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:30:25.266 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:25.266 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:25.266 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:25.266 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:25.266 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:25.266 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:25.266 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:25.266 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:25.266 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:25.267 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:25.267 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:30:25.267 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:25.267 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:25.267 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:25.267 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:25.267 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:25.267 
21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:25.267 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:25.267 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:25.267 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:25.267 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:25.267 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:25.267 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:25.267 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:25.267 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:25.267 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:25.267 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:25.267 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:25.267 21:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:25.267 21:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:25.267 21:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:30:25.267 21:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:25.267 21:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:25.267 21:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:25.267 21:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:25.267 21:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:25.267 21:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:25.267 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:25.267 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:30:25.267 00:30:25.267 --- 10.0.0.2 ping statistics --- 00:30:25.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:25.267 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:30:25.267 21:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:25.267 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:25.267 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:30:25.267 00:30:25.267 --- 10.0.0.1 ping statistics --- 00:30:25.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:25.267 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:30:25.267 21:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:25.267 21:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:30:25.267 21:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:25.267 21:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:25.267 21:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:25.267 21:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:25.267 21:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:25.267 21:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:25.267 21:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:25.267 21:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:30:25.267 21:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:25.267 21:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:25.267 21:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:25.267 21:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@509 -- # nvmfpid=4134811 00:30:25.267 21:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:30:25.267 21:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 4134811 00:30:25.267 21:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 4134811 ']' 00:30:25.267 21:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:25.267 21:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:25.267 21:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:25.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:25.267 21:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:25.267 21:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:25.267 [2024-11-26 21:11:16.167657] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:25.267 [2024-11-26 21:11:16.168709] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:30:25.267 [2024-11-26 21:11:16.168790] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:25.526 [2024-11-26 21:11:16.246938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:25.526 [2024-11-26 21:11:16.310364] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:25.526 [2024-11-26 21:11:16.310432] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:25.526 [2024-11-26 21:11:16.310459] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:25.526 [2024-11-26 21:11:16.310473] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:25.526 [2024-11-26 21:11:16.310484] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:25.526 [2024-11-26 21:11:16.312180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:25.526 [2024-11-26 21:11:16.312237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:30:25.526 [2024-11-26 21:11:16.312268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:30:25.526 [2024-11-26 21:11:16.312271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:25.526 [2024-11-26 21:11:16.408723] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:25.526 [2024-11-26 21:11:16.408912] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:25.526 [2024-11-26 21:11:16.409207] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:30:25.526 [2024-11-26 21:11:16.409900] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:25.526 [2024-11-26 21:11:16.410174] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:30:25.526 21:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:25.526 21:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:30:25.526 21:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:25.526 21:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:25.526 21:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:25.526 21:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:25.526 21:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:25.526 21:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:25.526 21:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:25.785 [2024-11-26 21:11:16.465030] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:25.785 21:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:25.785 21:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:25.785 21:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:30:25.785 21:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:25.785 Malloc0 00:30:25.785 21:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:25.785 21:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:25.785 21:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:25.785 21:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:25.785 21:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:25.785 21:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:25.785 21:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:25.785 21:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:25.785 21:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:25.786 21:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:25.786 21:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:25.786 21:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:25.786 [2024-11-26 21:11:16.545298] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:30:25.786 21:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:25.786 21:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:30:25.786 21:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:30:25.786 21:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:30:25.786 21:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:30:25.786 21:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:25.786 21:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:25.786 { 00:30:25.786 "params": { 00:30:25.786 "name": "Nvme$subsystem", 00:30:25.786 "trtype": "$TEST_TRANSPORT", 00:30:25.786 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:25.786 "adrfam": "ipv4", 00:30:25.786 "trsvcid": "$NVMF_PORT", 00:30:25.786 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:25.786 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:25.786 "hdgst": ${hdgst:-false}, 00:30:25.786 "ddgst": ${ddgst:-false} 00:30:25.786 }, 00:30:25.786 "method": "bdev_nvme_attach_controller" 00:30:25.786 } 00:30:25.786 EOF 00:30:25.786 )") 00:30:25.786 21:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:30:25.786 21:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:30:25.786 21:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:30:25.786 21:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:25.786 "params": { 00:30:25.786 "name": "Nvme1", 00:30:25.786 "trtype": "tcp", 00:30:25.786 "traddr": "10.0.0.2", 00:30:25.786 "adrfam": "ipv4", 00:30:25.786 "trsvcid": "4420", 00:30:25.786 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:25.786 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:25.786 "hdgst": false, 00:30:25.786 "ddgst": false 00:30:25.786 }, 00:30:25.786 "method": "bdev_nvme_attach_controller" 00:30:25.786 }' 00:30:25.786 [2024-11-26 21:11:16.595264] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:30:25.786 [2024-11-26 21:11:16.595352] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4134840 ] 00:30:25.786 [2024-11-26 21:11:16.665757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:26.044 [2024-11-26 21:11:16.730705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:26.044 [2024-11-26 21:11:16.730733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:26.044 [2024-11-26 21:11:16.730737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:26.302 I/O targets: 00:30:26.302 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:30:26.302 00:30:26.302 00:30:26.302 CUnit - A unit testing framework for C - Version 2.1-3 00:30:26.302 http://cunit.sourceforge.net/ 00:30:26.302 00:30:26.302 00:30:26.302 Suite: bdevio tests on: Nvme1n1 00:30:26.302 Test: blockdev write read block ...passed 00:30:26.302 Test: blockdev write zeroes read block ...passed 00:30:26.302 Test: blockdev write zeroes read no split ...passed 00:30:26.302 Test: blockdev 
write zeroes read split ...passed 00:30:26.302 Test: blockdev write zeroes read split partial ...passed 00:30:26.302 Test: blockdev reset ...[2024-11-26 21:11:17.228562] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:30:26.302 [2024-11-26 21:11:17.228682] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xda4cb0 (9): Bad file descriptor 00:30:26.302 [2024-11-26 21:11:17.233312] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:30:26.302 passed 00:30:26.302 Test: blockdev write read 8 blocks ...passed 00:30:26.302 Test: blockdev write read size > 128k ...passed 00:30:26.302 Test: blockdev write read invalid size ...passed 00:30:26.560 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:30:26.560 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:30:26.560 Test: blockdev write read max offset ...passed 00:30:26.560 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:30:26.560 Test: blockdev writev readv 8 blocks ...passed 00:30:26.560 Test: blockdev writev readv 30 x 1block ...passed 00:30:26.560 Test: blockdev writev readv block ...passed 00:30:26.560 Test: blockdev writev readv size > 128k ...passed 00:30:26.560 Test: blockdev writev readv size > 128k in two iovs ...passed 00:30:26.560 Test: blockdev comparev and writev ...[2024-11-26 21:11:17.406511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:26.560 [2024-11-26 21:11:17.406547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:26.560 [2024-11-26 21:11:17.406579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:26.560 
[2024-11-26 21:11:17.406597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:26.560 [2024-11-26 21:11:17.407040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:26.560 [2024-11-26 21:11:17.407066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:26.560 [2024-11-26 21:11:17.407089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:26.560 [2024-11-26 21:11:17.407112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:26.560 [2024-11-26 21:11:17.407545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:26.560 [2024-11-26 21:11:17.407575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:26.560 [2024-11-26 21:11:17.407596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:26.560 [2024-11-26 21:11:17.407635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:26.560 [2024-11-26 21:11:17.408083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:26.560 [2024-11-26 21:11:17.408109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:26.560 [2024-11-26 21:11:17.408136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:26.560 [2024-11-26 21:11:17.408153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:30:26.560 passed 00:30:26.560 Test: blockdev nvme passthru rw ...passed 00:30:26.560 Test: blockdev nvme passthru vendor specific ...[2024-11-26 21:11:17.490018] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:26.560 [2024-11-26 21:11:17.490046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:26.560 [2024-11-26 21:11:17.490217] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:26.560 [2024-11-26 21:11:17.490241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:26.560 [2024-11-26 21:11:17.490414] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:26.560 [2024-11-26 21:11:17.490438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:30:26.560 [2024-11-26 21:11:17.490616] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:26.560 [2024-11-26 21:11:17.490639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:26.560 passed 00:30:26.819 Test: blockdev nvme admin passthru ...passed 00:30:26.819 Test: blockdev copy ...passed 00:30:26.819 00:30:26.819 Run Summary: Type Total Ran Passed Failed Inactive 00:30:26.819 suites 1 1 n/a 0 0 00:30:26.819 tests 23 23 23 0 0 00:30:26.819 asserts 152 152 152 0 n/a 00:30:26.819 00:30:26.819 Elapsed time = 1.026 
seconds 00:30:26.819 21:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:26.819 21:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.819 21:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:26.819 21:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.819 21:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:30:26.819 21:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:30:26.819 21:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:26.819 21:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:30:26.819 21:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:26.819 21:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:30:26.819 21:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:26.819 21:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:26.819 rmmod nvme_tcp 00:30:27.077 rmmod nvme_fabrics 00:30:27.077 rmmod nvme_keyring 00:30:27.077 21:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:27.077 21:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:30:27.077 21:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:30:27.077 21:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 4134811 ']' 00:30:27.077 21:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 4134811 00:30:27.077 21:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 4134811 ']' 00:30:27.077 21:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 4134811 00:30:27.077 21:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:30:27.077 21:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:27.077 21:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4134811 00:30:27.077 21:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:30:27.077 21:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:30:27.077 21:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4134811' 00:30:27.077 killing process with pid 4134811 00:30:27.077 21:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 4134811 00:30:27.077 21:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 4134811 00:30:27.336 21:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:27.336 21:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:27.336 21:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:27.337 21:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@297 -- # iptr 00:30:27.337 21:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:30:27.337 21:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:27.337 21:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:30:27.337 21:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:27.337 21:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:27.337 21:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:27.337 21:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:27.337 21:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:29.239 21:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:29.239 00:30:29.239 real 0m6.315s 00:30:29.239 user 0m8.461s 00:30:29.239 sys 0m2.464s 00:30:29.239 21:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:29.239 21:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:29.239 ************************************ 00:30:29.239 END TEST nvmf_bdevio 00:30:29.239 ************************************ 00:30:29.239 21:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:30:29.239 00:30:29.239 real 3m55.895s 00:30:29.239 user 8m54.216s 00:30:29.239 sys 1m24.760s 00:30:29.239 21:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 
-- # xtrace_disable 00:30:29.239 21:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:29.240 ************************************ 00:30:29.240 END TEST nvmf_target_core_interrupt_mode 00:30:29.240 ************************************ 00:30:29.498 21:11:20 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:30:29.498 21:11:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:29.498 21:11:20 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:29.498 21:11:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:29.498 ************************************ 00:30:29.498 START TEST nvmf_interrupt 00:30:29.498 ************************************ 00:30:29.498 21:11:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:30:29.498 * Looking for test storage... 
00:30:29.498 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:29.498 21:11:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:29.498 21:11:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:30:29.498 21:11:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:29.498 21:11:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:29.498 21:11:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:29.498 21:11:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:29.498 21:11:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:29.498 21:11:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:30:29.498 21:11:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:30:29.498 21:11:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:30:29.498 21:11:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:30:29.499 21:11:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:30:29.499 21:11:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:30:29.499 21:11:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:30:29.499 21:11:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:29.499 21:11:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:30:29.499 21:11:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:30:29.499 21:11:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:29.499 21:11:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:29.499 21:11:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:30:29.499 21:11:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:30:29.499 21:11:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:29.499 21:11:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:30:29.499 21:11:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:30:29.499 21:11:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:30:29.499 21:11:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:30:29.499 21:11:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:29.499 21:11:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:30:29.499 21:11:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:30:29.499 21:11:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:29.499 21:11:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:29.499 21:11:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:30:29.499 21:11:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:29.499 21:11:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:29.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:29.499 --rc genhtml_branch_coverage=1 00:30:29.499 --rc genhtml_function_coverage=1 00:30:29.499 --rc genhtml_legend=1 00:30:29.499 --rc geninfo_all_blocks=1 00:30:29.499 --rc geninfo_unexecuted_blocks=1 00:30:29.499 00:30:29.499 ' 00:30:29.499 21:11:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:29.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:29.499 --rc genhtml_branch_coverage=1 00:30:29.499 --rc 
genhtml_function_coverage=1 00:30:29.499 --rc genhtml_legend=1 00:30:29.499 --rc geninfo_all_blocks=1 00:30:29.499 --rc geninfo_unexecuted_blocks=1 00:30:29.499 00:30:29.499 ' 00:30:29.499 21:11:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:29.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:29.499 --rc genhtml_branch_coverage=1 00:30:29.499 --rc genhtml_function_coverage=1 00:30:29.499 --rc genhtml_legend=1 00:30:29.499 --rc geninfo_all_blocks=1 00:30:29.499 --rc geninfo_unexecuted_blocks=1 00:30:29.499 00:30:29.499 ' 00:30:29.499 21:11:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:29.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:29.499 --rc genhtml_branch_coverage=1 00:30:29.499 --rc genhtml_function_coverage=1 00:30:29.499 --rc genhtml_legend=1 00:30:29.499 --rc geninfo_all_blocks=1 00:30:29.499 --rc geninfo_unexecuted_blocks=1 00:30:29.499 00:30:29.499 ' 00:30:29.499 21:11:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:29.499 21:11:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:30:29.499 21:11:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:29.499 21:11:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:29.499 21:11:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:29.499 21:11:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:29.499 21:11:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:29.499 21:11:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:29.499 21:11:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:29.499 21:11:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:29.499 
21:11:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:29.499 21:11:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:29.499 21:11:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:29.499 21:11:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:29.499 21:11:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:29.499 21:11:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:29.499 21:11:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:29.499 21:11:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:29.499 21:11:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:29.499 21:11:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:30:29.499 21:11:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:29.499 21:11:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:29.499 21:11:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:29.499 21:11:20 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.499 
21:11:20 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.499 21:11:20 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.499 21:11:20 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:30:29.499 21:11:20 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.499 21:11:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:30:29.499 21:11:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:29.499 21:11:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:29.499 21:11:20 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:29.499 21:11:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:29.499 21:11:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:29.499 21:11:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:29.499 21:11:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:29.499 21:11:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:29.499 21:11:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:29.499 21:11:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:29.499 21:11:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:30:29.499 21:11:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:30:29.499 21:11:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:30:29.499 21:11:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:29.499 21:11:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:29.499 21:11:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:29.499 21:11:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:29.499 21:11:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:29.499 21:11:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:29.499 21:11:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:29.499 21:11:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:29.499 21:11:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:29.499 
21:11:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:29.499 21:11:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:30:29.499 21:11:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:32.037 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:32.037 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:30:32.037 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:32.037 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:32.037 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:32.037 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:32.037 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:32.037 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:30:32.037 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:32.037 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:30:32.037 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:30:32.037 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:30:32.037 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:30:32.037 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:30:32.037 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:30:32.037 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:32.037 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:32.037 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:32.037 21:11:22 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:32.037 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:32.037 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:32.037 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:32.037 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:32.037 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:32.037 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:32.037 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:32.037 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:32.037 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:32.037 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:32.037 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:32.037 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:32.037 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:32.037 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:32.037 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:32.037 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:32.037 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:32.037 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:32.037 21:11:22 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:32.037 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:32.037 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:32.037 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:32.037 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:32.037 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:32.037 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:32.038 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:32.038 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:32.038 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:32.038 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:32.038 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:32.038 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:32.038 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:32.038 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:32.038 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:32.038 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:32.038 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:32.038 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:32.038 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:32.038 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:32.038 21:11:22 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:32.038 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:32.038 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:32.038 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:32.038 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:32.038 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:32.038 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:32.038 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:32.038 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:32.038 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:32.038 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:32.038 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:32.038 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:32.038 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:32.038 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:32.038 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:30:32.038 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:32.038 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:32.038 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:32.038 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:32.038 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:32.038 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:32.038 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:32.038 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:32.038 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:32.038 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:32.038 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:32.038 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:32.038 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:32.038 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:32.038 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:32.038 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:32.038 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:32.038 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:32.038 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:32.038 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:32.038 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:32.038 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:32.038 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:32.038 21:11:22 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:32.038 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:32.038 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:30:32.038 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:30:32.038 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.322 ms
00:30:32.038
00:30:32.038 --- 10.0.0.2 ping statistics ---
00:30:32.038 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:30:32.038 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms
00:30:32.038 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:30:32.038 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:30:32.038 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms
00:30:32.038
00:30:32.038 --- 10.0.0.1 ping statistics ---
00:30:32.038 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:30:32.038 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms
00:30:32.038 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:32.038 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:30:32.038 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:32.038 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:32.038 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:32.038 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:32.038 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:32.038 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:32.038 21:11:22
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:32.038 21:11:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:30:32.038 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:32.038 21:11:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:32.038 21:11:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:32.038 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=4136927 00:30:32.038 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:30:32.038 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 4136927 00:30:32.038 21:11:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 4136927 ']' 00:30:32.038 21:11:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:32.038 21:11:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:32.038 21:11:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:32.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:32.038 21:11:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:32.038 21:11:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:32.038 [2024-11-26 21:11:22.697130] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:32.038 [2024-11-26 21:11:22.698180] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:30:32.038 [2024-11-26 21:11:22.698249] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:32.038 [2024-11-26 21:11:22.779790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:32.038 [2024-11-26 21:11:22.843299] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:32.038 [2024-11-26 21:11:22.843354] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:32.038 [2024-11-26 21:11:22.843380] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:32.038 [2024-11-26 21:11:22.843394] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:32.039 [2024-11-26 21:11:22.843406] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:32.039 [2024-11-26 21:11:22.844941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:32.039 [2024-11-26 21:11:22.844959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:32.039 [2024-11-26 21:11:22.943509] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:32.039 [2024-11-26 21:11:22.943562] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:32.039 [2024-11-26 21:11:22.943812] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:30:32.039 21:11:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:32.039 21:11:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:30:32.039 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:32.039 21:11:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:32.039 21:11:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:32.298 21:11:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:32.298 21:11:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:30:32.298 21:11:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:30:32.298 21:11:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:30:32.298 21:11:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000
00:30:32.298 5000+0 records in
00:30:32.298 5000+0 records out
00:30:32.298 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0138489 s, 739 MB/s
00:30:32.298 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:30:32.298 21:11:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.298 21:11:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:32.298 AIO0 00:30:32.298 21:11:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.298 21:11:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:30:32.298 21:11:23
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:32.298 [2024-11-26 21:11:23.037697] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:32.298 21:11:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.298 21:11:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:30:32.298 21:11:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.298 21:11:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:32.298 21:11:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.298 21:11:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:30:32.298 21:11:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.298 21:11:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:32.298 21:11:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.298 21:11:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:32.298 21:11:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.298 21:11:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:32.298 [2024-11-26 21:11:23.061923] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:32.298 21:11:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.298 21:11:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:30:32.298 21:11:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 4136927 0 00:30:32.298 21:11:23 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@53 -- # reactor_is_busy_or_idle 4136927 0 idle 00:30:32.298 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=4136927 00:30:32.298 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:30:32.298 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:32.298 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:30:32.298 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:32.298 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:30:32.298 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:30:32.298 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:32.298 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:32.298 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:32.298 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 4136927 -w 256 00:30:32.298 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:30:32.298 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='4136927 root 20 0 128.2g 48000 35328 S 0.0 0.1 0:00.29 reactor_0' 00:30:32.298 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 4136927 root 20 0 128.2g 48000 35328 S 0.0 0.1 0:00.29 reactor_0 00:30:32.298 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:32.298 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:32.298 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:30:32.298 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:30:32.298 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:30:32.298 
21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:30:32.298 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:30:32.298 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:32.298 21:11:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:30:32.298 21:11:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 4136927 1 00:30:32.298 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 4136927 1 idle 00:30:32.298 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=4136927 00:30:32.298 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:30:32.298 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:32.298 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:30:32.298 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:32.298 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:30:32.298 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:30:32.298 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:32.298 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:32.298 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:32.556 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 4136927 -w 256 00:30:32.556 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:30:32.556 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='4136937 root 20 0 128.2g 48000 35328 S 0.0 0.1 0:00.00 reactor_1' 00:30:32.556 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 4136937 root 20 0 128.2g 
48000 35328 S 0.0 0.1 0:00.00 reactor_1 00:30:32.556 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:32.556 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:32.556 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:30:32.556 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:30:32.556 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:30:32.556 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:30:32.556 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:30:32.556 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:32.557 21:11:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:30:32.557 21:11:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=4137094 00:30:32.557 21:11:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:32.557 21:11:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:30:32.557 21:11:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:30:32.557 21:11:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 4136927 0 00:30:32.557 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 4136927 0 busy 00:30:32.557 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=4136927 00:30:32.557 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:30:32.557 21:11:23 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@12 -- # local state=busy 00:30:32.557 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:30:32.557 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:32.557 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:30:32.557 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:32.557 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:32.557 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:32.557 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 4136927 -w 256 00:30:32.557 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:30:32.815 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='4136927 root 20 0 128.2g 49152 35712 R 81.2 0.1 0:00.42 reactor_0' 00:30:32.815 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 4136927 root 20 0 128.2g 49152 35712 R 81.2 0.1 0:00.42 reactor_0 00:30:32.815 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:32.815 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:32.815 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=81.2 00:30:32.815 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=81 00:30:32.815 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:30:32.815 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:30:32.815 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:30:32.815 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:32.815 21:11:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:30:32.815 21:11:23 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:30:32.815 21:11:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 4136927 1 00:30:32.815 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 4136927 1 busy 00:30:32.815 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=4136927 00:30:32.815 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:30:32.815 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:30:32.815 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:30:32.815 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:32.815 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:30:32.815 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:32.816 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:32.816 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:32.816 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 4136927 -w 256 00:30:32.816 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:30:32.816 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='4136937 root 20 0 128.2g 49152 35712 R 99.9 0.1 0:00.22 reactor_1' 00:30:32.816 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 4136937 root 20 0 128.2g 49152 35712 R 99.9 0.1 0:00.22 reactor_1 00:30:32.816 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:32.816 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:32.816 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:30:32.816 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=99 00:30:32.816 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:30:32.816 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:30:32.816 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:30:32.816 21:11:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:32.816 21:11:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 4137094
00:30:42.788 Initializing NVMe Controllers
00:30:42.788 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:42.788 Controller IO queue size 256, less than required.
00:30:42.788 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:42.788 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:30:42.788 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:30:42.788 Initialization complete. Launching workers.
00:30:42.788 ========================================================
00:30:42.788 Latency(us)
00:30:42.788 Device Information : IOPS MiB/s Average min max
00:30:42.788 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 13847.19 54.09 18500.60 4448.23 58713.32
00:30:42.788 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 13668.40 53.39 18741.86 4525.36 23127.81
00:30:42.788 ========================================================
00:30:42.788 Total : 27515.59 107.48 18620.45 4448.23 58713.32
00:30:42.788
00:30:42.788 21:11:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:30:42.788 21:11:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 4136927 0 00:30:42.788 21:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 4136927 0 idle 00:30:42.788 21:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=4136927 00:30:42.788 21:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:30:42.788 21:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:42.788 21:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:30:42.788 21:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:42.788 21:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:30:42.788 21:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:30:42.788 21:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:42.788 21:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:42.788 21:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:42.788 21:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 4136927 -w 256 00:30:42.788 21:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- #
grep reactor_0 00:30:43.047 21:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='4136927 root 20 0 128.2g 49152 35712 S 0.0 0.1 0:20.25 reactor_0' 00:30:43.047 21:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 4136927 root 20 0 128.2g 49152 35712 S 0.0 0.1 0:20.25 reactor_0 00:30:43.047 21:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:43.047 21:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:43.047 21:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:30:43.047 21:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:30:43.047 21:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:30:43.047 21:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:30:43.047 21:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:30:43.047 21:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:43.047 21:11:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:30:43.047 21:11:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 4136927 1 00:30:43.047 21:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 4136927 1 idle 00:30:43.047 21:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=4136927 00:30:43.047 21:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:30:43.047 21:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:43.047 21:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:30:43.047 21:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:43.047 21:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:30:43.047 21:11:33 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:30:43.047 21:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:43.047 21:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:43.047 21:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:43.047 21:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 4136927 -w 256 00:30:43.047 21:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:30:43.047 21:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='4136937 root 20 0 128.2g 49152 35712 S 0.0 0.1 0:09.98 reactor_1' 00:30:43.047 21:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 4136937 root 20 0 128.2g 49152 35712 S 0.0 0.1 0:09.98 reactor_1 00:30:43.047 21:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:43.047 21:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:43.047 21:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:30:43.047 21:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:30:43.047 21:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:30:43.047 21:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:30:43.047 21:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:30:43.047 21:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:43.047 21:11:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:30:43.306 21:11:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 
00:30:43.306 21:11:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:30:43.306 21:11:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:30:43.306 21:11:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:30:43.306 21:11:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:30:45.840 21:11:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:30:45.840 21:11:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:30:45.840 21:11:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:30:45.840 21:11:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:30:45.840 21:11:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:30:45.840 21:11:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:30:45.840 21:11:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:30:45.840 21:11:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 4136927 0 00:30:45.840 21:11:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 4136927 0 idle 00:30:45.840 21:11:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=4136927 00:30:45.840 21:11:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:30:45.840 21:11:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:45.840 21:11:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:30:45.840 21:11:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:45.840 21:11:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:30:45.840 21:11:36 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:30:45.840 21:11:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:45.840 21:11:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:45.840 21:11:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:45.840 21:11:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 4136927 -w 256 00:30:45.840 21:11:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:30:45.841 21:11:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='4136927 root 20 0 128.2g 61440 35712 S 0.0 0.1 0:20.34 reactor_0' 00:30:45.841 21:11:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 4136927 root 20 0 128.2g 61440 35712 S 0.0 0.1 0:20.34 reactor_0 00:30:45.841 21:11:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:45.841 21:11:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:45.841 21:11:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:30:45.841 21:11:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:30:45.841 21:11:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:30:45.841 21:11:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:30:45.841 21:11:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:30:45.841 21:11:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:45.841 21:11:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:30:45.841 21:11:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 4136927 1 00:30:45.841 21:11:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 4136927 1 idle 00:30:45.841 21:11:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=4136927 00:30:45.841 
21:11:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:30:45.841 21:11:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:45.841 21:11:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:30:45.841 21:11:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:45.841 21:11:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:30:45.841 21:11:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:30:45.841 21:11:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:45.841 21:11:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:45.841 21:11:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:45.841 21:11:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 4136927 -w 256 00:30:45.841 21:11:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:30:45.841 21:11:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='4136937 root 20 0 128.2g 61440 35712 S 0.0 0.1 0:10.01 reactor_1' 00:30:45.841 21:11:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 4136937 root 20 0 128.2g 61440 35712 S 0.0 0.1 0:10.01 reactor_1 00:30:45.841 21:11:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:45.841 21:11:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:45.841 21:11:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:30:45.841 21:11:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:30:45.841 21:11:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:30:45.841 21:11:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:30:45.841 21:11:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > 
idle_threshold )) 00:30:45.841 21:11:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:45.841 21:11:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:30:45.841 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:30:45.841 21:11:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:30:45.841 21:11:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:30:45.841 21:11:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:30:45.841 21:11:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:45.841 21:11:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:30:45.841 21:11:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:45.841 21:11:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:30:45.841 21:11:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:30:45.841 21:11:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:30:45.841 21:11:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:45.841 21:11:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:30:45.841 21:11:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:45.841 21:11:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:30:45.841 21:11:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:45.841 21:11:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:45.841 rmmod nvme_tcp 00:30:45.841 rmmod nvme_fabrics 00:30:45.841 rmmod nvme_keyring 00:30:45.841 21:11:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:45.841 21:11:36 
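Annotation: the `waitforserial` call after `nvme connect` (and its `waitforserial_disconnect` counterpart after `nvme disconnect`) polls `lsblk` until a block device carrying the target serial appears or vanishes. A sketch of the appearance side, with the retry budget and delay made parameters for illustration (the traced helper hardcodes 15 retries and a 2-second sleep):

```shell
#!/usr/bin/env bash
# Sketch of the waitforserial pattern from autotest_common.sh: poll
# `lsblk -l -o NAME,SERIAL` until the expected number of devices with the
# given serial shows up, bounded by a retry budget.
waitforserial_sketch() {
    local serial=$1 expected=${2:-1} retries=${3:-15} delay=${4:-2}
    local i=0 nvme_devices
    while (( i++ <= retries )); do
        nvme_devices=$(lsblk -l -o NAME,SERIAL 2>/dev/null | grep -c "$serial")
        # Success once at least the expected device count is visible.
        (( nvme_devices >= expected )) && return 0
        sleep "$delay"
    done
    return 1
}
```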
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:30:45.841 21:11:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:30:45.841 21:11:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 4136927 ']' 00:30:45.841 21:11:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 4136927 00:30:45.841 21:11:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 4136927 ']' 00:30:45.841 21:11:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 4136927 00:30:45.841 21:11:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:30:45.841 21:11:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:45.841 21:11:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4136927 00:30:46.100 21:11:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:46.100 21:11:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:46.100 21:11:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4136927' 00:30:46.100 killing process with pid 4136927 00:30:46.100 21:11:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 4136927 00:30:46.100 21:11:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 4136927 00:30:46.360 21:11:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:46.360 21:11:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:46.360 21:11:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:46.360 21:11:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:30:46.360 21:11:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:30:46.360 21:11:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:30:46.360 21:11:37 nvmf_tcp.nvmf_interrupt -- 
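Annotation: the `killprocess 4136927` sequence above guards the kill with two checks, visible in the trace: `kill -0` to confirm the PID exists, and `ps --no-headers -o comm=` to resolve the command name (here `reactor_0`) and refuse to signal `sudo` itself. A sketch of that guard (helper name is illustrative):

```shell
#!/usr/bin/env bash
# Sketch of the killprocess guard traced from autotest_common.sh.
killprocess_sketch() {
    local pid=$1 process_name
    # The process must still exist before we try to signal it.
    kill -0 "$pid" 2>/dev/null || return 1
    # Resolve the command name; never kill sudo itself.
    process_name=$(ps --no-headers -o comm= "$pid")
    if [ "$process_name" = sudo ]; then
        return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
}
```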
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:46.360 21:11:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:46.360 21:11:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:46.360 21:11:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:46.360 21:11:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:46.360 21:11:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:48.266 21:11:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:48.266 00:30:48.266 real 0m18.887s 00:30:48.266 user 0m37.766s 00:30:48.266 sys 0m6.198s 00:30:48.266 21:11:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:48.266 21:11:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:48.266 ************************************ 00:30:48.266 END TEST nvmf_interrupt 00:30:48.266 ************************************ 00:30:48.266 00:30:48.266 real 25m17.115s 00:30:48.266 user 59m10.480s 00:30:48.266 sys 6m41.954s 00:30:48.266 21:11:39 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:48.266 21:11:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:48.266 ************************************ 00:30:48.266 END TEST nvmf_tcp 00:30:48.266 ************************************ 00:30:48.266 21:11:39 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:30:48.266 21:11:39 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:48.266 21:11:39 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:48.266 21:11:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:48.266 21:11:39 -- common/autotest_common.sh@10 -- # set +x 00:30:48.266 ************************************ 
00:30:48.266 START TEST spdkcli_nvmf_tcp 00:30:48.266 ************************************ 00:30:48.266 21:11:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:48.525 * Looking for test storage... 00:30:48.525 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:30:48.525 21:11:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:48.525 21:11:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:30:48.525 21:11:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:48.525 21:11:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:48.525 21:11:39 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:48.525 21:11:39 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:48.525 21:11:39 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:48.525 21:11:39 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:30:48.525 21:11:39 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:30:48.525 21:11:39 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:30:48.525 21:11:39 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:30:48.525 21:11:39 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:30:48.525 21:11:39 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:30:48.525 21:11:39 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:30:48.525 21:11:39 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:48.525 21:11:39 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:30:48.525 21:11:39 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:30:48.525 21:11:39 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:48.525 21:11:39 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < 
(ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:48.525 21:11:39 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:30:48.525 21:11:39 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:30:48.525 21:11:39 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:48.525 21:11:39 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:30:48.525 21:11:39 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:30:48.525 21:11:39 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:30:48.525 21:11:39 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:30:48.525 21:11:39 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:48.525 21:11:39 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:30:48.525 21:11:39 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:30:48.525 21:11:39 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:48.525 21:11:39 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:48.525 21:11:39 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:30:48.525 21:11:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:48.525 21:11:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:48.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:48.525 --rc genhtml_branch_coverage=1 00:30:48.525 --rc genhtml_function_coverage=1 00:30:48.525 --rc genhtml_legend=1 00:30:48.525 --rc geninfo_all_blocks=1 00:30:48.525 --rc geninfo_unexecuted_blocks=1 00:30:48.525 00:30:48.525 ' 00:30:48.525 21:11:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:48.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:48.525 --rc genhtml_branch_coverage=1 00:30:48.525 --rc genhtml_function_coverage=1 00:30:48.525 --rc genhtml_legend=1 00:30:48.525 --rc geninfo_all_blocks=1 
00:30:48.525 --rc geninfo_unexecuted_blocks=1 00:30:48.525 00:30:48.525 ' 00:30:48.525 21:11:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:48.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:48.525 --rc genhtml_branch_coverage=1 00:30:48.525 --rc genhtml_function_coverage=1 00:30:48.525 --rc genhtml_legend=1 00:30:48.525 --rc geninfo_all_blocks=1 00:30:48.525 --rc geninfo_unexecuted_blocks=1 00:30:48.525 00:30:48.525 ' 00:30:48.525 21:11:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:48.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:48.525 --rc genhtml_branch_coverage=1 00:30:48.525 --rc genhtml_function_coverage=1 00:30:48.525 --rc genhtml_legend=1 00:30:48.525 --rc geninfo_all_blocks=1 00:30:48.525 --rc geninfo_unexecuted_blocks=1 00:30:48.525 00:30:48.525 ' 00:30:48.525 21:11:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:30:48.525 21:11:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:30:48.525 21:11:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:30:48.525 21:11:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:48.525 21:11:39 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:30:48.525 21:11:39 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:48.525 21:11:39 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:48.525 21:11:39 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:48.525 21:11:39 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:48.525 21:11:39 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
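Annotation: the `lt 1.15 2` trace above (scripts/common.sh `cmp_versions`) splits both version strings on `.`, `-`, and `:` into arrays and compares them field by field, padding the shorter one with zeros. A self-contained sketch of that comparison, assuming numeric fields as in the trace:

```shell
#!/usr/bin/env bash
# Sketch of the cmp_versions "less than" path traced from scripts/common.sh.
# Fields are assumed numeric; a non-numeric field (e.g. "rc1") would need
# extra handling that the sketch omits.
version_lt() {
    local -a ver1 ver2
    local v ver1_l ver2_l
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
        # Missing fields compare as 0, so 1.15 vs 2 works field-wise.
        if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then return 1; fi
        if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then return 0; fi
    done
    return 1   # equal versions are not "less than"
}
```

This is why the lcov version gate in the trace takes the `lt 1.15 2` branch: 1 < 2 decides the result at the first field.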
00:30:48.525 21:11:39 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:48.525 21:11:39 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:48.525 21:11:39 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:48.525 21:11:39 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:48.525 21:11:39 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:48.525 21:11:39 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:48.525 21:11:39 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:48.525 21:11:39 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:48.525 21:11:39 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:48.525 21:11:39 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:48.525 21:11:39 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:48.525 21:11:39 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:48.525 21:11:39 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:30:48.525 21:11:39 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:48.525 21:11:39 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:48.525 21:11:39 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:48.525 21:11:39 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.525 21:11:39 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.525 21:11:39 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.525 21:11:39 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:30:48.525 21:11:39 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.525 21:11:39 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:30:48.525 21:11:39 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:30:48.525 21:11:39 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:48.526 21:11:39 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:48.526 21:11:39 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:48.526 21:11:39 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:48.526 21:11:39 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:48.526 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:48.526 21:11:39 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:48.526 21:11:39 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:48.526 21:11:39 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:48.526 21:11:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:30:48.526 21:11:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:30:48.526 21:11:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:30:48.526 21:11:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:30:48.526 21:11:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:48.526 21:11:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:48.526 21:11:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:30:48.526 21:11:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=4139095 00:30:48.526 21:11:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:30:48.526 21:11:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 4139095 00:30:48.526 21:11:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 4139095 ']' 00:30:48.526 21:11:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:48.526 
21:11:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:48.526 21:11:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:48.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:48.526 21:11:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:48.526 21:11:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:48.526 [2024-11-26 21:11:39.401983] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:30:48.526 [2024-11-26 21:11:39.402085] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4139095 ] 00:30:48.784 [2024-11-26 21:11:39.471308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:48.784 [2024-11-26 21:11:39.529855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:48.784 [2024-11-26 21:11:39.529860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:48.784 21:11:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:48.784 21:11:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:30:48.784 21:11:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:30:48.784 21:11:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:48.784 21:11:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:48.784 21:11:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:30:48.784 21:11:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:30:48.784 21:11:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 
00:30:48.784 21:11:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:48.784 21:11:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:48.784 21:11:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:30:48.784 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:30:48.784 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:30:48.784 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:30:48.784 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:30:48.784 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:30:48.784 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:30:48.784 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:48.784 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:30:48.784 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:30:48.784 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:48.784 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:48.784 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:30:48.785 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:48.785 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 
00:30:48.785 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:30:48.785 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:48.785 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:48.785 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:48.785 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:48.785 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:30:48.785 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:30:48.785 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:48.785 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:30:48.785 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:48.785 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:30:48.785 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:30:48.785 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:30:48.785 ' 00:30:52.070 [2024-11-26 21:11:42.312870] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:53.004 [2024-11-26 21:11:43.585498] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:30:55.556 [2024-11-26 21:11:45.960772] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 127.0.0.1 port 4261 *** 00:30:57.453 [2024-11-26 21:11:48.011421] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:30:58.825 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:30:58.825 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:30:58.825 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:30:58.825 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:30:58.825 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:30:58.825 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:30:58.825 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:30:58.825 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:58.825 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:30:58.825 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:30:58.825 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:58.825 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:58.825 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:30:58.825 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:58.825 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 
'nqn.2014-08.org.spdk:cnode2', True] 00:30:58.825 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:30:58.825 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:58.825 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:58.825 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:58.825 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:58.825 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:30:58.825 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:30:58.825 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:58.825 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:30:58.825 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:58.825 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:30:58.825 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:30:58.825 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:30:58.825 21:11:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:30:58.825 21:11:49 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@732 -- # xtrace_disable 00:30:58.825 21:11:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:58.825 21:11:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:30:58.825 21:11:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:58.825 21:11:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:58.825 21:11:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:30:58.825 21:11:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:30:59.391 21:11:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:30:59.391 21:11:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:30:59.391 21:11:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:30:59.391 21:11:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:59.391 21:11:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:59.391 21:11:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:30:59.391 21:11:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:59.391 21:11:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:59.391 21:11:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:30:59.391 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:30:59.391 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts 
delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:59.391 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:30:59.391 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:30:59.391 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:30:59.391 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:30:59.391 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:59.391 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:30:59.391 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:30:59.391 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:30:59.391 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:30:59.391 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:30:59.391 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:30:59.391 ' 00:31:04.654 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:31:04.654 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:31:04.654 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:31:04.654 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:31:04.654 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:31:04.654 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:31:04.654 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:31:04.654 
Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:31:04.654 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:31:04.654 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:31:04.654 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:31:04.654 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:31:04.654 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:31:04.654 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:31:04.912 21:11:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:31:04.912 21:11:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:04.912 21:11:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:04.912 21:11:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 4139095 00:31:04.912 21:11:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 4139095 ']' 00:31:04.912 21:11:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 4139095 00:31:04.912 21:11:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:31:04.912 21:11:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:04.912 21:11:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4139095 00:31:04.912 21:11:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:04.912 21:11:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:04.912 21:11:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4139095' 00:31:04.912 killing process with pid 4139095 00:31:04.912 21:11:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 4139095 00:31:04.912 21:11:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 4139095 00:31:05.170 21:11:55 
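The `killprocess`/`wait` sequence in the trace above (kill -0 to probe, kill to terminate, wait to reap, then a "No such process" fallback) follows a common shell teardown pattern; a minimal self-contained sketch of that flow (the function name and simplified logic here are assumptions based on the trace, not the actual autotest_common.sh source):

```shell
# Minimal sketch of the kill-and-reap pattern visible in the trace:
# probe that the pid exists, send SIGKILL, then wait so no zombie remains.
kill_and_wait() {
    pid="$1"
    if ! kill -0 "$pid" 2>/dev/null; then
        echo "Process with pid $pid is not found"
        return 1
    fi
    kill -9 "$pid"
    wait "$pid" 2>/dev/null
    return 0
}

sleep 30 &              # stand-in for the spdk_tgt process in the log
tgt_pid=$!
kill_and_wait "$tgt_pid" && echo "killed $tgt_pid"
```

The second `killprocess` call in the log hits the not-found branch because the common.sh cleanup runs after the test already killed the target by the same pid.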
spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:31:05.171 21:11:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:31:05.171 21:11:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 4139095 ']' 00:31:05.171 21:11:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 4139095 00:31:05.171 21:11:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 4139095 ']' 00:31:05.171 21:11:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 4139095 00:31:05.171 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (4139095) - No such process 00:31:05.171 21:11:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 4139095 is not found' 00:31:05.171 Process with pid 4139095 is not found 00:31:05.171 21:11:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:31:05.171 21:11:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:31:05.171 21:11:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:31:05.171 00:31:05.171 real 0m16.773s 00:31:05.171 user 0m35.827s 00:31:05.171 sys 0m0.803s 00:31:05.171 21:11:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:05.171 21:11:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:05.171 ************************************ 00:31:05.171 END TEST spdkcli_nvmf_tcp 00:31:05.171 ************************************ 00:31:05.171 21:11:55 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:31:05.171 21:11:55 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:05.171 21:11:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:31:05.171 21:11:55 -- common/autotest_common.sh@10 -- # set +x 00:31:05.171 ************************************ 00:31:05.171 START TEST nvmf_identify_passthru 00:31:05.171 ************************************ 00:31:05.171 21:11:55 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:31:05.171 * Looking for test storage... 00:31:05.171 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:05.171 21:11:56 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:05.171 21:11:56 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:31:05.171 21:11:56 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:05.430 21:11:56 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:05.430 21:11:56 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:05.430 21:11:56 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:05.430 21:11:56 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:05.430 21:11:56 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:31:05.430 21:11:56 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:31:05.430 21:11:56 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:31:05.430 21:11:56 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:31:05.430 21:11:56 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:31:05.430 21:11:56 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:31:05.430 21:11:56 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:31:05.430 21:11:56 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:05.430 21:11:56 nvmf_identify_passthru -- scripts/common.sh@344 -- # 
case "$op" in 00:31:05.430 21:11:56 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:31:05.430 21:11:56 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:05.430 21:11:56 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:05.430 21:11:56 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:31:05.430 21:11:56 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:31:05.430 21:11:56 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:05.430 21:11:56 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:31:05.430 21:11:56 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:31:05.430 21:11:56 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:31:05.430 21:11:56 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:31:05.430 21:11:56 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:05.430 21:11:56 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:31:05.430 21:11:56 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:31:05.430 21:11:56 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:05.430 21:11:56 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:05.430 21:11:56 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:31:05.430 21:11:56 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:05.430 21:11:56 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:05.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:05.430 --rc genhtml_branch_coverage=1 00:31:05.430 --rc genhtml_function_coverage=1 00:31:05.430 --rc genhtml_legend=1 00:31:05.430 --rc geninfo_all_blocks=1 00:31:05.430 --rc geninfo_unexecuted_blocks=1 00:31:05.430 
00:31:05.430 ' 00:31:05.430 21:11:56 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:05.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:05.430 --rc genhtml_branch_coverage=1 00:31:05.430 --rc genhtml_function_coverage=1 00:31:05.430 --rc genhtml_legend=1 00:31:05.430 --rc geninfo_all_blocks=1 00:31:05.430 --rc geninfo_unexecuted_blocks=1 00:31:05.430 00:31:05.430 ' 00:31:05.430 21:11:56 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:05.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:05.430 --rc genhtml_branch_coverage=1 00:31:05.430 --rc genhtml_function_coverage=1 00:31:05.430 --rc genhtml_legend=1 00:31:05.430 --rc geninfo_all_blocks=1 00:31:05.430 --rc geninfo_unexecuted_blocks=1 00:31:05.430 00:31:05.430 ' 00:31:05.430 21:11:56 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:05.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:05.430 --rc genhtml_branch_coverage=1 00:31:05.430 --rc genhtml_function_coverage=1 00:31:05.430 --rc genhtml_legend=1 00:31:05.430 --rc geninfo_all_blocks=1 00:31:05.430 --rc geninfo_unexecuted_blocks=1 00:31:05.430 00:31:05.430 ' 00:31:05.431 21:11:56 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:05.431 21:11:56 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:31:05.431 21:11:56 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:05.431 21:11:56 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:05.431 21:11:56 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:05.431 21:11:56 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:05.431 21:11:56 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:05.431 21:11:56 nvmf_identify_passthru -- 
nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:05.431 21:11:56 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:05.431 21:11:56 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:05.431 21:11:56 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:05.431 21:11:56 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:05.431 21:11:56 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:05.431 21:11:56 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:05.431 21:11:56 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:05.431 21:11:56 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:05.431 21:11:56 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:05.431 21:11:56 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:05.431 21:11:56 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:05.431 21:11:56 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:31:05.431 21:11:56 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:05.431 21:11:56 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:05.431 21:11:56 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:05.431 21:11:56 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.431 21:11:56 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.431 21:11:56 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.431 21:11:56 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:31:05.431 21:11:56 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.431 21:11:56 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:31:05.431 21:11:56 
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:05.431 21:11:56 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:05.431 21:11:56 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:05.431 21:11:56 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:05.431 21:11:56 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:05.431 21:11:56 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:05.431 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:05.431 21:11:56 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:05.431 21:11:56 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:05.431 21:11:56 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:05.431 21:11:56 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:05.431 21:11:56 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:31:05.431 21:11:56 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:05.431 21:11:56 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:05.431 21:11:56 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:05.431 21:11:56 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.431 21:11:56 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.431 21:11:56 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.431 21:11:56 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:31:05.431 21:11:56 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.431 21:11:56 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:31:05.431 21:11:56 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:05.431 21:11:56 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:05.431 21:11:56 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:05.431 21:11:56 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:05.431 21:11:56 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:05.431 21:11:56 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:05.431 21:11:56 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:05.431 21:11:56 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:05.431 21:11:56 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:05.431 21:11:56 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:05.431 21:11:56 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:31:05.431 21:11:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:07.353 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:07.353 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:31:07.353 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@315 
-- # local -a pci_devs 00:31:07.353 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:07.353 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:07.353 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:07.353 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:07.353 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:31:07.353 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:07.353 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:31:07.353 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:31:07.353 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:31:07.353 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:31:07.353 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:31:07.353 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:31:07.353 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:07.353 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:07.353 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:07.354 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:07.354 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:07.354 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:07.354 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:07.354 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:07.354 
21:11:58 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:07.354 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:07.354 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:07.354 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:07.354 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:07.354 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:07.354 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:07.354 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:07.354 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:07.354 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:07.354 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:07.354 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:07.354 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:07.354 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:07.354 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:07.354 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:07.354 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:07.354 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:07.354 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:07.354 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:07.354 Found 0000:0a:00.1 
(0x8086 - 0x159b) 00:31:07.354 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:07.354 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:07.354 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:07.354 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:07.354 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:07.354 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:07.354 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:07.354 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:07.354 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:07.354 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:07.354 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:07.354 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:07.354 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:07.354 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:07.354 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:07.354 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:07.354 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:07.354 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:07.354 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:07.354 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:07.354 21:11:58 
nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:07.354 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:07.354 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:07.354 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:07.354 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:07.354 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:07.354 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:07.354 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:07.354 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:07.354 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:31:07.354 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:07.354 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:07.354 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:07.354 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:07.354 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:07.354 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:07.354 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:07.354 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:07.354 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:07.354 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:07.354 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:07.354 
21:11:58 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:07.354 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:07.354 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:07.354 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:07.354 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:07.354 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:07.354 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:07.354 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:07.354 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:07.354 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:07.354 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:07.613 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:07.613 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:07.613 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:07.613 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:07.613 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:07.613 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:31:07.613 00:31:07.613 --- 10.0.0.2 ping statistics --- 00:31:07.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:07.613 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:31:07.613 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:07.613 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:07.613 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:31:07.613 00:31:07.613 --- 10.0.0.1 ping statistics --- 00:31:07.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:07.613 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:31:07.613 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:07.613 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:31:07.613 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:07.613 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:07.613 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:07.613 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:07.613 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:07.613 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:07.613 21:11:58 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:07.613 21:11:58 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:31:07.613 21:11:58 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:07.613 21:11:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:07.613 21:11:58 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:31:07.613 
21:11:58 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:31:07.613 21:11:58 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:31:07.613 21:11:58 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:31:07.613 21:11:58 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:31:07.613 21:11:58 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:31:07.613 21:11:58 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:31:07.613 21:11:58 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:07.613 21:11:58 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:07.613 21:11:58 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:31:07.613 21:11:58 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:31:07.613 21:11:58 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:31:07.613 21:11:58 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:88:00.0 00:31:07.613 21:11:58 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:31:07.614 21:11:58 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:31:07.614 21:11:58 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:31:07.614 21:11:58 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:31:07.614 21:11:58 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:31:11.797 21:12:02 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:31:11.797 21:12:02 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:31:11.797 21:12:02 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:31:11.797 21:12:02 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:31:15.981 21:12:06 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:31:15.981 21:12:06 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:31:15.981 21:12:06 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:15.981 21:12:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:15.981 21:12:06 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:31:15.981 21:12:06 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:15.981 21:12:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:15.981 21:12:06 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=4143839 00:31:15.981 21:12:06 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:31:15.981 21:12:06 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:15.981 21:12:06 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 4143839 00:31:15.981 21:12:06 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 4143839 ']' 00:31:15.981 21:12:06 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:31:15.981 21:12:06 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:15.981 21:12:06 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:15.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:15.981 21:12:06 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:15.981 21:12:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:15.981 [2024-11-26 21:12:06.910355] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:31:15.982 [2024-11-26 21:12:06.910461] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:16.240 [2024-11-26 21:12:06.985750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:16.240 [2024-11-26 21:12:07.044460] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:16.240 [2024-11-26 21:12:07.044546] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:16.240 [2024-11-26 21:12:07.044560] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:16.240 [2024-11-26 21:12:07.044570] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:16.240 [2024-11-26 21:12:07.044579] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:16.240 [2024-11-26 21:12:07.046263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:16.240 [2024-11-26 21:12:07.046331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:16.240 [2024-11-26 21:12:07.046397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:16.240 [2024-11-26 21:12:07.046400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:16.240 21:12:07 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:16.240 21:12:07 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:31:16.240 21:12:07 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:31:16.240 21:12:07 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:16.240 21:12:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:16.240 INFO: Log level set to 20 00:31:16.240 INFO: Requests: 00:31:16.240 { 00:31:16.240 "jsonrpc": "2.0", 00:31:16.240 "method": "nvmf_set_config", 00:31:16.240 "id": 1, 00:31:16.240 "params": { 00:31:16.240 "admin_cmd_passthru": { 00:31:16.240 "identify_ctrlr": true 00:31:16.240 } 00:31:16.240 } 00:31:16.240 } 00:31:16.240 00:31:16.240 INFO: response: 00:31:16.240 { 00:31:16.240 "jsonrpc": "2.0", 00:31:16.240 "id": 1, 00:31:16.240 "result": true 00:31:16.240 } 00:31:16.240 00:31:16.240 21:12:07 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:16.240 21:12:07 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:31:16.240 21:12:07 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:16.240 21:12:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:16.240 INFO: Setting log level to 20 00:31:16.240 INFO: Setting log level to 20 00:31:16.240 INFO: Log level set to 20 00:31:16.240 INFO: Log level set to 20 00:31:16.240 
INFO: Requests: 00:31:16.240 { 00:31:16.240 "jsonrpc": "2.0", 00:31:16.240 "method": "framework_start_init", 00:31:16.240 "id": 1 00:31:16.240 } 00:31:16.240 00:31:16.240 INFO: Requests: 00:31:16.240 { 00:31:16.240 "jsonrpc": "2.0", 00:31:16.240 "method": "framework_start_init", 00:31:16.240 "id": 1 00:31:16.240 } 00:31:16.240 00:31:16.498 [2024-11-26 21:12:07.256219] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:31:16.498 INFO: response: 00:31:16.498 { 00:31:16.498 "jsonrpc": "2.0", 00:31:16.498 "id": 1, 00:31:16.498 "result": true 00:31:16.498 } 00:31:16.498 00:31:16.498 INFO: response: 00:31:16.498 { 00:31:16.498 "jsonrpc": "2.0", 00:31:16.498 "id": 1, 00:31:16.498 "result": true 00:31:16.498 } 00:31:16.498 00:31:16.498 21:12:07 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:16.498 21:12:07 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:16.498 21:12:07 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:16.498 21:12:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:16.498 INFO: Setting log level to 40 00:31:16.498 INFO: Setting log level to 40 00:31:16.498 INFO: Setting log level to 40 00:31:16.498 [2024-11-26 21:12:07.266297] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:16.498 21:12:07 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:16.498 21:12:07 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:31:16.498 21:12:07 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:16.498 21:12:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:16.498 21:12:07 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:31:16.498 21:12:07 
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:16.498 21:12:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:19.784 Nvme0n1 00:31:19.784 21:12:10 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:19.784 21:12:10 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:31:19.784 21:12:10 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:19.784 21:12:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:19.784 21:12:10 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:19.784 21:12:10 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:31:19.784 21:12:10 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:19.784 21:12:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:19.784 21:12:10 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:19.784 21:12:10 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:19.784 21:12:10 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:19.784 21:12:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:19.784 [2024-11-26 21:12:10.165956] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:19.784 21:12:10 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:19.784 21:12:10 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:31:19.784 21:12:10 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:19.784 21:12:10 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:19.784 [ 00:31:19.784 { 00:31:19.784 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:19.784 "subtype": "Discovery", 00:31:19.784 "listen_addresses": [], 00:31:19.784 "allow_any_host": true, 00:31:19.784 "hosts": [] 00:31:19.784 }, 00:31:19.784 { 00:31:19.784 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:19.784 "subtype": "NVMe", 00:31:19.784 "listen_addresses": [ 00:31:19.784 { 00:31:19.784 "trtype": "TCP", 00:31:19.784 "adrfam": "IPv4", 00:31:19.784 "traddr": "10.0.0.2", 00:31:19.784 "trsvcid": "4420" 00:31:19.784 } 00:31:19.784 ], 00:31:19.784 "allow_any_host": true, 00:31:19.784 "hosts": [], 00:31:19.784 "serial_number": "SPDK00000000000001", 00:31:19.784 "model_number": "SPDK bdev Controller", 00:31:19.784 "max_namespaces": 1, 00:31:19.785 "min_cntlid": 1, 00:31:19.785 "max_cntlid": 65519, 00:31:19.785 "namespaces": [ 00:31:19.785 { 00:31:19.785 "nsid": 1, 00:31:19.785 "bdev_name": "Nvme0n1", 00:31:19.785 "name": "Nvme0n1", 00:31:19.785 "nguid": "62056C3C952D4DFD90F1D13A00B5E23B", 00:31:19.785 "uuid": "62056c3c-952d-4dfd-90f1-d13a00b5e23b" 00:31:19.785 } 00:31:19.785 ] 00:31:19.785 } 00:31:19.785 ] 00:31:19.785 21:12:10 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:19.785 21:12:10 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:19.785 21:12:10 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:31:19.785 21:12:10 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:31:19.785 21:12:10 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:31:19.785 21:12:10 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:19.785 21:12:10 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:31:19.785 21:12:10 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:31:20.043 21:12:10 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:31:20.043 21:12:10 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:31:20.043 21:12:10 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:31:20.043 21:12:10 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:20.043 21:12:10 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.043 21:12:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:20.043 21:12:10 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.043 21:12:10 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:31:20.043 21:12:10 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:31:20.043 21:12:10 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:20.043 21:12:10 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:31:20.043 21:12:10 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:20.043 21:12:10 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:31:20.043 21:12:10 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:20.043 21:12:10 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:20.043 rmmod nvme_tcp 00:31:20.043 rmmod nvme_fabrics 00:31:20.043 rmmod nvme_keyring 00:31:20.043 21:12:10 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:20.043 21:12:10 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:31:20.043 21:12:10 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:31:20.043 21:12:10 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 4143839 ']' 00:31:20.043 21:12:10 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 4143839 00:31:20.043 21:12:10 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 4143839 ']' 00:31:20.043 21:12:10 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 4143839 00:31:20.043 21:12:10 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:31:20.043 21:12:10 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:20.043 21:12:10 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4143839 00:31:20.043 21:12:10 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:20.043 21:12:10 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:20.043 21:12:10 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4143839' 00:31:20.043 killing process with pid 4143839 00:31:20.043 21:12:10 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 4143839 00:31:20.043 21:12:10 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 4143839 00:31:21.947 21:12:12 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:21.947 21:12:12 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:21.947 21:12:12 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:21.947 21:12:12 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:31:21.947 21:12:12 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:31:21.947 21:12:12 nvmf_identify_passthru -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:21.947 21:12:12 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:31:21.947 21:12:12 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:21.947 21:12:12 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:21.947 21:12:12 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:21.948 21:12:12 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:21.948 21:12:12 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:23.855 21:12:14 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:23.855 00:31:23.855 real 0m18.480s 00:31:23.855 user 0m27.213s 00:31:23.855 sys 0m3.223s 00:31:23.855 21:12:14 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:23.855 21:12:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:23.855 ************************************ 00:31:23.855 END TEST nvmf_identify_passthru 00:31:23.855 ************************************ 00:31:23.855 21:12:14 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:31:23.855 21:12:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:23.855 21:12:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:23.855 21:12:14 -- common/autotest_common.sh@10 -- # set +x 00:31:23.855 ************************************ 00:31:23.855 START TEST nvmf_dif 00:31:23.855 ************************************ 00:31:23.855 21:12:14 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:31:23.855 * Looking for test storage... 
00:31:23.855 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:23.855 21:12:14 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:23.855 21:12:14 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:31:23.855 21:12:14 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:23.855 21:12:14 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:23.855 21:12:14 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:23.855 21:12:14 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:23.855 21:12:14 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:23.855 21:12:14 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:31:23.855 21:12:14 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:31:23.855 21:12:14 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:31:23.855 21:12:14 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:31:23.855 21:12:14 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:31:23.855 21:12:14 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:31:23.855 21:12:14 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:31:23.855 21:12:14 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:23.855 21:12:14 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:31:23.855 21:12:14 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:31:23.855 21:12:14 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:23.855 21:12:14 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:23.855 21:12:14 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:31:23.855 21:12:14 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:31:23.855 21:12:14 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:23.855 21:12:14 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:31:23.855 21:12:14 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:31:23.855 21:12:14 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:31:23.855 21:12:14 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:31:23.855 21:12:14 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:23.855 21:12:14 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:31:23.855 21:12:14 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:31:23.855 21:12:14 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:23.855 21:12:14 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:23.855 21:12:14 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:31:23.855 21:12:14 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:23.855 21:12:14 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:23.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:23.855 --rc genhtml_branch_coverage=1 00:31:23.855 --rc genhtml_function_coverage=1 00:31:23.855 --rc genhtml_legend=1 00:31:23.855 --rc geninfo_all_blocks=1 00:31:23.855 --rc geninfo_unexecuted_blocks=1 00:31:23.855 00:31:23.855 ' 00:31:23.855 21:12:14 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:23.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:23.855 --rc genhtml_branch_coverage=1 00:31:23.855 --rc genhtml_function_coverage=1 00:31:23.855 --rc genhtml_legend=1 00:31:23.855 --rc geninfo_all_blocks=1 00:31:23.855 --rc geninfo_unexecuted_blocks=1 00:31:23.855 00:31:23.855 ' 00:31:23.855 21:12:14 nvmf_dif -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:31:23.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:23.855 --rc genhtml_branch_coverage=1 00:31:23.855 --rc genhtml_function_coverage=1 00:31:23.855 --rc genhtml_legend=1 00:31:23.855 --rc geninfo_all_blocks=1 00:31:23.855 --rc geninfo_unexecuted_blocks=1 00:31:23.855 00:31:23.855 ' 00:31:23.855 21:12:14 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:23.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:23.855 --rc genhtml_branch_coverage=1 00:31:23.855 --rc genhtml_function_coverage=1 00:31:23.855 --rc genhtml_legend=1 00:31:23.855 --rc geninfo_all_blocks=1 00:31:23.855 --rc geninfo_unexecuted_blocks=1 00:31:23.855 00:31:23.855 ' 00:31:23.855 21:12:14 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:23.855 21:12:14 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:31:23.855 21:12:14 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:23.855 21:12:14 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:23.855 21:12:14 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:23.855 21:12:14 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:23.855 21:12:14 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:23.855 21:12:14 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:23.855 21:12:14 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:23.855 21:12:14 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:23.855 21:12:14 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:23.855 21:12:14 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:23.855 21:12:14 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:23.855 21:12:14 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:23.856 21:12:14 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:23.856 21:12:14 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:23.856 21:12:14 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:23.856 21:12:14 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:23.856 21:12:14 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:23.856 21:12:14 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:31:23.856 21:12:14 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:23.856 21:12:14 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:23.856 21:12:14 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:23.856 21:12:14 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:23.856 21:12:14 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:23.856 21:12:14 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:23.856 21:12:14 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:31:23.856 21:12:14 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:23.856 21:12:14 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:31:23.856 21:12:14 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:23.856 21:12:14 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:23.856 21:12:14 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:23.856 21:12:14 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:23.856 21:12:14 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:23.856 21:12:14 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:23.856 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:23.856 21:12:14 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:23.856 21:12:14 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:23.856 21:12:14 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:23.856 21:12:14 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:31:23.856 21:12:14 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:31:23.856 21:12:14 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:31:23.856 21:12:14 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:31:23.856 21:12:14 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:31:23.856 21:12:14 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:23.856 21:12:14 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:23.856 21:12:14 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:23.856 21:12:14 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:23.856 21:12:14 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:23.856 21:12:14 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:23.856 21:12:14 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:23.856 21:12:14 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:23.856 21:12:14 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:23.856 21:12:14 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:23.856 21:12:14 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:31:23.856 21:12:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:25.763 21:12:16 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:25.763 21:12:16 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:31:25.763 21:12:16 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:25.763 21:12:16 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:25.763 21:12:16 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:25.763 21:12:16 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:25.763 21:12:16 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:25.763 21:12:16 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:31:25.763 21:12:16 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:25.763 21:12:16 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:31:25.763 21:12:16 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:31:25.763 21:12:16 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:31:25.763 21:12:16 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:31:25.763 21:12:16 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:31:25.763 21:12:16 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:31:25.763 21:12:16 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:25.763 21:12:16 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:25.763 21:12:16 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:25.763 21:12:16 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:25.763 21:12:16 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:25.763 21:12:16 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:25.763 21:12:16 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:25.763 21:12:16 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:25.763 21:12:16 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:25.763 21:12:16 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:25.763 21:12:16 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:25.763 21:12:16 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:25.763 21:12:16 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:25.763 21:12:16 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:25.763 21:12:16 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:25.763 21:12:16 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:25.763 21:12:16 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:25.763 21:12:16 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:31:25.763 21:12:16 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:25.763 21:12:16 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:25.763 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:25.763 21:12:16 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:25.763 21:12:16 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:25.763 21:12:16 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:25.763 21:12:16 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:25.763 21:12:16 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:25.763 21:12:16 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:25.763 21:12:16 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:25.763 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:25.763 21:12:16 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:25.763 21:12:16 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:25.763 21:12:16 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:25.763 21:12:16 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:25.763 21:12:16 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:25.763 21:12:16 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:25.763 21:12:16 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:25.763 21:12:16 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:25.763 21:12:16 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:25.763 21:12:16 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:25.763 21:12:16 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:25.763 21:12:16 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:25.763 21:12:16 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:25.763 21:12:16 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:25.763 21:12:16 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:25.763 21:12:16 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:25.763 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:25.763 21:12:16 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:25.763 21:12:16 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:25.764 21:12:16 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:25.764 21:12:16 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:25.764 21:12:16 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:25.764 21:12:16 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:25.764 21:12:16 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:25.764 21:12:16 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:25.764 21:12:16 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:25.764 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:25.764 21:12:16 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:25.764 21:12:16 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:25.764 21:12:16 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:31:25.764 21:12:16 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:25.764 21:12:16 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:25.764 21:12:16 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:25.764 21:12:16 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:25.764 21:12:16 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:25.764 21:12:16 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:25.764 21:12:16 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:25.764 
21:12:16 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:25.764 21:12:16 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:25.764 21:12:16 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:25.764 21:12:16 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:25.764 21:12:16 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:25.764 21:12:16 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:25.764 21:12:16 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:25.764 21:12:16 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:25.764 21:12:16 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:25.764 21:12:16 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:25.764 21:12:16 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:26.021 21:12:16 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:26.021 21:12:16 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:26.021 21:12:16 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:26.021 21:12:16 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:26.021 21:12:16 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:26.021 21:12:16 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:26.021 21:12:16 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:26.021 21:12:16 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:26.021 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:26.021 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:31:26.021 00:31:26.021 --- 10.0.0.2 ping statistics --- 00:31:26.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:26.021 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:31:26.021 21:12:16 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:26.021 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:26.021 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:31:26.021 00:31:26.021 --- 10.0.0.1 ping statistics --- 00:31:26.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:26.021 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:31:26.021 21:12:16 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:26.021 21:12:16 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:31:26.021 21:12:16 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:31:26.021 21:12:16 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:26.955 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:31:26.955 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:31:26.955 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:31:26.955 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:31:26.955 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:31:26.955 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:31:26.955 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:31:26.955 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:31:26.955 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:31:26.955 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:31:26.955 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:31:26.955 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:31:26.955 0000:80:04.4 (8086 0e24): Already 
using the vfio-pci driver 00:31:26.955 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:31:26.955 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:31:26.955 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:31:26.955 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:31:27.213 21:12:18 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:27.213 21:12:18 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:27.213 21:12:18 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:27.213 21:12:18 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:27.213 21:12:18 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:27.213 21:12:18 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:27.213 21:12:18 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:31:27.213 21:12:18 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:31:27.213 21:12:18 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:27.213 21:12:18 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:27.213 21:12:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:27.213 21:12:18 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=4147615 00:31:27.213 21:12:18 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:31:27.213 21:12:18 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 4147615 00:31:27.213 21:12:18 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 4147615 ']' 00:31:27.213 21:12:18 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:27.213 21:12:18 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:27.213 21:12:18 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:31:27.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:27.213 21:12:18 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:27.213 21:12:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:27.213 [2024-11-26 21:12:18.092743] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:31:27.213 [2024-11-26 21:12:18.092814] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:27.472 [2024-11-26 21:12:18.169467] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:27.472 [2024-11-26 21:12:18.233228] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:27.472 [2024-11-26 21:12:18.233287] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:27.472 [2024-11-26 21:12:18.233303] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:27.472 [2024-11-26 21:12:18.233316] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:27.472 [2024-11-26 21:12:18.233327] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:27.472 [2024-11-26 21:12:18.234002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:27.472 21:12:18 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:27.472 21:12:18 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:31:27.472 21:12:18 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:27.472 21:12:18 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:27.472 21:12:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:27.472 21:12:18 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:27.472 21:12:18 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:31:27.472 21:12:18 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:31:27.472 21:12:18 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.472 21:12:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:27.472 [2024-11-26 21:12:18.397882] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:27.472 21:12:18 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.472 21:12:18 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:31:27.472 21:12:18 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:27.472 21:12:18 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:27.472 21:12:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:27.732 ************************************ 00:31:27.732 START TEST fio_dif_1_default 00:31:27.732 ************************************ 00:31:27.732 21:12:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:31:27.732 21:12:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:31:27.732 21:12:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:31:27.732 21:12:18 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:31:27.732 21:12:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:31:27.733 21:12:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:31:27.733 21:12:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:27.733 21:12:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.733 21:12:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:27.733 bdev_null0 00:31:27.733 21:12:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.733 21:12:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:27.733 21:12:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.733 21:12:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:27.733 21:12:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.733 21:12:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:27.733 21:12:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.733 21:12:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:27.733 21:12:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.733 21:12:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:27.733 21:12:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.733 21:12:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:27.733 [2024-11-26 21:12:18.458206] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:27.733 21:12:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.733 21:12:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:31:27.733 21:12:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:31:27.733 21:12:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:27.733 21:12:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:31:27.733 21:12:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:31:27.733 21:12:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:27.733 21:12:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:27.733 { 00:31:27.733 "params": { 00:31:27.733 "name": "Nvme$subsystem", 00:31:27.733 "trtype": "$TEST_TRANSPORT", 00:31:27.733 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:27.733 "adrfam": "ipv4", 00:31:27.733 "trsvcid": "$NVMF_PORT", 00:31:27.733 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:27.733 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:27.733 "hdgst": ${hdgst:-false}, 00:31:27.733 "ddgst": ${ddgst:-false} 00:31:27.733 }, 00:31:27.733 "method": "bdev_nvme_attach_controller" 00:31:27.733 } 00:31:27.733 EOF 00:31:27.733 )") 00:31:27.733 21:12:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:27.733 21:12:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:31:27.733 21:12:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:27.733 21:12:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:31:27.733 21:12:18 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:27.733 21:12:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:31:27.733 21:12:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:27.733 21:12:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:27.733 21:12:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:27.733 21:12:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:31:27.733 21:12:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:31:27.733 21:12:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:27.733 21:12:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:27.733 21:12:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:31:27.733 21:12:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:31:27.733 21:12:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:27.733 21:12:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:31:27.733 21:12:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:27.733 21:12:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:31:27.733 21:12:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:31:27.733 21:12:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:27.733 "params": { 00:31:27.733 "name": "Nvme0", 00:31:27.733 "trtype": "tcp", 00:31:27.733 "traddr": "10.0.0.2", 00:31:27.733 "adrfam": "ipv4", 00:31:27.733 "trsvcid": "4420", 00:31:27.733 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:27.733 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:27.733 "hdgst": false, 00:31:27.733 "ddgst": false 00:31:27.733 }, 00:31:27.733 "method": "bdev_nvme_attach_controller" 00:31:27.733 }' 00:31:27.733 21:12:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:27.733 21:12:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:27.733 21:12:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:27.733 21:12:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:27.733 21:12:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:27.733 21:12:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:27.733 21:12:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:27.733 21:12:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:27.733 21:12:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:27.733 21:12:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:27.992 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:27.992 fio-3.35 
00:31:27.992 Starting 1 thread 00:31:40.247 00:31:40.247 filename0: (groupid=0, jobs=1): err= 0: pid=4147845: Tue Nov 26 21:12:29 2024 00:31:40.247 read: IOPS=190, BW=762KiB/s (780kB/s)(7616KiB/10001msec) 00:31:40.247 slat (nsec): min=6856, max=82222, avg=8893.98, stdev=3919.55 00:31:40.247 clat (usec): min=649, max=45524, avg=20981.57, stdev=20277.11 00:31:40.247 lat (usec): min=656, max=45560, avg=20990.46, stdev=20277.20 00:31:40.247 clat percentiles (usec): 00:31:40.247 | 1.00th=[ 668], 5.00th=[ 676], 10.00th=[ 685], 20.00th=[ 701], 00:31:40.247 | 30.00th=[ 717], 40.00th=[ 734], 50.00th=[ 824], 60.00th=[41157], 00:31:40.247 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:40.247 | 99.00th=[41157], 99.50th=[41157], 99.90th=[45351], 99.95th=[45351], 00:31:40.247 | 99.99th=[45351] 00:31:40.247 bw ( KiB/s): min= 704, max= 768, per=99.93%, avg=761.26, stdev=17.13, samples=19 00:31:40.247 iops : min= 176, max= 192, avg=190.32, stdev= 4.28, samples=19 00:31:40.247 lat (usec) : 750=44.43%, 1000=5.57% 00:31:40.247 lat (msec) : 50=50.00% 00:31:40.247 cpu : usr=90.75%, sys=8.95%, ctx=15, majf=0, minf=246 00:31:40.247 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:40.247 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.247 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.247 issued rwts: total=1904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:40.247 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:40.247 00:31:40.247 Run status group 0 (all jobs): 00:31:40.247 READ: bw=762KiB/s (780kB/s), 762KiB/s-762KiB/s (780kB/s-780kB/s), io=7616KiB (7799kB), run=10001-10001msec 00:31:40.247 21:12:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:31:40.247 21:12:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:31:40.247 21:12:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 
00:31:40.247 21:12:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:40.247 21:12:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:31:40.247 21:12:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:40.247 21:12:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.247 21:12:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:40.247 21:12:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.247 21:12:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:40.247 21:12:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.247 21:12:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:40.247 21:12:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.247 00:31:40.247 real 0m11.263s 00:31:40.247 user 0m10.367s 00:31:40.247 sys 0m1.176s 00:31:40.247 21:12:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:40.247 21:12:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:40.247 ************************************ 00:31:40.248 END TEST fio_dif_1_default 00:31:40.248 ************************************ 00:31:40.248 21:12:29 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:31:40.248 21:12:29 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:40.248 21:12:29 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:40.248 21:12:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:40.248 ************************************ 00:31:40.248 START TEST fio_dif_1_multi_subsystems 00:31:40.248 ************************************ 00:31:40.248 21:12:29 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:31:40.248 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:31:40.248 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:31:40.248 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:31:40.248 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:31:40.248 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:31:40.248 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:31:40.248 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:40.248 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.248 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:40.248 bdev_null0 00:31:40.248 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.248 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:40.248 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.248 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:40.248 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.248 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:40.248 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.248 21:12:29 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:40.248 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.248 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:40.248 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.248 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:40.248 [2024-11-26 21:12:29.774993] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:40.248 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.248 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:31:40.248 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:31:40.248 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:31:40.248 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:40.248 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.248 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:40.248 bdev_null1 00:31:40.248 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.248 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:40.248 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.248 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:31:40.248 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.248 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:40.248 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.248 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:40.248 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.248 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:40.248 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.248 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:40.248 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.248 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:31:40.248 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:31:40.248 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:40.248 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:31:40.248 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:31:40.248 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:40.248 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:40.248 { 00:31:40.248 "params": { 00:31:40.248 "name": "Nvme$subsystem", 00:31:40.248 "trtype": "$TEST_TRANSPORT", 00:31:40.248 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:31:40.248 "adrfam": "ipv4", 00:31:40.248 "trsvcid": "$NVMF_PORT", 00:31:40.248 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:40.248 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:40.248 "hdgst": ${hdgst:-false}, 00:31:40.248 "ddgst": ${ddgst:-false} 00:31:40.248 }, 00:31:40.248 "method": "bdev_nvme_attach_controller" 00:31:40.248 } 00:31:40.248 EOF 00:31:40.248 )") 00:31:40.248 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:40.248 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:40.248 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:40.248 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:40.248 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:40.248 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:40.248 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:31:40.248 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:31:40.248 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:40.248 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:31:40.248 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:40.248 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:31:40.248 
21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:31:40.249 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:40.249 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:31:40.249 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:40.249 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:31:40.249 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:31:40.249 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:31:40.249 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:40.249 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:40.249 { 00:31:40.249 "params": { 00:31:40.249 "name": "Nvme$subsystem", 00:31:40.249 "trtype": "$TEST_TRANSPORT", 00:31:40.249 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:40.249 "adrfam": "ipv4", 00:31:40.249 "trsvcid": "$NVMF_PORT", 00:31:40.249 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:40.249 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:40.249 "hdgst": ${hdgst:-false}, 00:31:40.249 "ddgst": ${ddgst:-false} 00:31:40.249 }, 00:31:40.249 "method": "bdev_nvme_attach_controller" 00:31:40.249 } 00:31:40.249 EOF 00:31:40.249 )") 00:31:40.249 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:31:40.249 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:31:40.249 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:31:40.249 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:31:40.249 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:31:40.249 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:40.249 "params": { 00:31:40.249 "name": "Nvme0", 00:31:40.249 "trtype": "tcp", 00:31:40.249 "traddr": "10.0.0.2", 00:31:40.249 "adrfam": "ipv4", 00:31:40.249 "trsvcid": "4420", 00:31:40.249 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:40.249 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:40.249 "hdgst": false, 00:31:40.249 "ddgst": false 00:31:40.249 }, 00:31:40.249 "method": "bdev_nvme_attach_controller" 00:31:40.249 },{ 00:31:40.249 "params": { 00:31:40.249 "name": "Nvme1", 00:31:40.249 "trtype": "tcp", 00:31:40.249 "traddr": "10.0.0.2", 00:31:40.249 "adrfam": "ipv4", 00:31:40.249 "trsvcid": "4420", 00:31:40.249 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:40.249 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:40.249 "hdgst": false, 00:31:40.249 "ddgst": false 00:31:40.249 }, 00:31:40.249 "method": "bdev_nvme_attach_controller" 00:31:40.249 }' 00:31:40.249 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:40.249 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:40.249 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:40.249 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:40.249 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:40.249 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:40.249 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:40.249 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:40.249 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:40.249 21:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:40.249 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:40.249 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:40.249 fio-3.35 00:31:40.249 Starting 2 threads 00:31:50.256 00:31:50.256 filename0: (groupid=0, jobs=1): err= 0: pid=4149256: Tue Nov 26 21:12:40 2024 00:31:50.256 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10012msec) 00:31:50.256 slat (nsec): min=7225, max=28582, avg=9268.20, stdev=2730.59 00:31:50.256 clat (usec): min=40745, max=47565, avg=41002.24, stdev=425.48 00:31:50.256 lat (usec): min=40752, max=47592, avg=41011.50, stdev=425.80 00:31:50.256 clat percentiles (usec): 00:31:50.256 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:31:50.256 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:50.256 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:50.256 | 99.00th=[41157], 99.50th=[41681], 99.90th=[47449], 99.95th=[47449], 00:31:50.256 | 99.99th=[47449] 00:31:50.256 bw ( KiB/s): min= 384, max= 416, per=49.75%, avg=388.80, stdev=11.72, samples=20 00:31:50.256 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:31:50.256 lat (msec) : 50=100.00% 00:31:50.256 cpu : usr=94.56%, sys=5.14%, ctx=16, majf=0, minf=137 00:31:50.256 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:50.256 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.256 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.256 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:50.256 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:50.256 filename1: (groupid=0, jobs=1): err= 0: pid=4149257: Tue Nov 26 21:12:40 2024 00:31:50.256 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10011msec) 00:31:50.256 slat (nsec): min=5949, max=60756, avg=9203.85, stdev=3021.69 00:31:50.256 clat (usec): min=40784, max=47558, avg=40998.27, stdev=421.02 00:31:50.256 lat (usec): min=40791, max=47585, avg=41007.47, stdev=421.21 00:31:50.256 clat percentiles (usec): 00:31:50.256 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:31:50.256 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:50.256 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:50.256 | 99.00th=[41157], 99.50th=[41157], 99.90th=[47449], 99.95th=[47449], 00:31:50.256 | 99.99th=[47449] 00:31:50.256 bw ( KiB/s): min= 384, max= 416, per=49.75%, avg=388.80, stdev=11.72, samples=20 00:31:50.256 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:31:50.256 lat (msec) : 50=100.00% 00:31:50.256 cpu : usr=94.96%, sys=4.76%, ctx=13, majf=0, minf=155 00:31:50.256 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:50.256 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.256 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.256 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:50.256 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:50.256 00:31:50.256 Run status group 0 (all jobs): 00:31:50.256 READ: bw=780KiB/s (799kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=7808KiB (7995kB), run=10011-10012msec 00:31:50.256 21:12:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:31:50.256 21:12:41 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@43 -- # local sub 00:31:50.256 21:12:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:31:50.256 21:12:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:50.256 21:12:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:31:50.256 21:12:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:50.256 21:12:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.256 21:12:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:50.256 21:12:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.256 21:12:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:50.256 21:12:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.256 21:12:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:50.256 21:12:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.256 21:12:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:31:50.256 21:12:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:50.256 21:12:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:31:50.256 21:12:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:50.256 21:12:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.256 21:12:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:50.256 21:12:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:31:50.256 21:12:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:50.256 21:12:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.256 21:12:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:50.256 21:12:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.256 00:31:50.256 real 0m11.418s 00:31:50.256 user 0m20.341s 00:31:50.256 sys 0m1.311s 00:31:50.256 21:12:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:50.256 21:12:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:50.256 ************************************ 00:31:50.256 END TEST fio_dif_1_multi_subsystems 00:31:50.256 ************************************ 00:31:50.256 21:12:41 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:31:50.256 21:12:41 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:50.256 21:12:41 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:50.256 21:12:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:50.516 ************************************ 00:31:50.516 START TEST fio_dif_rand_params 00:31:50.516 ************************************ 00:31:50.516 21:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:31:50.516 21:12:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:31:50.516 21:12:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:31:50.516 21:12:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:31:50.516 21:12:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:31:50.516 21:12:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:31:50.516 21:12:41 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:31:50.516 21:12:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:31:50.516 21:12:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:31:50.516 21:12:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:50.516 21:12:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:50.516 21:12:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:50.516 21:12:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:50.516 21:12:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:50.516 21:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.516 21:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:50.516 bdev_null0 00:31:50.516 21:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.516 21:12:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:50.516 21:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.516 21:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:50.516 21:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.516 21:12:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:50.516 21:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.516 21:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:50.516 21:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:31:50.516 21:12:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:50.516 21:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.516 21:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:50.516 [2024-11-26 21:12:41.241055] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:50.516 21:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.516 21:12:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:31:50.516 21:12:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:31:50.516 21:12:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:50.516 21:12:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:31:50.516 21:12:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:31:50.516 21:12:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:50.516 21:12:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:50.516 { 00:31:50.516 "params": { 00:31:50.516 "name": "Nvme$subsystem", 00:31:50.516 "trtype": "$TEST_TRANSPORT", 00:31:50.516 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:50.516 "adrfam": "ipv4", 00:31:50.516 "trsvcid": "$NVMF_PORT", 00:31:50.516 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:50.516 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:50.516 "hdgst": ${hdgst:-false}, 00:31:50.516 "ddgst": ${ddgst:-false} 00:31:50.516 }, 00:31:50.516 "method": "bdev_nvme_attach_controller" 00:31:50.516 } 00:31:50.516 EOF 00:31:50.516 )") 00:31:50.516 21:12:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 
00:31:50.516 21:12:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:50.516 21:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:50.516 21:12:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:50.516 21:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:50.516 21:12:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:50.517 21:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:50.517 21:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:50.517 21:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:50.517 21:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:31:50.517 21:12:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:31:50.517 21:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:50.517 21:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:50.517 21:12:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:50.517 21:12:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:50.517 21:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:50.517 21:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:31:50.517 21:12:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:31:50.517 21:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:50.517 21:12:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:31:50.517 21:12:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:50.517 "params": { 00:31:50.517 "name": "Nvme0", 00:31:50.517 "trtype": "tcp", 00:31:50.517 "traddr": "10.0.0.2", 00:31:50.517 "adrfam": "ipv4", 00:31:50.517 "trsvcid": "4420", 00:31:50.517 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:50.517 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:50.517 "hdgst": false, 00:31:50.517 "ddgst": false 00:31:50.517 }, 00:31:50.517 "method": "bdev_nvme_attach_controller" 00:31:50.517 }' 00:31:50.517 21:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:50.517 21:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:50.517 21:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:50.517 21:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:50.517 21:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:50.517 21:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:50.517 21:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:50.517 21:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:50.517 21:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:50.517 21:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:50.775 filename0: 
(g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:50.775 ... 00:31:50.775 fio-3.35 00:31:50.775 Starting 3 threads 00:31:57.345 00:31:57.345 filename0: (groupid=0, jobs=1): err= 0: pid=4150654: Tue Nov 26 21:12:47 2024 00:31:57.345 read: IOPS=196, BW=24.5MiB/s (25.7MB/s)(123MiB/5015msec) 00:31:57.345 slat (usec): min=4, max=108, avg=15.77, stdev= 6.65 00:31:57.345 clat (usec): min=4540, max=89622, avg=15281.24, stdev=13365.87 00:31:57.345 lat (usec): min=4553, max=89640, avg=15297.01, stdev=13365.28 00:31:57.345 clat percentiles (usec): 00:31:57.345 | 1.00th=[ 4883], 5.00th=[ 5800], 10.00th=[ 7046], 20.00th=[ 8356], 00:31:57.345 | 30.00th=[ 9110], 40.00th=[10552], 50.00th=[11338], 60.00th=[11994], 00:31:57.345 | 70.00th=[12649], 80.00th=[13566], 90.00th=[47449], 95.00th=[50594], 00:31:57.345 | 99.00th=[53740], 99.50th=[55837], 99.90th=[89654], 99.95th=[89654], 00:31:57.345 | 99.99th=[89654] 00:31:57.345 bw ( KiB/s): min=17152, max=34816, per=31.42%, avg=25088.00, stdev=5501.19, samples=10 00:31:57.345 iops : min= 134, max= 272, avg=196.00, stdev=42.98, samples=10 00:31:57.345 lat (msec) : 10=35.91%, 20=51.98%, 50=6.61%, 100=5.49% 00:31:57.345 cpu : usr=82.65%, sys=10.55%, ctx=335, majf=0, minf=136 00:31:57.345 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:57.345 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:57.345 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:57.345 issued rwts: total=983,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:57.345 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:57.345 filename0: (groupid=0, jobs=1): err= 0: pid=4150655: Tue Nov 26 21:12:47 2024 00:31:57.345 read: IOPS=212, BW=26.6MiB/s (27.9MB/s)(133MiB/5004msec) 00:31:57.345 slat (nsec): min=5029, max=39875, avg=13481.37, stdev=2522.90 00:31:57.345 clat (usec): min=4607, max=90172, avg=14077.51, stdev=12673.58 
00:31:57.345 lat (usec): min=4620, max=90184, avg=14091.00, stdev=12673.67 00:31:57.345 clat percentiles (usec): 00:31:57.345 | 1.00th=[ 5014], 5.00th=[ 5276], 10.00th=[ 5866], 20.00th=[ 7701], 00:31:57.345 | 30.00th=[ 8455], 40.00th=[ 9372], 50.00th=[10945], 60.00th=[11994], 00:31:57.345 | 70.00th=[12911], 80.00th=[14091], 90.00th=[17171], 95.00th=[49021], 00:31:57.345 | 99.00th=[55313], 99.50th=[86508], 99.90th=[89654], 99.95th=[89654], 00:31:57.345 | 99.99th=[89654] 00:31:57.345 bw ( KiB/s): min=17152, max=34560, per=34.05%, avg=27187.20, stdev=5294.52, samples=10 00:31:57.345 iops : min= 134, max= 270, avg=212.40, stdev=41.36, samples=10 00:31:57.345 lat (msec) : 10=44.51%, 20=46.48%, 50=5.07%, 100=3.94% 00:31:57.345 cpu : usr=93.46%, sys=6.08%, ctx=18, majf=0, minf=121 00:31:57.345 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:57.345 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:57.345 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:57.345 issued rwts: total=1065,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:57.345 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:57.345 filename0: (groupid=0, jobs=1): err= 0: pid=4150656: Tue Nov 26 21:12:47 2024 00:31:57.345 read: IOPS=215, BW=27.0MiB/s (28.3MB/s)(135MiB/5006msec) 00:31:57.345 slat (nsec): min=4672, max=41933, avg=14058.77, stdev=3551.36 00:31:57.345 clat (usec): min=4329, max=55030, avg=13884.47, stdev=12039.71 00:31:57.345 lat (usec): min=4342, max=55044, avg=13898.53, stdev=12039.59 00:31:57.345 clat percentiles (usec): 00:31:57.345 | 1.00th=[ 5145], 5.00th=[ 5473], 10.00th=[ 5932], 20.00th=[ 7701], 00:31:57.345 | 30.00th=[ 8455], 40.00th=[ 9503], 50.00th=[10683], 60.00th=[11600], 00:31:57.345 | 70.00th=[12518], 80.00th=[13566], 90.00th=[16057], 95.00th=[50070], 00:31:57.345 | 99.00th=[52691], 99.50th=[53216], 99.90th=[54789], 99.95th=[54789], 00:31:57.345 | 99.99th=[54789] 00:31:57.345 bw ( KiB/s): 
min=21504, max=35584, per=34.53%, avg=27571.20, stdev=4152.59, samples=10 00:31:57.345 iops : min= 168, max= 278, avg=215.40, stdev=32.44, samples=10 00:31:57.345 lat (msec) : 10=43.61%, 20=46.94%, 50=4.72%, 100=4.72% 00:31:57.345 cpu : usr=93.35%, sys=6.17%, ctx=16, majf=0, minf=35 00:31:57.345 IO depths : 1=1.7%, 2=98.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:57.345 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:57.345 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:57.345 issued rwts: total=1080,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:57.345 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:57.345 00:31:57.345 Run status group 0 (all jobs): 00:31:57.345 READ: bw=78.0MiB/s (81.8MB/s), 24.5MiB/s-27.0MiB/s (25.7MB/s-28.3MB/s), io=391MiB (410MB), run=5004-5015msec 00:31:57.345 21:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:31:57.345 21:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:57.345 21:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:57.345 21:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:57.345 21:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:57.345 21:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:57.345 21:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.345 21:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:57.345 21:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.345 21:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:57.345 21:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.345 21:12:47 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:57.345 21:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.345 21:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:31:57.345 21:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:31:57.345 21:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:31:57.345 21:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:31:57.345 21:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:31:57.345 21:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:31:57.345 21:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:31:57.345 21:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:57.345 21:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:57.345 21:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:57.345 21:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:57.345 21:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:31:57.345 21:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.345 21:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:57.345 bdev_null0 00:31:57.345 21:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.345 21:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:57.345 21:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.345 21:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set 
+x 00:31:57.345 21:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.345 21:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:57.346 [2024-11-26 21:12:47.508909] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:57.346 bdev_null1 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:57.346 bdev_null2 00:31:57.346 21:12:47 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 
-- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:57.346 { 00:31:57.346 "params": { 00:31:57.346 "name": "Nvme$subsystem", 00:31:57.346 "trtype": "$TEST_TRANSPORT", 00:31:57.346 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:57.346 "adrfam": "ipv4", 00:31:57.346 "trsvcid": "$NVMF_PORT", 00:31:57.346 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:57.346 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:57.346 "hdgst": ${hdgst:-false}, 00:31:57.346 "ddgst": ${ddgst:-false} 00:31:57.346 }, 00:31:57.346 "method": "bdev_nvme_attach_controller" 00:31:57.346 } 00:31:57.346 EOF 00:31:57.346 )") 00:31:57.346 
21:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:57.346 { 00:31:57.346 "params": { 00:31:57.346 "name": "Nvme$subsystem", 00:31:57.346 "trtype": "$TEST_TRANSPORT", 00:31:57.346 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:57.346 "adrfam": "ipv4", 00:31:57.346 "trsvcid": "$NVMF_PORT", 00:31:57.346 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:57.346 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:57.346 "hdgst": ${hdgst:-false}, 00:31:57.346 "ddgst": ${ddgst:-false} 00:31:57.346 }, 00:31:57.346 "method": "bdev_nvme_attach_controller" 00:31:57.346 } 00:31:57.346 EOF 00:31:57.346 )") 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:57.346 21:12:47 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:57.346 { 00:31:57.346 "params": { 00:31:57.346 "name": "Nvme$subsystem", 00:31:57.346 "trtype": "$TEST_TRANSPORT", 00:31:57.346 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:57.346 "adrfam": "ipv4", 00:31:57.346 "trsvcid": "$NVMF_PORT", 00:31:57.346 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:57.346 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:57.346 "hdgst": ${hdgst:-false}, 00:31:57.346 "ddgst": ${ddgst:-false} 00:31:57.346 }, 00:31:57.346 "method": "bdev_nvme_attach_controller" 00:31:57.346 } 00:31:57.346 EOF 00:31:57.346 )") 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:31:57.346 21:12:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:57.346 "params": { 00:31:57.346 "name": "Nvme0", 00:31:57.346 "trtype": "tcp", 00:31:57.346 "traddr": "10.0.0.2", 00:31:57.346 "adrfam": "ipv4", 00:31:57.346 "trsvcid": "4420", 00:31:57.346 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:57.346 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:57.346 "hdgst": false, 00:31:57.346 "ddgst": false 00:31:57.346 }, 00:31:57.346 "method": "bdev_nvme_attach_controller" 00:31:57.346 },{ 00:31:57.346 "params": { 00:31:57.346 "name": "Nvme1", 00:31:57.346 "trtype": "tcp", 00:31:57.346 "traddr": "10.0.0.2", 00:31:57.346 "adrfam": "ipv4", 00:31:57.346 "trsvcid": "4420", 00:31:57.346 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:57.347 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:57.347 "hdgst": false, 00:31:57.347 "ddgst": false 00:31:57.347 }, 00:31:57.347 "method": "bdev_nvme_attach_controller" 00:31:57.347 },{ 00:31:57.347 "params": { 00:31:57.347 "name": "Nvme2", 00:31:57.347 "trtype": "tcp", 00:31:57.347 "traddr": "10.0.0.2", 00:31:57.347 "adrfam": "ipv4", 00:31:57.347 "trsvcid": "4420", 00:31:57.347 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:57.347 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:31:57.347 "hdgst": false, 00:31:57.347 "ddgst": false 00:31:57.347 }, 00:31:57.347 "method": "bdev_nvme_attach_controller" 00:31:57.347 }' 00:31:57.347 21:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:57.347 21:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:57.347 21:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:57.347 21:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:57.347 21:12:47 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:57.347 21:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:57.347 21:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:57.347 21:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:57.347 21:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:57.347 21:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:57.347 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:57.347 ... 00:31:57.347 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:57.347 ... 00:31:57.347 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:57.347 ... 
00:31:57.347 fio-3.35 00:31:57.347 Starting 24 threads 00:32:09.561 00:32:09.561 filename0: (groupid=0, jobs=1): err= 0: pid=4151513: Tue Nov 26 21:12:58 2024 00:32:09.561 read: IOPS=68, BW=275KiB/s (282kB/s)(2800KiB/10178msec) 00:32:09.561 slat (usec): min=5, max=114, avg=55.31, stdev=28.41 00:32:09.561 clat (msec): min=67, max=448, avg=230.93, stdev=58.65 00:32:09.561 lat (msec): min=67, max=448, avg=230.99, stdev=58.66 00:32:09.561 clat percentiles (msec): 00:32:09.561 | 1.00th=[ 68], 5.00th=[ 130], 10.00th=[ 167], 20.00th=[ 178], 00:32:09.561 | 30.00th=[ 218], 40.00th=[ 226], 50.00th=[ 236], 60.00th=[ 247], 00:32:09.561 | 70.00th=[ 255], 80.00th=[ 262], 90.00th=[ 300], 95.00th=[ 317], 00:32:09.561 | 99.00th=[ 430], 99.50th=[ 439], 99.90th=[ 447], 99.95th=[ 447], 00:32:09.561 | 99.99th=[ 447] 00:32:09.561 bw ( KiB/s): min= 144, max= 512, per=4.48%, avg=273.60, stdev=73.39, samples=20 00:32:09.561 iops : min= 36, max= 128, avg=68.40, stdev=18.35, samples=20 00:32:09.561 lat (msec) : 100=4.57%, 250=60.86%, 500=34.57% 00:32:09.561 cpu : usr=98.32%, sys=1.15%, ctx=27, majf=0, minf=48 00:32:09.561 IO depths : 1=2.0%, 2=6.3%, 4=19.0%, 8=62.1%, 16=10.6%, 32=0.0%, >=64=0.0% 00:32:09.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.561 complete : 0=0.0%, 4=92.4%, 8=2.1%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.561 issued rwts: total=700,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:09.561 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:09.561 filename0: (groupid=0, jobs=1): err= 0: pid=4151514: Tue Nov 26 21:12:58 2024 00:32:09.561 read: IOPS=64, BW=260KiB/s (266kB/s)(2648KiB/10197msec) 00:32:09.561 slat (nsec): min=4116, max=79904, avg=24736.82, stdev=13118.29 00:32:09.561 clat (msec): min=166, max=418, avg=245.35, stdev=49.27 00:32:09.561 lat (msec): min=166, max=418, avg=245.38, stdev=49.27 00:32:09.561 clat percentiles (msec): 00:32:09.561 | 1.00th=[ 167], 5.00th=[ 174], 10.00th=[ 176], 20.00th=[ 203], 
00:32:09.561 | 30.00th=[ 222], 40.00th=[ 230], 50.00th=[ 247], 60.00th=[ 255], 00:32:09.561 | 70.00th=[ 262], 80.00th=[ 300], 90.00th=[ 326], 95.00th=[ 330], 00:32:09.561 | 99.00th=[ 334], 99.50th=[ 368], 99.90th=[ 418], 99.95th=[ 418], 00:32:09.561 | 99.99th=[ 418] 00:32:09.561 bw ( KiB/s): min= 128, max= 384, per=4.24%, avg=258.40, stdev=61.92, samples=20 00:32:09.561 iops : min= 32, max= 96, avg=64.60, stdev=15.48, samples=20 00:32:09.561 lat (msec) : 250=55.89%, 500=44.11% 00:32:09.561 cpu : usr=98.29%, sys=1.20%, ctx=23, majf=0, minf=27 00:32:09.561 IO depths : 1=4.4%, 2=9.1%, 4=20.2%, 8=58.2%, 16=8.2%, 32=0.0%, >=64=0.0% 00:32:09.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.561 complete : 0=0.0%, 4=92.6%, 8=1.7%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.561 issued rwts: total=662,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:09.561 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:09.561 filename0: (groupid=0, jobs=1): err= 0: pid=4151515: Tue Nov 26 21:12:58 2024 00:32:09.561 read: IOPS=59, BW=239KiB/s (245kB/s)(2432KiB/10184msec) 00:32:09.561 slat (usec): min=5, max=148, avg=51.59, stdev=26.78 00:32:09.561 clat (msec): min=167, max=423, avg=267.59, stdev=62.11 00:32:09.561 lat (msec): min=167, max=423, avg=267.64, stdev=62.10 00:32:09.561 clat percentiles (msec): 00:32:09.561 | 1.00th=[ 167], 5.00th=[ 174], 10.00th=[ 176], 20.00th=[ 203], 00:32:09.561 | 30.00th=[ 234], 40.00th=[ 249], 50.00th=[ 257], 60.00th=[ 284], 00:32:09.561 | 70.00th=[ 300], 80.00th=[ 321], 90.00th=[ 342], 95.00th=[ 376], 00:32:09.561 | 99.00th=[ 409], 99.50th=[ 409], 99.90th=[ 422], 99.95th=[ 422], 00:32:09.561 | 99.99th=[ 422] 00:32:09.561 bw ( KiB/s): min= 128, max= 384, per=3.88%, avg=236.80, stdev=61.55, samples=20 00:32:09.561 iops : min= 32, max= 96, avg=59.20, stdev=15.39, samples=20 00:32:09.561 lat (msec) : 250=40.46%, 500=59.54% 00:32:09.561 cpu : usr=97.43%, sys=1.65%, ctx=146, majf=0, minf=26 00:32:09.561 IO depths : 
1=1.8%, 2=8.1%, 4=25.0%, 8=54.4%, 16=10.7%, 32=0.0%, >=64=0.0% 00:32:09.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.561 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.561 issued rwts: total=608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:09.561 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:09.561 filename0: (groupid=0, jobs=1): err= 0: pid=4151516: Tue Nov 26 21:12:58 2024 00:32:09.561 read: IOPS=59, BW=238KiB/s (244kB/s)(2424KiB/10186msec) 00:32:09.561 slat (usec): min=4, max=102, avg=50.47, stdev=26.28 00:32:09.561 clat (msec): min=100, max=464, avg=268.30, stdev=63.36 00:32:09.561 lat (msec): min=100, max=464, avg=268.35, stdev=63.36 00:32:09.561 clat percentiles (msec): 00:32:09.561 | 1.00th=[ 150], 5.00th=[ 174], 10.00th=[ 174], 20.00th=[ 226], 00:32:09.561 | 30.00th=[ 234], 40.00th=[ 249], 50.00th=[ 268], 60.00th=[ 300], 00:32:09.561 | 70.00th=[ 309], 80.00th=[ 330], 90.00th=[ 347], 95.00th=[ 359], 00:32:09.561 | 99.00th=[ 388], 99.50th=[ 447], 99.90th=[ 464], 99.95th=[ 464], 00:32:09.561 | 99.99th=[ 464] 00:32:09.561 bw ( KiB/s): min= 128, max= 384, per=3.88%, avg=236.00, stdev=71.15, samples=20 00:32:09.561 iops : min= 32, max= 96, avg=59.00, stdev=17.79, samples=20 00:32:09.561 lat (msec) : 250=42.24%, 500=57.76% 00:32:09.561 cpu : usr=97.97%, sys=1.34%, ctx=34, majf=0, minf=21 00:32:09.561 IO depths : 1=3.0%, 2=9.2%, 4=25.1%, 8=53.3%, 16=9.4%, 32=0.0%, >=64=0.0% 00:32:09.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.561 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.562 issued rwts: total=606,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:09.562 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:09.562 filename0: (groupid=0, jobs=1): err= 0: pid=4151517: Tue Nov 26 21:12:58 2024 00:32:09.562 read: IOPS=56, BW=226KiB/s (232kB/s)(2304KiB/10175msec) 00:32:09.562 slat (usec): min=8, 
max=129, avg=32.04, stdev=24.80 00:32:09.562 clat (msec): min=111, max=415, avg=282.37, stdev=69.19 00:32:09.562 lat (msec): min=112, max=415, avg=282.40, stdev=69.18 00:32:09.562 clat percentiles (msec): 00:32:09.562 | 1.00th=[ 144], 5.00th=[ 174], 10.00th=[ 176], 20.00th=[ 228], 00:32:09.562 | 30.00th=[ 234], 40.00th=[ 253], 50.00th=[ 275], 60.00th=[ 321], 00:32:09.562 | 70.00th=[ 334], 80.00th=[ 355], 90.00th=[ 359], 95.00th=[ 384], 00:32:09.562 | 99.00th=[ 409], 99.50th=[ 414], 99.90th=[ 418], 99.95th=[ 418], 00:32:09.562 | 99.99th=[ 418] 00:32:09.562 bw ( KiB/s): min= 128, max= 384, per=3.68%, avg=224.00, stdev=80.59, samples=20 00:32:09.562 iops : min= 32, max= 96, avg=56.00, stdev=20.15, samples=20 00:32:09.562 lat (msec) : 250=39.24%, 500=60.76% 00:32:09.562 cpu : usr=98.04%, sys=1.21%, ctx=28, majf=0, minf=25 00:32:09.562 IO depths : 1=4.3%, 2=10.6%, 4=25.0%, 8=51.9%, 16=8.2%, 32=0.0%, >=64=0.0% 00:32:09.562 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.562 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.562 issued rwts: total=576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:09.562 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:09.562 filename0: (groupid=0, jobs=1): err= 0: pid=4151518: Tue Nov 26 21:12:58 2024 00:32:09.562 read: IOPS=59, BW=239KiB/s (244kB/s)(2432KiB/10186msec) 00:32:09.562 slat (usec): min=8, max=118, avg=53.65, stdev=24.58 00:32:09.562 clat (msec): min=100, max=504, avg=267.54, stdev=63.12 00:32:09.562 lat (msec): min=100, max=504, avg=267.59, stdev=63.11 00:32:09.562 clat percentiles (msec): 00:32:09.562 | 1.00th=[ 171], 5.00th=[ 174], 10.00th=[ 174], 20.00th=[ 226], 00:32:09.562 | 30.00th=[ 232], 40.00th=[ 247], 50.00th=[ 255], 60.00th=[ 300], 00:32:09.562 | 70.00th=[ 305], 80.00th=[ 330], 90.00th=[ 347], 95.00th=[ 359], 00:32:09.562 | 99.00th=[ 376], 99.50th=[ 489], 99.90th=[ 506], 99.95th=[ 506], 00:32:09.562 | 99.99th=[ 506] 00:32:09.562 bw ( 
KiB/s): min= 128, max= 384, per=3.88%, avg=236.80, stdev=73.89, samples=20 00:32:09.562 iops : min= 32, max= 96, avg=59.20, stdev=18.47, samples=20 00:32:09.562 lat (msec) : 250=45.39%, 500=54.28%, 750=0.33% 00:32:09.562 cpu : usr=97.34%, sys=1.61%, ctx=90, majf=0, minf=30 00:32:09.562 IO depths : 1=4.8%, 2=10.9%, 4=24.5%, 8=52.1%, 16=7.7%, 32=0.0%, >=64=0.0% 00:32:09.562 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.562 complete : 0=0.0%, 4=94.1%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.562 issued rwts: total=608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:09.562 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:09.562 filename0: (groupid=0, jobs=1): err= 0: pid=4151519: Tue Nov 26 21:12:58 2024 00:32:09.562 read: IOPS=69, BW=277KiB/s (284kB/s)(2816KiB/10160msec) 00:32:09.562 slat (nsec): min=8273, max=96462, avg=24680.89, stdev=17333.83 00:32:09.562 clat (msec): min=84, max=363, avg=227.57, stdev=55.90 00:32:09.562 lat (msec): min=84, max=363, avg=227.60, stdev=55.90 00:32:09.562 clat percentiles (msec): 00:32:09.562 | 1.00th=[ 85], 5.00th=[ 159], 10.00th=[ 167], 20.00th=[ 176], 00:32:09.562 | 30.00th=[ 190], 40.00th=[ 228], 50.00th=[ 236], 60.00th=[ 247], 00:32:09.562 | 70.00th=[ 251], 80.00th=[ 259], 90.00th=[ 284], 95.00th=[ 305], 00:32:09.562 | 99.00th=[ 363], 99.50th=[ 363], 99.90th=[ 363], 99.95th=[ 363], 00:32:09.562 | 99.99th=[ 363] 00:32:09.562 bw ( KiB/s): min= 128, max= 384, per=4.60%, avg=280.80, stdev=65.76, samples=20 00:32:09.562 iops : min= 32, max= 96, avg=70.20, stdev=16.44, samples=20 00:32:09.562 lat (msec) : 100=2.27%, 250=65.62%, 500=32.10% 00:32:09.562 cpu : usr=98.09%, sys=1.27%, ctx=161, majf=0, minf=27 00:32:09.562 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:32:09.562 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.562 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.562 issued 
rwts: total=704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:09.562 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:09.562 filename0: (groupid=0, jobs=1): err= 0: pid=4151520: Tue Nov 26 21:12:58 2024 00:32:09.562 read: IOPS=80, BW=322KiB/s (329kB/s)(3280KiB/10200msec) 00:32:09.562 slat (nsec): min=7963, max=95878, avg=19630.96, stdev=18317.55 00:32:09.562 clat (msec): min=67, max=359, avg=198.04, stdev=49.52 00:32:09.562 lat (msec): min=67, max=359, avg=198.06, stdev=49.51 00:32:09.562 clat percentiles (msec): 00:32:09.562 | 1.00th=[ 68], 5.00th=[ 115], 10.00th=[ 146], 20.00th=[ 167], 00:32:09.562 | 30.00th=[ 174], 40.00th=[ 180], 50.00th=[ 197], 60.00th=[ 218], 00:32:09.562 | 70.00th=[ 226], 80.00th=[ 239], 90.00th=[ 259], 95.00th=[ 266], 00:32:09.562 | 99.00th=[ 330], 99.50th=[ 355], 99.90th=[ 359], 99.95th=[ 359], 00:32:09.562 | 99.99th=[ 359] 00:32:09.562 bw ( KiB/s): min= 224, max= 496, per=5.27%, avg=321.60, stdev=67.26, samples=20 00:32:09.562 iops : min= 56, max= 124, avg=80.40, stdev=16.82, samples=20 00:32:09.562 lat (msec) : 100=3.90%, 250=80.24%, 500=15.85% 00:32:09.562 cpu : usr=98.31%, sys=1.23%, ctx=15, majf=0, minf=40 00:32:09.562 IO depths : 1=0.6%, 2=2.7%, 4=12.2%, 8=72.4%, 16=12.1%, 32=0.0%, >=64=0.0% 00:32:09.562 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.562 complete : 0=0.0%, 4=90.4%, 8=4.2%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.562 issued rwts: total=820,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:09.562 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:09.562 filename1: (groupid=0, jobs=1): err= 0: pid=4151521: Tue Nov 26 21:12:58 2024 00:32:09.562 read: IOPS=61, BW=245KiB/s (251kB/s)(2496KiB/10196msec) 00:32:09.562 slat (nsec): min=7685, max=71100, avg=28678.23, stdev=11423.87 00:32:09.562 clat (msec): min=121, max=440, avg=261.14, stdev=60.52 00:32:09.562 lat (msec): min=121, max=440, avg=261.17, stdev=60.52 00:32:09.562 clat percentiles (msec): 00:32:09.562 | 
1.00th=[ 123], 5.00th=[ 171], 10.00th=[ 176], 20.00th=[ 203], 00:32:09.562 | 30.00th=[ 234], 40.00th=[ 249], 50.00th=[ 257], 60.00th=[ 275], 00:32:09.562 | 70.00th=[ 300], 80.00th=[ 326], 90.00th=[ 342], 95.00th=[ 359], 00:32:09.562 | 99.00th=[ 359], 99.50th=[ 409], 99.90th=[ 443], 99.95th=[ 443], 00:32:09.562 | 99.99th=[ 443] 00:32:09.562 bw ( KiB/s): min= 128, max= 400, per=3.99%, avg=243.20, stdev=69.57, samples=20 00:32:09.562 iops : min= 32, max= 100, avg=60.80, stdev=17.39, samples=20 00:32:09.562 lat (msec) : 250=41.35%, 500=58.65% 00:32:09.562 cpu : usr=98.28%, sys=1.32%, ctx=20, majf=0, minf=22 00:32:09.562 IO depths : 1=5.0%, 2=11.2%, 4=25.0%, 8=51.3%, 16=7.5%, 32=0.0%, >=64=0.0% 00:32:09.562 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.562 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.562 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:09.562 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:09.562 filename1: (groupid=0, jobs=1): err= 0: pid=4151522: Tue Nov 26 21:12:58 2024 00:32:09.562 read: IOPS=62, BW=250KiB/s (256kB/s)(2544KiB/10184msec) 00:32:09.562 slat (usec): min=8, max=130, avg=50.13, stdev=29.82 00:32:09.562 clat (msec): min=122, max=490, avg=255.79, stdev=59.73 00:32:09.562 lat (msec): min=122, max=490, avg=255.84, stdev=59.73 00:32:09.562 clat percentiles (msec): 00:32:09.562 | 1.00th=[ 150], 5.00th=[ 167], 10.00th=[ 176], 20.00th=[ 203], 00:32:09.562 | 30.00th=[ 228], 40.00th=[ 239], 50.00th=[ 251], 60.00th=[ 262], 00:32:09.562 | 70.00th=[ 292], 80.00th=[ 300], 90.00th=[ 338], 95.00th=[ 355], 00:32:09.562 | 99.00th=[ 447], 99.50th=[ 477], 99.90th=[ 489], 99.95th=[ 489], 00:32:09.562 | 99.99th=[ 489] 00:32:09.562 bw ( KiB/s): min= 128, max= 368, per=4.07%, avg=248.00, stdev=46.86, samples=20 00:32:09.562 iops : min= 32, max= 92, avg=62.00, stdev=11.72, samples=20 00:32:09.562 lat (msec) : 250=49.06%, 500=50.94% 00:32:09.562 cpu : 
usr=97.63%, sys=1.51%, ctx=54, majf=0, minf=39 00:32:09.562 IO depths : 1=2.5%, 2=7.2%, 4=20.3%, 8=59.9%, 16=10.1%, 32=0.0%, >=64=0.0% 00:32:09.562 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.562 complete : 0=0.0%, 4=92.9%, 8=1.6%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.562 issued rwts: total=636,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:09.562 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:09.562 filename1: (groupid=0, jobs=1): err= 0: pid=4151523: Tue Nov 26 21:12:58 2024 00:32:09.562 read: IOPS=58, BW=233KiB/s (238kB/s)(2368KiB/10181msec) 00:32:09.562 slat (usec): min=8, max=104, avg=48.27, stdev=27.71 00:32:09.562 clat (msec): min=112, max=514, avg=274.72, stdev=74.93 00:32:09.562 lat (msec): min=112, max=514, avg=274.77, stdev=74.92 00:32:09.562 clat percentiles (msec): 00:32:09.562 | 1.00th=[ 123], 5.00th=[ 163], 10.00th=[ 174], 20.00th=[ 199], 00:32:09.562 | 30.00th=[ 232], 40.00th=[ 249], 50.00th=[ 255], 60.00th=[ 321], 00:32:09.562 | 70.00th=[ 334], 80.00th=[ 342], 90.00th=[ 363], 95.00th=[ 376], 00:32:09.562 | 99.00th=[ 451], 99.50th=[ 460], 99.90th=[ 514], 99.95th=[ 514], 00:32:09.562 | 99.99th=[ 514] 00:32:09.562 bw ( KiB/s): min= 128, max= 384, per=3.78%, avg=230.40, stdev=73.85, samples=20 00:32:09.562 iops : min= 32, max= 96, avg=57.60, stdev=18.46, samples=20 00:32:09.562 lat (msec) : 250=45.27%, 500=54.39%, 750=0.34% 00:32:09.562 cpu : usr=97.57%, sys=1.51%, ctx=62, majf=0, minf=28 00:32:09.562 IO depths : 1=3.2%, 2=9.5%, 4=25.0%, 8=53.0%, 16=9.3%, 32=0.0%, >=64=0.0% 00:32:09.562 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.562 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.562 issued rwts: total=592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:09.562 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:09.562 filename1: (groupid=0, jobs=1): err= 0: pid=4151524: Tue Nov 26 21:12:58 2024 00:32:09.562 read: 
IOPS=61, BW=245KiB/s (251kB/s)(2496KiB/10191msec) 00:32:09.562 slat (nsec): min=4141, max=58359, avg=24795.94, stdev=8783.74 00:32:09.562 clat (msec): min=146, max=394, avg=261.07, stdev=61.54 00:32:09.562 lat (msec): min=146, max=394, avg=261.09, stdev=61.54 00:32:09.562 clat percentiles (msec): 00:32:09.562 | 1.00th=[ 148], 5.00th=[ 171], 10.00th=[ 174], 20.00th=[ 192], 00:32:09.563 | 30.00th=[ 230], 40.00th=[ 249], 50.00th=[ 255], 60.00th=[ 279], 00:32:09.563 | 70.00th=[ 305], 80.00th=[ 330], 90.00th=[ 342], 95.00th=[ 359], 00:32:09.563 | 99.00th=[ 359], 99.50th=[ 359], 99.90th=[ 397], 99.95th=[ 397], 00:32:09.563 | 99.99th=[ 397] 00:32:09.563 bw ( KiB/s): min= 128, max= 384, per=3.99%, avg=243.20, stdev=66.80, samples=20 00:32:09.563 iops : min= 32, max= 96, avg=60.80, stdev=16.70, samples=20 00:32:09.563 lat (msec) : 250=46.47%, 500=53.53% 00:32:09.563 cpu : usr=98.43%, sys=1.10%, ctx=23, majf=0, minf=32 00:32:09.563 IO depths : 1=4.6%, 2=10.9%, 4=25.0%, 8=51.6%, 16=7.9%, 32=0.0%, >=64=0.0% 00:32:09.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.563 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.563 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:09.563 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:09.563 filename1: (groupid=0, jobs=1): err= 0: pid=4151525: Tue Nov 26 21:12:58 2024 00:32:09.563 read: IOPS=56, BW=226KiB/s (232kB/s)(2304KiB/10173msec) 00:32:09.563 slat (nsec): min=8460, max=77271, avg=27621.01, stdev=15324.42 00:32:09.563 clat (msec): min=112, max=502, avg=281.37, stdev=73.53 00:32:09.563 lat (msec): min=112, max=502, avg=281.40, stdev=73.52 00:32:09.563 clat percentiles (msec): 00:32:09.563 | 1.00th=[ 142], 5.00th=[ 167], 10.00th=[ 174], 20.00th=[ 228], 00:32:09.563 | 30.00th=[ 234], 40.00th=[ 249], 50.00th=[ 300], 60.00th=[ 313], 00:32:09.563 | 70.00th=[ 334], 80.00th=[ 355], 90.00th=[ 359], 95.00th=[ 401], 00:32:09.563 | 99.00th=[ 
456], 99.50th=[ 460], 99.90th=[ 502], 99.95th=[ 502], 00:32:09.563 | 99.99th=[ 502] 00:32:09.563 bw ( KiB/s): min= 128, max= 384, per=3.68%, avg=224.00, stdev=80.59, samples=20 00:32:09.563 iops : min= 32, max= 96, avg=56.00, stdev=20.15, samples=20 00:32:09.563 lat (msec) : 250=41.32%, 500=58.33%, 750=0.35% 00:32:09.563 cpu : usr=98.25%, sys=1.32%, ctx=52, majf=0, minf=44 00:32:09.563 IO depths : 1=3.8%, 2=10.1%, 4=25.0%, 8=52.4%, 16=8.7%, 32=0.0%, >=64=0.0% 00:32:09.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.563 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.563 issued rwts: total=576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:09.563 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:09.563 filename1: (groupid=0, jobs=1): err= 0: pid=4151526: Tue Nov 26 21:12:58 2024 00:32:09.563 read: IOPS=61, BW=245KiB/s (251kB/s)(2496KiB/10185msec) 00:32:09.563 slat (nsec): min=8329, max=87599, avg=26033.68, stdev=11552.06 00:32:09.563 clat (msec): min=144, max=498, avg=260.88, stdev=61.85 00:32:09.563 lat (msec): min=144, max=498, avg=260.91, stdev=61.85 00:32:09.563 clat percentiles (msec): 00:32:09.563 | 1.00th=[ 144], 5.00th=[ 174], 10.00th=[ 176], 20.00th=[ 197], 00:32:09.563 | 30.00th=[ 230], 40.00th=[ 236], 50.00th=[ 249], 60.00th=[ 271], 00:32:09.563 | 70.00th=[ 305], 80.00th=[ 330], 90.00th=[ 342], 95.00th=[ 359], 00:32:09.563 | 99.00th=[ 359], 99.50th=[ 426], 99.90th=[ 498], 99.95th=[ 498], 00:32:09.563 | 99.99th=[ 498] 00:32:09.563 bw ( KiB/s): min= 128, max= 384, per=3.99%, avg=243.20, stdev=68.20, samples=20 00:32:09.563 iops : min= 32, max= 96, avg=60.80, stdev=17.05, samples=20 00:32:09.563 lat (msec) : 250=50.00%, 500=50.00% 00:32:09.563 cpu : usr=98.10%, sys=1.29%, ctx=51, majf=0, minf=26 00:32:09.563 IO depths : 1=3.4%, 2=9.6%, 4=25.0%, 8=52.9%, 16=9.1%, 32=0.0%, >=64=0.0% 00:32:09.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.563 
complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.563 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:09.563 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:09.563 filename1: (groupid=0, jobs=1): err= 0: pid=4151527: Tue Nov 26 21:12:58 2024 00:32:09.563 read: IOPS=72, BW=289KiB/s (296kB/s)(2952KiB/10200msec) 00:32:09.563 slat (usec): min=5, max=105, avg=33.48, stdev=27.81 00:32:09.563 clat (msec): min=67, max=340, avg=219.36, stdev=46.08 00:32:09.563 lat (msec): min=67, max=340, avg=219.39, stdev=46.08 00:32:09.563 clat percentiles (msec): 00:32:09.563 | 1.00th=[ 68], 5.00th=[ 140], 10.00th=[ 169], 20.00th=[ 180], 00:32:09.563 | 30.00th=[ 203], 40.00th=[ 224], 50.00th=[ 230], 60.00th=[ 234], 00:32:09.563 | 70.00th=[ 247], 80.00th=[ 253], 90.00th=[ 262], 95.00th=[ 284], 00:32:09.563 | 99.00th=[ 326], 99.50th=[ 342], 99.90th=[ 342], 99.95th=[ 342], 00:32:09.563 | 99.99th=[ 342] 00:32:09.563 bw ( KiB/s): min= 256, max= 512, per=4.73%, avg=288.80, stdev=64.94, samples=20 00:32:09.563 iops : min= 64, max= 128, avg=72.20, stdev=16.23, samples=20 00:32:09.563 lat (msec) : 100=4.07%, 250=71.00%, 500=24.93% 00:32:09.563 cpu : usr=98.38%, sys=1.19%, ctx=31, majf=0, minf=58 00:32:09.563 IO depths : 1=3.1%, 2=6.6%, 4=16.7%, 8=64.1%, 16=9.5%, 32=0.0%, >=64=0.0% 00:32:09.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.563 complete : 0=0.0%, 4=91.6%, 8=2.8%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.563 issued rwts: total=738,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:09.563 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:09.563 filename1: (groupid=0, jobs=1): err= 0: pid=4151528: Tue Nov 26 21:12:58 2024 00:32:09.563 read: IOPS=75, BW=302KiB/s (309kB/s)(3080KiB/10200msec) 00:32:09.563 slat (nsec): min=6319, max=96899, avg=20926.57, stdev=18614.11 00:32:09.563 clat (msec): min=67, max=412, avg=211.45, stdev=56.72 00:32:09.563 lat (msec): min=67, max=412, 
avg=211.47, stdev=56.72 00:32:09.563 clat percentiles (msec): 00:32:09.563 | 1.00th=[ 68], 5.00th=[ 144], 10.00th=[ 146], 20.00th=[ 165], 00:32:09.563 | 30.00th=[ 176], 40.00th=[ 190], 50.00th=[ 222], 60.00th=[ 232], 00:32:09.563 | 70.00th=[ 236], 80.00th=[ 257], 90.00th=[ 279], 95.00th=[ 309], 00:32:09.563 | 99.00th=[ 351], 99.50th=[ 351], 99.90th=[ 414], 99.95th=[ 414], 00:32:09.563 | 99.99th=[ 414] 00:32:09.563 bw ( KiB/s): min= 224, max= 512, per=4.94%, avg=301.60, stdev=75.81, samples=20 00:32:09.563 iops : min= 56, max= 128, avg=75.40, stdev=18.95, samples=20 00:32:09.563 lat (msec) : 100=4.16%, 250=75.58%, 500=20.26% 00:32:09.563 cpu : usr=98.08%, sys=1.38%, ctx=33, majf=0, minf=38 00:32:09.563 IO depths : 1=4.8%, 2=9.7%, 4=20.9%, 8=56.8%, 16=7.8%, 32=0.0%, >=64=0.0% 00:32:09.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.563 complete : 0=0.0%, 4=92.8%, 8=1.6%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.563 issued rwts: total=770,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:09.563 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:09.563 filename2: (groupid=0, jobs=1): err= 0: pid=4151529: Tue Nov 26 21:12:58 2024 00:32:09.563 read: IOPS=59, BW=238KiB/s (244kB/s)(2424KiB/10185msec) 00:32:09.563 slat (nsec): min=5789, max=81942, avg=28637.51, stdev=12407.50 00:32:09.563 clat (msec): min=112, max=514, avg=268.53, stdev=69.71 00:32:09.563 lat (msec): min=112, max=514, avg=268.56, stdev=69.71 00:32:09.563 clat percentiles (msec): 00:32:09.563 | 1.00th=[ 144], 5.00th=[ 167], 10.00th=[ 176], 20.00th=[ 211], 00:32:09.563 | 30.00th=[ 234], 40.00th=[ 241], 50.00th=[ 255], 60.00th=[ 300], 00:32:09.563 | 70.00th=[ 317], 80.00th=[ 334], 90.00th=[ 359], 95.00th=[ 376], 00:32:09.563 | 99.00th=[ 418], 99.50th=[ 498], 99.90th=[ 514], 99.95th=[ 514], 00:32:09.563 | 99.99th=[ 514] 00:32:09.563 bw ( KiB/s): min= 128, max= 384, per=3.88%, avg=236.00, stdev=71.15, samples=20 00:32:09.563 iops : min= 32, max= 96, avg=59.00, 
stdev=17.79, samples=20 00:32:09.563 lat (msec) : 250=47.52%, 500=52.15%, 750=0.33% 00:32:09.563 cpu : usr=98.47%, sys=1.09%, ctx=26, majf=0, minf=25 00:32:09.563 IO depths : 1=2.8%, 2=9.1%, 4=25.1%, 8=53.5%, 16=9.6%, 32=0.0%, >=64=0.0% 00:32:09.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.563 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.563 issued rwts: total=606,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:09.563 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:09.563 filename2: (groupid=0, jobs=1): err= 0: pid=4151530: Tue Nov 26 21:12:58 2024 00:32:09.563 read: IOPS=69, BW=277KiB/s (284kB/s)(2824KiB/10183msec) 00:32:09.563 slat (usec): min=8, max=138, avg=35.98, stdev=29.76 00:32:09.563 clat (msec): min=108, max=364, avg=229.55, stdev=41.75 00:32:09.563 lat (msec): min=108, max=364, avg=229.58, stdev=41.75 00:32:09.563 clat percentiles (msec): 00:32:09.563 | 1.00th=[ 136], 5.00th=[ 171], 10.00th=[ 174], 20.00th=[ 192], 00:32:09.563 | 30.00th=[ 218], 40.00th=[ 226], 50.00th=[ 232], 60.00th=[ 239], 00:32:09.563 | 70.00th=[ 249], 80.00th=[ 255], 90.00th=[ 271], 95.00th=[ 305], 00:32:09.563 | 99.00th=[ 363], 99.50th=[ 363], 99.90th=[ 363], 99.95th=[ 363], 00:32:09.563 | 99.99th=[ 363] 00:32:09.563 bw ( KiB/s): min= 240, max= 384, per=4.53%, avg=276.00, stdev=41.16, samples=20 00:32:09.563 iops : min= 60, max= 96, avg=69.00, stdev=10.29, samples=20 00:32:09.563 lat (msec) : 250=73.65%, 500=26.35% 00:32:09.563 cpu : usr=98.09%, sys=1.29%, ctx=15, majf=0, minf=39 00:32:09.563 IO depths : 1=2.8%, 2=6.1%, 4=15.9%, 8=65.4%, 16=9.8%, 32=0.0%, >=64=0.0% 00:32:09.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.563 complete : 0=0.0%, 4=91.4%, 8=3.1%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.563 issued rwts: total=706,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:09.563 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:09.563 
filename2: (groupid=0, jobs=1): err= 0: pid=4151531: Tue Nov 26 21:12:58 2024 00:32:09.563 read: IOPS=69, BW=277KiB/s (284kB/s)(2824KiB/10199msec) 00:32:09.563 slat (usec): min=5, max=111, avg=41.59, stdev=30.82 00:32:09.563 clat (msec): min=82, max=439, avg=229.93, stdev=57.41 00:32:09.563 lat (msec): min=82, max=439, avg=229.97, stdev=57.41 00:32:09.563 clat percentiles (msec): 00:32:09.563 | 1.00th=[ 84], 5.00th=[ 118], 10.00th=[ 167], 20.00th=[ 182], 00:32:09.563 | 30.00th=[ 215], 40.00th=[ 226], 50.00th=[ 232], 60.00th=[ 239], 00:32:09.563 | 70.00th=[ 253], 80.00th=[ 262], 90.00th=[ 309], 95.00th=[ 326], 00:32:09.563 | 99.00th=[ 363], 99.50th=[ 393], 99.90th=[ 439], 99.95th=[ 439], 00:32:09.563 | 99.99th=[ 439] 00:32:09.563 bw ( KiB/s): min= 128, max= 384, per=4.53%, avg=276.00, stdev=58.24, samples=20 00:32:09.563 iops : min= 32, max= 96, avg=69.00, stdev=14.56, samples=20 00:32:09.563 lat (msec) : 100=2.27%, 250=67.42%, 500=30.31% 00:32:09.563 cpu : usr=97.83%, sys=1.40%, ctx=52, majf=0, minf=26 00:32:09.563 IO depths : 1=1.8%, 2=5.4%, 4=16.7%, 8=65.3%, 16=10.8%, 32=0.0%, >=64=0.0% 00:32:09.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.563 complete : 0=0.0%, 4=91.7%, 8=2.8%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.563 issued rwts: total=706,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:09.564 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:09.564 filename2: (groupid=0, jobs=1): err= 0: pid=4151532: Tue Nov 26 21:12:58 2024 00:32:09.564 read: IOPS=62, BW=251KiB/s (257kB/s)(2560KiB/10200msec) 00:32:09.564 slat (usec): min=9, max=105, avg=29.53, stdev=13.07 00:32:09.564 clat (msec): min=67, max=363, avg=254.76, stdev=70.06 00:32:09.564 lat (msec): min=67, max=363, avg=254.79, stdev=70.06 00:32:09.564 clat percentiles (msec): 00:32:09.564 | 1.00th=[ 68], 5.00th=[ 79], 10.00th=[ 174], 20.00th=[ 203], 00:32:09.564 | 30.00th=[ 230], 40.00th=[ 247], 50.00th=[ 257], 60.00th=[ 266], 00:32:09.564 | 70.00th=[ 
300], 80.00th=[ 326], 90.00th=[ 342], 95.00th=[ 359], 00:32:09.564 | 99.00th=[ 363], 99.50th=[ 363], 99.90th=[ 363], 99.95th=[ 363], 00:32:09.564 | 99.99th=[ 363] 00:32:09.564 bw ( KiB/s): min= 128, max= 512, per=4.09%, avg=249.60, stdev=86.77, samples=20 00:32:09.564 iops : min= 32, max= 128, avg=62.40, stdev=21.69, samples=20 00:32:09.564 lat (msec) : 100=5.00%, 250=40.31%, 500=54.69% 00:32:09.564 cpu : usr=96.98%, sys=1.90%, ctx=53, majf=0, minf=39 00:32:09.564 IO depths : 1=2.8%, 2=9.1%, 4=25.0%, 8=53.4%, 16=9.7%, 32=0.0%, >=64=0.0% 00:32:09.564 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.564 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.564 issued rwts: total=640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:09.564 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:09.564 filename2: (groupid=0, jobs=1): err= 0: pid=4151533: Tue Nov 26 21:12:58 2024 00:32:09.564 read: IOPS=59, BW=239KiB/s (245kB/s)(2432KiB/10180msec) 00:32:09.564 slat (usec): min=8, max=105, avg=26.70, stdev=14.07 00:32:09.564 clat (msec): min=112, max=502, avg=267.67, stdev=69.91 00:32:09.564 lat (msec): min=112, max=502, avg=267.70, stdev=69.91 00:32:09.564 clat percentiles (msec): 00:32:09.564 | 1.00th=[ 144], 5.00th=[ 174], 10.00th=[ 176], 20.00th=[ 197], 00:32:09.564 | 30.00th=[ 232], 40.00th=[ 249], 50.00th=[ 257], 60.00th=[ 284], 00:32:09.564 | 70.00th=[ 309], 80.00th=[ 334], 90.00th=[ 359], 95.00th=[ 376], 00:32:09.564 | 99.00th=[ 409], 99.50th=[ 489], 99.90th=[ 502], 99.95th=[ 502], 00:32:09.564 | 99.99th=[ 502] 00:32:09.564 bw ( KiB/s): min= 128, max= 384, per=3.88%, avg=236.80, stdev=72.60, samples=20 00:32:09.564 iops : min= 32, max= 96, avg=59.20, stdev=18.15, samples=20 00:32:09.564 lat (msec) : 250=42.76%, 500=56.91%, 750=0.33% 00:32:09.564 cpu : usr=98.50%, sys=1.07%, ctx=12, majf=0, minf=34 00:32:09.564 IO depths : 1=3.1%, 2=9.2%, 4=24.5%, 8=53.8%, 16=9.4%, 32=0.0%, >=64=0.0% 00:32:09.564 submit 
: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.564 complete : 0=0.0%, 4=94.1%, 8=0.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.564 issued rwts: total=608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:09.564 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:09.564 filename2: (groupid=0, jobs=1): err= 0: pid=4151534: Tue Nov 26 21:12:58 2024 00:32:09.564 read: IOPS=59, BW=239KiB/s (245kB/s)(2432KiB/10172msec) 00:32:09.564 slat (usec): min=4, max=145, avg=74.39, stdev=18.93 00:32:09.564 clat (msec): min=66, max=413, avg=267.00, stdev=80.97 00:32:09.564 lat (msec): min=66, max=413, avg=267.08, stdev=80.98 00:32:09.564 clat percentiles (msec): 00:32:09.564 | 1.00th=[ 67], 5.00th=[ 91], 10.00th=[ 167], 20.00th=[ 178], 00:32:09.564 | 30.00th=[ 232], 40.00th=[ 249], 50.00th=[ 257], 60.00th=[ 300], 00:32:09.564 | 70.00th=[ 317], 80.00th=[ 347], 90.00th=[ 359], 95.00th=[ 397], 00:32:09.564 | 99.00th=[ 414], 99.50th=[ 414], 99.90th=[ 414], 99.95th=[ 414], 00:32:09.564 | 99.99th=[ 414] 00:32:09.564 bw ( KiB/s): min= 128, max= 512, per=3.88%, avg=236.80, stdev=95.38, samples=20 00:32:09.564 iops : min= 32, max= 128, avg=59.20, stdev=23.85, samples=20 00:32:09.564 lat (msec) : 100=5.26%, 250=36.84%, 500=57.89% 00:32:09.564 cpu : usr=97.31%, sys=1.62%, ctx=137, majf=0, minf=38 00:32:09.564 IO depths : 1=6.1%, 2=12.2%, 4=24.3%, 8=51.0%, 16=6.4%, 32=0.0%, >=64=0.0% 00:32:09.564 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.564 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.564 issued rwts: total=608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:09.564 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:09.564 filename2: (groupid=0, jobs=1): err= 0: pid=4151535: Tue Nov 26 21:12:58 2024 00:32:09.564 read: IOPS=56, BW=227KiB/s (233kB/s)(2304KiB/10134msec) 00:32:09.564 slat (usec): min=8, max=107, avg=56.30, stdev=26.21 00:32:09.564 clat (msec): min=166, max=524, 
avg=280.97, stdev=66.79 00:32:09.564 lat (msec): min=166, max=524, avg=281.02, stdev=66.79 00:32:09.564 clat percentiles (msec): 00:32:09.564 | 1.00th=[ 167], 5.00th=[ 174], 10.00th=[ 176], 20.00th=[ 228], 00:32:09.564 | 30.00th=[ 234], 40.00th=[ 253], 50.00th=[ 279], 60.00th=[ 321], 00:32:09.564 | 70.00th=[ 334], 80.00th=[ 342], 90.00th=[ 355], 95.00th=[ 359], 00:32:09.564 | 99.00th=[ 418], 99.50th=[ 439], 99.90th=[ 527], 99.95th=[ 527], 00:32:09.564 | 99.99th=[ 527] 00:32:09.564 bw ( KiB/s): min= 128, max= 384, per=3.68%, avg=224.00, stdev=81.75, samples=20 00:32:09.564 iops : min= 32, max= 96, avg=56.00, stdev=20.44, samples=20 00:32:09.564 lat (msec) : 250=39.93%, 500=59.72%, 750=0.35% 00:32:09.564 cpu : usr=98.43%, sys=1.13%, ctx=17, majf=0, minf=35 00:32:09.564 IO depths : 1=5.6%, 2=11.8%, 4=25.0%, 8=50.7%, 16=6.9%, 32=0.0%, >=64=0.0% 00:32:09.564 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.564 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.564 issued rwts: total=576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:09.564 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:09.564 filename2: (groupid=0, jobs=1): err= 0: pid=4151536: Tue Nov 26 21:12:58 2024 00:32:09.564 read: IOPS=59, BW=239KiB/s (244kB/s)(2432KiB/10191msec) 00:32:09.564 slat (usec): min=6, max=106, avg=52.25, stdev=24.98 00:32:09.564 clat (msec): min=170, max=472, avg=267.75, stdev=58.72 00:32:09.564 lat (msec): min=170, max=472, avg=267.80, stdev=58.71 00:32:09.564 clat percentiles (msec): 00:32:09.564 | 1.00th=[ 171], 5.00th=[ 174], 10.00th=[ 176], 20.00th=[ 226], 00:32:09.564 | 30.00th=[ 234], 40.00th=[ 249], 50.00th=[ 257], 60.00th=[ 300], 00:32:09.564 | 70.00th=[ 309], 80.00th=[ 330], 90.00th=[ 342], 95.00th=[ 359], 00:32:09.564 | 99.00th=[ 359], 99.50th=[ 376], 99.90th=[ 472], 99.95th=[ 472], 00:32:09.564 | 99.99th=[ 472] 00:32:09.564 bw ( KiB/s): min= 128, max= 384, per=3.88%, avg=236.80, stdev=71.10, 
samples=20 00:32:09.564 iops : min= 32, max= 96, avg=59.20, stdev=17.78, samples=20 00:32:09.564 lat (msec) : 250=42.76%, 500=57.24% 00:32:09.564 cpu : usr=98.07%, sys=1.42%, ctx=36, majf=0, minf=38 00:32:09.564 IO depths : 1=3.8%, 2=10.0%, 4=25.0%, 8=52.5%, 16=8.7%, 32=0.0%, >=64=0.0% 00:32:09.564 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.564 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.564 issued rwts: total=608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:09.564 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:09.564 00:32:09.564 Run status group 0 (all jobs): 00:32:09.564 READ: bw=6089KiB/s (6235kB/s), 226KiB/s-322KiB/s (232kB/s-329kB/s), io=60.6MiB (63.6MB), run=10134-10200msec 00:32:09.564 21:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:32:09.564 21:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:32:09.564 21:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:09.564 21:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:09.564 21:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:32:09.564 21:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:09.564 21:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:09.564 21:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:09.564 21:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:09.564 21:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:09.564 21:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:09.564 21:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:32:09.564 21:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:09.564 21:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:09.564 21:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:09.564 21:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:32:09.564 21:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:09.564 21:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:09.564 21:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:09.564 21:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:09.564 21:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:09.564 21:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:09.564 21:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:09.564 21:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:09.564 21:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:09.564 21:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:32:09.564 21:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:32:09.564 21:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:32:09.564 21:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:09.565 bdev_null0 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 
00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:09.565 [2024-11-26 21:12:59.392186] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:09.565 bdev_null1 
00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:32:09.565 21:12:59 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:09.565 { 00:32:09.565 "params": { 00:32:09.565 "name": "Nvme$subsystem", 00:32:09.565 "trtype": "$TEST_TRANSPORT", 00:32:09.565 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:09.565 "adrfam": "ipv4", 00:32:09.565 "trsvcid": "$NVMF_PORT", 00:32:09.565 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:09.565 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:09.565 "hdgst": ${hdgst:-false}, 00:32:09.565 "ddgst": ${ddgst:-false} 00:32:09.565 }, 00:32:09.565 "method": "bdev_nvme_attach_controller" 00:32:09.565 } 00:32:09.565 EOF 00:32:09.565 )") 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1347 -- # local asan_lib= 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:09.565 { 00:32:09.565 "params": { 00:32:09.565 "name": "Nvme$subsystem", 00:32:09.565 "trtype": "$TEST_TRANSPORT", 00:32:09.565 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:09.565 "adrfam": "ipv4", 00:32:09.565 "trsvcid": "$NVMF_PORT", 00:32:09.565 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:09.565 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:09.565 "hdgst": ${hdgst:-false}, 00:32:09.565 "ddgst": ${ddgst:-false} 00:32:09.565 }, 00:32:09.565 "method": "bdev_nvme_attach_controller" 00:32:09.565 } 00:32:09.565 EOF 00:32:09.565 )") 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file 
<= files )) 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:32:09.565 21:12:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:09.565 "params": { 00:32:09.565 "name": "Nvme0", 00:32:09.565 "trtype": "tcp", 00:32:09.565 "traddr": "10.0.0.2", 00:32:09.565 "adrfam": "ipv4", 00:32:09.565 "trsvcid": "4420", 00:32:09.565 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:09.566 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:09.566 "hdgst": false, 00:32:09.566 "ddgst": false 00:32:09.566 }, 00:32:09.566 "method": "bdev_nvme_attach_controller" 00:32:09.566 },{ 00:32:09.566 "params": { 00:32:09.566 "name": "Nvme1", 00:32:09.566 "trtype": "tcp", 00:32:09.566 "traddr": "10.0.0.2", 00:32:09.566 "adrfam": "ipv4", 00:32:09.566 "trsvcid": "4420", 00:32:09.566 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:09.566 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:09.566 "hdgst": false, 00:32:09.566 "ddgst": false 00:32:09.566 }, 00:32:09.566 "method": "bdev_nvme_attach_controller" 00:32:09.566 }' 00:32:09.566 21:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:09.566 21:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:09.566 21:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:09.566 21:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:09.566 21:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:32:09.566 21:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:09.566 21:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:09.566 21:12:59 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:09.566 21:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:09.566 21:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:09.566 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:32:09.566 ... 00:32:09.566 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:32:09.566 ... 00:32:09.566 fio-3.35 00:32:09.566 Starting 4 threads 00:32:14.826 00:32:14.826 filename0: (groupid=0, jobs=1): err= 0: pid=4152924: Tue Nov 26 21:13:05 2024 00:32:14.826 read: IOPS=1840, BW=14.4MiB/s (15.1MB/s)(71.9MiB/5001msec) 00:32:14.826 slat (nsec): min=6942, max=60494, avg=15682.77, stdev=8303.44 00:32:14.826 clat (usec): min=784, max=7455, avg=4296.85, stdev=648.39 00:32:14.826 lat (usec): min=802, max=7474, avg=4312.54, stdev=648.18 00:32:14.826 clat percentiles (usec): 00:32:14.826 | 1.00th=[ 2671], 5.00th=[ 3425], 10.00th=[ 3687], 20.00th=[ 3982], 00:32:14.826 | 30.00th=[ 4113], 40.00th=[ 4228], 50.00th=[ 4228], 60.00th=[ 4293], 00:32:14.826 | 70.00th=[ 4424], 80.00th=[ 4555], 90.00th=[ 4883], 95.00th=[ 5473], 00:32:14.826 | 99.00th=[ 6652], 99.50th=[ 6915], 99.90th=[ 7177], 99.95th=[ 7373], 00:32:14.826 | 99.99th=[ 7439] 00:32:14.826 bw ( KiB/s): min=14432, max=15072, per=24.88%, avg=14737.44, stdev=186.74, samples=9 00:32:14.826 iops : min= 1804, max= 1884, avg=1842.11, stdev=23.36, samples=9 00:32:14.826 lat (usec) : 1000=0.04% 00:32:14.826 lat (msec) : 2=0.34%, 4=21.13%, 10=78.49% 00:32:14.826 cpu : usr=95.88%, sys=3.60%, ctx=10, majf=0, minf=9 00:32:14.826 IO depths : 1=0.1%, 2=7.3%, 4=64.9%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:14.826 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:14.826 complete : 0=0.0%, 4=92.5%, 8=7.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:14.826 issued rwts: total=9202,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:14.826 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:14.826 filename0: (groupid=0, jobs=1): err= 0: pid=4152925: Tue Nov 26 21:13:05 2024 00:32:14.826 read: IOPS=1894, BW=14.8MiB/s (15.5MB/s)(74.1MiB/5003msec) 00:32:14.826 slat (nsec): min=4525, max=72242, avg=14419.35, stdev=7503.53 00:32:14.826 clat (usec): min=1046, max=7441, avg=4174.40, stdev=614.41 00:32:14.826 lat (usec): min=1060, max=7450, avg=4188.82, stdev=614.80 00:32:14.826 clat percentiles (usec): 00:32:14.826 | 1.00th=[ 2606], 5.00th=[ 3195], 10.00th=[ 3458], 20.00th=[ 3785], 00:32:14.826 | 30.00th=[ 3982], 40.00th=[ 4146], 50.00th=[ 4228], 60.00th=[ 4293], 00:32:14.826 | 70.00th=[ 4359], 80.00th=[ 4490], 90.00th=[ 4752], 95.00th=[ 5211], 00:32:14.826 | 99.00th=[ 6259], 99.50th=[ 6521], 99.90th=[ 7177], 99.95th=[ 7308], 00:32:14.826 | 99.99th=[ 7439] 00:32:14.826 bw ( KiB/s): min=14768, max=15808, per=25.59%, avg=15158.40, stdev=339.85, samples=10 00:32:14.826 iops : min= 1846, max= 1976, avg=1894.80, stdev=42.48, samples=10 00:32:14.826 lat (msec) : 2=0.22%, 4=30.90%, 10=68.88% 00:32:14.826 cpu : usr=93.26%, sys=5.12%, ctx=281, majf=0, minf=0 00:32:14.826 IO depths : 1=0.1%, 2=7.8%, 4=64.2%, 8=28.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:14.826 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:14.826 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:14.826 issued rwts: total=9479,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:14.826 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:14.826 filename1: (groupid=0, jobs=1): err= 0: pid=4152926: Tue Nov 26 21:13:05 2024 00:32:14.826 read: IOPS=1850, BW=14.5MiB/s (15.2MB/s)(72.3MiB/5001msec) 00:32:14.826 slat (nsec): min=4546, max=66046, avg=14741.05, stdev=8283.85 
00:32:14.826 clat (usec): min=794, max=9697, avg=4276.09, stdev=645.62 00:32:14.826 lat (usec): min=809, max=9710, avg=4290.83, stdev=645.53 00:32:14.826 clat percentiles (usec): 00:32:14.826 | 1.00th=[ 2835], 5.00th=[ 3359], 10.00th=[ 3621], 20.00th=[ 3916], 00:32:14.826 | 30.00th=[ 4080], 40.00th=[ 4178], 50.00th=[ 4228], 60.00th=[ 4293], 00:32:14.826 | 70.00th=[ 4359], 80.00th=[ 4490], 90.00th=[ 4817], 95.00th=[ 5538], 00:32:14.826 | 99.00th=[ 6652], 99.50th=[ 6915], 99.90th=[ 7504], 99.95th=[ 7898], 00:32:14.826 | 99.99th=[ 9634] 00:32:14.826 bw ( KiB/s): min=14432, max=15024, per=24.93%, avg=14768.00, stdev=233.38, samples=9 00:32:14.826 iops : min= 1804, max= 1878, avg=1846.00, stdev=29.17, samples=9 00:32:14.826 lat (usec) : 1000=0.02% 00:32:14.826 lat (msec) : 2=0.21%, 4=25.05%, 10=74.72% 00:32:14.826 cpu : usr=95.60%, sys=3.66%, ctx=55, majf=0, minf=9 00:32:14.826 IO depths : 1=0.1%, 2=7.4%, 4=65.1%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:14.826 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:14.826 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:14.826 issued rwts: total=9252,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:14.826 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:14.826 filename1: (groupid=0, jobs=1): err= 0: pid=4152927: Tue Nov 26 21:13:05 2024 00:32:14.826 read: IOPS=1821, BW=14.2MiB/s (14.9MB/s)(71.2MiB/5002msec) 00:32:14.826 slat (nsec): min=4511, max=72202, avg=16794.61, stdev=8559.30 00:32:14.826 clat (usec): min=820, max=7960, avg=4334.47, stdev=628.70 00:32:14.826 lat (usec): min=838, max=7970, avg=4351.26, stdev=628.61 00:32:14.826 clat percentiles (usec): 00:32:14.826 | 1.00th=[ 2900], 5.00th=[ 3556], 10.00th=[ 3785], 20.00th=[ 4015], 00:32:14.826 | 30.00th=[ 4146], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4293], 00:32:14.826 | 70.00th=[ 4424], 80.00th=[ 4555], 90.00th=[ 4883], 95.00th=[ 5604], 00:32:14.826 | 99.00th=[ 6652], 99.50th=[ 7046], 99.90th=[ 
7308], 99.95th=[ 7504], 00:32:14.826 | 99.99th=[ 7963] 00:32:14.826 bw ( KiB/s): min=14256, max=14976, per=24.49%, avg=14510.22, stdev=233.85, samples=9 00:32:14.826 iops : min= 1782, max= 1872, avg=1813.78, stdev=29.23, samples=9 00:32:14.826 lat (usec) : 1000=0.05% 00:32:14.826 lat (msec) : 2=0.22%, 4=19.26%, 10=80.47% 00:32:14.826 cpu : usr=94.30%, sys=4.64%, ctx=21, majf=0, minf=0 00:32:14.826 IO depths : 1=0.1%, 2=7.3%, 4=65.7%, 8=26.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:14.826 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:14.826 complete : 0=0.0%, 4=91.8%, 8=8.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:14.826 issued rwts: total=9113,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:14.826 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:14.826 00:32:14.826 Run status group 0 (all jobs): 00:32:14.826 READ: bw=57.8MiB/s (60.7MB/s), 14.2MiB/s-14.8MiB/s (14.9MB/s-15.5MB/s), io=289MiB (303MB), run=5001-5003msec 00:32:15.085 21:13:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:32:15.085 21:13:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:32:15.085 21:13:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:15.085 21:13:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:15.085 21:13:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:32:15.085 21:13:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:15.085 21:13:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.085 21:13:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:15.085 21:13:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.085 21:13:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:15.085 
21:13:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.085 21:13:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:15.085 21:13:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.085 21:13:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:15.085 21:13:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:15.085 21:13:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:32:15.085 21:13:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:15.085 21:13:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.085 21:13:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:15.085 21:13:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.085 21:13:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:15.085 21:13:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.085 21:13:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:15.085 21:13:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.085 00:32:15.085 real 0m24.697s 00:32:15.085 user 4m36.651s 00:32:15.085 sys 0m6.340s 00:32:15.085 21:13:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:15.085 21:13:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:15.085 ************************************ 00:32:15.085 END TEST fio_dif_rand_params 00:32:15.085 ************************************ 00:32:15.085 21:13:05 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:32:15.086 21:13:05 nvmf_dif -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:15.086 21:13:05 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:15.086 21:13:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:15.086 ************************************ 00:32:15.086 START TEST fio_dif_digest 00:32:15.086 ************************************ 00:32:15.086 21:13:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:32:15.086 21:13:05 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:32:15.086 21:13:05 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:32:15.086 21:13:05 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:32:15.086 21:13:05 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:32:15.086 21:13:05 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:32:15.086 21:13:05 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:32:15.086 21:13:05 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:32:15.086 21:13:05 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:32:15.086 21:13:05 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:32:15.086 21:13:05 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:32:15.086 21:13:05 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:32:15.086 21:13:05 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:32:15.086 21:13:05 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:32:15.086 21:13:05 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:32:15.086 21:13:05 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:32:15.086 21:13:05 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:32:15.086 21:13:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 
00:32:15.086 21:13:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:15.086 bdev_null0 00:32:15.086 21:13:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.086 21:13:05 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:15.086 21:13:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.086 21:13:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:15.086 21:13:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.086 21:13:05 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:15.086 21:13:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.086 21:13:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:15.086 21:13:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.086 21:13:05 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:15.086 21:13:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.086 21:13:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:15.086 [2024-11-26 21:13:05.996608] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:15.086 21:13:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.086 21:13:05 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:32:15.086 21:13:05 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:32:15.086 21:13:06 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:15.086 21:13:06 nvmf_dif.fio_dif_digest 
-- nvmf/common.sh@560 -- # config=() 00:32:15.086 21:13:06 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:32:15.086 21:13:06 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:15.086 21:13:06 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:15.086 { 00:32:15.086 "params": { 00:32:15.086 "name": "Nvme$subsystem", 00:32:15.086 "trtype": "$TEST_TRANSPORT", 00:32:15.086 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:15.086 "adrfam": "ipv4", 00:32:15.086 "trsvcid": "$NVMF_PORT", 00:32:15.086 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:15.086 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:15.086 "hdgst": ${hdgst:-false}, 00:32:15.086 "ddgst": ${ddgst:-false} 00:32:15.086 }, 00:32:15.086 "method": "bdev_nvme_attach_controller" 00:32:15.086 } 00:32:15.086 EOF 00:32:15.086 )") 00:32:15.086 21:13:06 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:15.086 21:13:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:15.086 21:13:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:15.086 21:13:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:15.086 21:13:06 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:32:15.086 21:13:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:15.086 21:13:06 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:32:15.086 21:13:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:15.086 21:13:06 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@1345 -- # shift 00:32:15.086 21:13:06 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:32:15.086 21:13:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:15.086 21:13:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:15.086 21:13:06 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:32:15.086 21:13:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:15.086 21:13:06 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:32:15.086 21:13:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:32:15.086 21:13:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:15.086 21:13:06 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:32:15.086 21:13:06 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:32:15.086 21:13:06 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:32:15.086 21:13:06 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:15.086 "params": { 00:32:15.086 "name": "Nvme0", 00:32:15.086 "trtype": "tcp", 00:32:15.086 "traddr": "10.0.0.2", 00:32:15.086 "adrfam": "ipv4", 00:32:15.086 "trsvcid": "4420", 00:32:15.086 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:15.086 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:15.086 "hdgst": true, 00:32:15.086 "ddgst": true 00:32:15.086 }, 00:32:15.086 "method": "bdev_nvme_attach_controller" 00:32:15.086 }' 00:32:15.343 21:13:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:15.343 21:13:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:15.344 21:13:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:15.344 21:13:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:15.344 21:13:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:32:15.344 21:13:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:15.344 21:13:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:15.344 21:13:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:15.344 21:13:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:15.344 21:13:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:15.601 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:32:15.601 ... 
00:32:15.601 fio-3.35 00:32:15.601 Starting 3 threads 00:32:27.799 00:32:27.799 filename0: (groupid=0, jobs=1): err= 0: pid=4153797: Tue Nov 26 21:13:16 2024 00:32:27.799 read: IOPS=202, BW=25.3MiB/s (26.5MB/s)(254MiB/10048msec) 00:32:27.799 slat (nsec): min=4264, max=82870, avg=18683.23, stdev=4760.63 00:32:27.799 clat (usec): min=7838, max=52423, avg=14767.28, stdev=1625.26 00:32:27.799 lat (usec): min=7857, max=52443, avg=14785.96, stdev=1625.40 00:32:27.799 clat percentiles (usec): 00:32:27.799 | 1.00th=[12125], 5.00th=[13042], 10.00th=[13435], 20.00th=[13829], 00:32:27.799 | 30.00th=[14222], 40.00th=[14484], 50.00th=[14746], 60.00th=[15008], 00:32:27.799 | 70.00th=[15270], 80.00th=[15664], 90.00th=[16188], 95.00th=[16581], 00:32:27.799 | 99.00th=[17433], 99.50th=[17957], 99.90th=[26346], 99.95th=[49021], 00:32:27.799 | 99.99th=[52167] 00:32:27.799 bw ( KiB/s): min=25088, max=27136, per=32.95%, avg=26022.40, stdev=500.24, samples=20 00:32:27.799 iops : min= 196, max= 212, avg=203.30, stdev= 3.91, samples=20 00:32:27.799 lat (msec) : 10=0.44%, 20=99.31%, 50=0.20%, 100=0.05% 00:32:27.799 cpu : usr=94.21%, sys=5.30%, ctx=26, majf=0, minf=147 00:32:27.799 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:27.799 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.799 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.799 issued rwts: total=2035,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:27.799 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:27.799 filename0: (groupid=0, jobs=1): err= 0: pid=4153798: Tue Nov 26 21:13:16 2024 00:32:27.799 read: IOPS=208, BW=26.1MiB/s (27.3MB/s)(262MiB/10046msec) 00:32:27.799 slat (nsec): min=5089, max=46588, avg=19554.13, stdev=5216.07 00:32:27.799 clat (usec): min=10548, max=55076, avg=14341.53, stdev=2097.48 00:32:27.799 lat (usec): min=10564, max=55095, avg=14361.09, stdev=2097.41 00:32:27.799 clat percentiles (usec): 
00:32:27.799 | 1.00th=[11863], 5.00th=[12649], 10.00th=[13042], 20.00th=[13435], 00:32:27.799 | 30.00th=[13698], 40.00th=[13960], 50.00th=[14222], 60.00th=[14484], 00:32:27.799 | 70.00th=[14746], 80.00th=[15008], 90.00th=[15533], 95.00th=[16057], 00:32:27.799 | 99.00th=[16909], 99.50th=[17433], 99.90th=[52167], 99.95th=[54264], 00:32:27.799 | 99.99th=[55313] 00:32:27.799 bw ( KiB/s): min=25600, max=27392, per=33.91%, avg=26777.60, stdev=487.14, samples=20 00:32:27.799 iops : min= 200, max= 214, avg=209.20, stdev= 3.81, samples=20 00:32:27.799 lat (msec) : 20=99.62%, 50=0.19%, 100=0.19% 00:32:27.799 cpu : usr=94.07%, sys=5.36%, ctx=27, majf=0, minf=140 00:32:27.799 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:27.800 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.800 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.800 issued rwts: total=2095,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:27.800 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:27.800 filename0: (groupid=0, jobs=1): err= 0: pid=4153799: Tue Nov 26 21:13:16 2024 00:32:27.800 read: IOPS=205, BW=25.7MiB/s (27.0MB/s)(259MiB/10045msec) 00:32:27.800 slat (usec): min=4, max=111, avg=17.33, stdev= 4.82 00:32:27.800 clat (usec): min=9098, max=56194, avg=14522.55, stdev=1678.26 00:32:27.800 lat (usec): min=9125, max=56208, avg=14539.89, stdev=1678.02 00:32:27.800 clat percentiles (usec): 00:32:27.800 | 1.00th=[11731], 5.00th=[12780], 10.00th=[13173], 20.00th=[13698], 00:32:27.800 | 30.00th=[13960], 40.00th=[14222], 50.00th=[14484], 60.00th=[14746], 00:32:27.800 | 70.00th=[15008], 80.00th=[15270], 90.00th=[15795], 95.00th=[16319], 00:32:27.800 | 99.00th=[17433], 99.50th=[17957], 99.90th=[20055], 99.95th=[53740], 00:32:27.800 | 99.99th=[56361] 00:32:27.800 bw ( KiB/s): min=25600, max=27648, per=33.50%, avg=26457.60, stdev=540.03, samples=20 00:32:27.800 iops : min= 200, max= 216, avg=206.70, stdev= 
4.22, samples=20 00:32:27.800 lat (msec) : 10=0.29%, 20=99.57%, 50=0.05%, 100=0.10% 00:32:27.800 cpu : usr=94.62%, sys=4.85%, ctx=18, majf=0, minf=191 00:32:27.800 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:27.800 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.800 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.800 issued rwts: total=2069,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:27.800 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:27.800 00:32:27.800 Run status group 0 (all jobs): 00:32:27.800 READ: bw=77.1MiB/s (80.9MB/s), 25.3MiB/s-26.1MiB/s (26.5MB/s-27.3MB/s), io=775MiB (813MB), run=10045-10048msec 00:32:27.800 21:13:17 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:32:27.800 21:13:17 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:32:27.800 21:13:17 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:32:27.800 21:13:17 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:27.800 21:13:17 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:32:27.800 21:13:17 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:27.800 21:13:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.800 21:13:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:27.800 21:13:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.800 21:13:17 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:27.800 21:13:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.800 21:13:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:27.800 21:13:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.800 
00:32:27.800 real 0m11.247s 00:32:27.800 user 0m29.652s 00:32:27.800 sys 0m1.871s 00:32:27.800 21:13:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:27.800 21:13:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:27.800 ************************************ 00:32:27.800 END TEST fio_dif_digest 00:32:27.800 ************************************ 00:32:27.800 21:13:17 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:32:27.800 21:13:17 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:32:27.800 21:13:17 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:27.800 21:13:17 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:32:27.800 21:13:17 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:27.800 21:13:17 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:32:27.800 21:13:17 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:27.800 21:13:17 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:27.800 rmmod nvme_tcp 00:32:27.800 rmmod nvme_fabrics 00:32:27.800 rmmod nvme_keyring 00:32:27.800 21:13:17 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:27.800 21:13:17 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:32:27.800 21:13:17 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:32:27.800 21:13:17 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 4147615 ']' 00:32:27.800 21:13:17 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 4147615 00:32:27.800 21:13:17 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 4147615 ']' 00:32:27.800 21:13:17 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 4147615 00:32:27.800 21:13:17 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:32:27.800 21:13:17 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:27.800 21:13:17 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4147615 00:32:27.800 21:13:17 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:32:27.800 21:13:17 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:27.800 21:13:17 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4147615' 00:32:27.800 killing process with pid 4147615 00:32:27.800 21:13:17 nvmf_dif -- common/autotest_common.sh@973 -- # kill 4147615 00:32:27.800 21:13:17 nvmf_dif -- common/autotest_common.sh@978 -- # wait 4147615 00:32:27.800 21:13:17 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:32:27.800 21:13:17 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:27.800 Waiting for block devices as requested 00:32:27.800 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:32:28.060 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:28.060 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:28.318 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:28.318 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:28.318 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:28.318 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:28.576 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:28.576 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:28.576 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:28.576 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:28.576 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:28.833 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:28.833 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:28.833 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:28.833 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:29.091 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:29.091 21:13:19 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:29.091 21:13:19 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:29.091 21:13:19 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:32:29.091 21:13:19 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:32:29.091 21:13:19 nvmf_dif -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:29.091 21:13:19 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:32:29.091 21:13:19 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:29.091 21:13:19 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:29.091 21:13:19 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:29.091 21:13:19 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:29.091 21:13:19 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:31.626 21:13:21 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:31.626 00:32:31.626 real 1m7.429s 00:32:31.626 user 6m34.995s 00:32:31.626 sys 0m17.145s 00:32:31.626 21:13:21 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:31.626 21:13:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:31.626 ************************************ 00:32:31.626 END TEST nvmf_dif 00:32:31.626 ************************************ 00:32:31.626 21:13:21 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:31.626 21:13:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:31.626 21:13:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:31.626 21:13:21 -- common/autotest_common.sh@10 -- # set +x 00:32:31.626 ************************************ 00:32:31.626 START TEST nvmf_abort_qd_sizes 00:32:31.626 ************************************ 00:32:31.626 21:13:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:31.626 * Looking for test storage... 
00:32:31.626 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:31.626 21:13:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:31.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:31.627 --rc genhtml_branch_coverage=1 00:32:31.627 --rc genhtml_function_coverage=1 00:32:31.627 --rc genhtml_legend=1 00:32:31.627 --rc geninfo_all_blocks=1 00:32:31.627 --rc geninfo_unexecuted_blocks=1 00:32:31.627 00:32:31.627 ' 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:31.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:31.627 --rc genhtml_branch_coverage=1 00:32:31.627 --rc genhtml_function_coverage=1 00:32:31.627 --rc genhtml_legend=1 00:32:31.627 --rc 
geninfo_all_blocks=1 00:32:31.627 --rc geninfo_unexecuted_blocks=1 00:32:31.627 00:32:31.627 ' 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:31.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:31.627 --rc genhtml_branch_coverage=1 00:32:31.627 --rc genhtml_function_coverage=1 00:32:31.627 --rc genhtml_legend=1 00:32:31.627 --rc geninfo_all_blocks=1 00:32:31.627 --rc geninfo_unexecuted_blocks=1 00:32:31.627 00:32:31.627 ' 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:31.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:31.627 --rc genhtml_branch_coverage=1 00:32:31.627 --rc genhtml_function_coverage=1 00:32:31.627 --rc genhtml_legend=1 00:32:31.627 --rc geninfo_all_blocks=1 00:32:31.627 --rc geninfo_unexecuted_blocks=1 00:32:31.627 00:32:31.627 ' 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:31.627 21:13:22 nvmf_abort_qd_sizes 
-- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:31.627 21:13:22 
nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:31.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:31.627 21:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:31.628 21:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:31.628 21:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:32:31.628 21:13:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:33.536 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:33.536 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:32:33.536 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:33.536 21:13:24 nvmf_abort_qd_sizes -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:32:33.536 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:33.536 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:33.536 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:33.537 21:13:24 
nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:33.537 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:33.537 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:33.537 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:33.537 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:33.537 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:33.537 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.271 ms 00:32:33.537 00:32:33.537 --- 10.0.0.2 ping statistics --- 00:32:33.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:33.537 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:33.537 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:33.537 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:32:33.537 00:32:33.537 --- 10.0.0.1 ping statistics --- 00:32:33.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:33.537 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:32:33.537 21:13:24 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:34.559 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:34.559 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:34.559 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:34.559 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:34.559 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:34.559 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:34.559 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:34.819 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:34.819 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:34.819 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:34.819 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:34.819 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:34.819 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:34.819 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:34.819 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:34.819 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:35.761 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:32:35.761 21:13:26 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:35.761 21:13:26 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:35.761 21:13:26 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:35.761 21:13:26 
nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:35.761 21:13:26 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:35.761 21:13:26 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:35.761 21:13:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:32:35.761 21:13:26 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:35.761 21:13:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:35.761 21:13:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:35.761 21:13:26 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=4158622 00:32:35.761 21:13:26 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:32:35.761 21:13:26 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 4158622 00:32:35.761 21:13:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 4158622 ']' 00:32:35.761 21:13:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:35.761 21:13:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:35.761 21:13:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:35.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:35.761 21:13:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:35.761 21:13:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:35.761 [2024-11-26 21:13:26.677176] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:32:35.761 [2024-11-26 21:13:26.677243] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:36.022 [2024-11-26 21:13:26.756774] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:36.022 [2024-11-26 21:13:26.823767] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:36.022 [2024-11-26 21:13:26.823831] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:36.022 [2024-11-26 21:13:26.823859] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:36.022 [2024-11-26 21:13:26.823870] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:36.022 [2024-11-26 21:13:26.823884] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:36.022 [2024-11-26 21:13:26.828710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:36.022 [2024-11-26 21:13:26.828743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:36.022 [2024-11-26 21:13:26.828859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:36.022 [2024-11-26 21:13:26.828862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:36.022 21:13:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:36.022 21:13:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:32:36.022 21:13:26 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:36.022 21:13:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:36.022 21:13:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:36.282 21:13:26 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:36.282 21:13:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:32:36.282 21:13:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:32:36.282 21:13:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:32:36.282 21:13:26 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:32:36.282 21:13:26 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:32:36.282 21:13:26 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:88:00.0 ]] 00:32:36.282 21:13:26 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:32:36.282 21:13:26 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:32:36.282 21:13:26 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 
00:32:36.282 21:13:26 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:32:36.282 21:13:26 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:32:36.282 21:13:26 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:32:36.282 21:13:26 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:32:36.282 21:13:26 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:88:00.0 00:32:36.282 21:13:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:32:36.282 21:13:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:32:36.282 21:13:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:32:36.282 21:13:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:36.282 21:13:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:36.282 21:13:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:36.282 ************************************ 00:32:36.282 START TEST spdk_target_abort 00:32:36.282 ************************************ 00:32:36.282 21:13:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:32:36.282 21:13:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:32:36.282 21:13:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:32:36.282 21:13:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.282 21:13:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:39.570 spdk_targetn1 00:32:39.570 21:13:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.570 21:13:29 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:39.570 21:13:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.570 21:13:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:39.570 [2024-11-26 21:13:29.857008] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:39.570 21:13:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.570 21:13:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:32:39.570 21:13:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.570 21:13:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:39.570 21:13:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.570 21:13:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:32:39.570 21:13:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.570 21:13:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:39.570 21:13:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.570 21:13:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:32:39.570 21:13:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.570 21:13:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:39.570 [2024-11-26 21:13:29.905369] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:39.570 21:13:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.570 21:13:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:32:39.570 21:13:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:39.570 21:13:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:39.570 21:13:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:32:39.570 21:13:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:39.570 21:13:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:32:39.570 21:13:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:39.570 21:13:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:39.570 21:13:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:39.570 21:13:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:39.570 21:13:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:39.570 21:13:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:39.570 21:13:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:39.570 21:13:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:39.570 21:13:29 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:32:39.570 21:13:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:39.570 21:13:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:39.570 21:13:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:39.570 21:13:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:39.570 21:13:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:39.570 21:13:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:42.855 Initializing NVMe Controllers 00:32:42.855 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:42.855 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:42.855 Initialization complete. Launching workers. 
00:32:42.855 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11749, failed: 0 00:32:42.855 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1158, failed to submit 10591 00:32:42.855 success 728, unsuccessful 430, failed 0 00:32:42.855 21:13:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:42.855 21:13:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:46.140 Initializing NVMe Controllers 00:32:46.140 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:46.140 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:46.140 Initialization complete. Launching workers. 00:32:46.140 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8548, failed: 0 00:32:46.140 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1243, failed to submit 7305 00:32:46.140 success 302, unsuccessful 941, failed 0 00:32:46.140 21:13:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:46.140 21:13:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:48.672 Initializing NVMe Controllers 00:32:48.672 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:48.672 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:48.672 Initialization complete. Launching workers. 
00:32:48.672 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31209, failed: 0 00:32:48.672 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2771, failed to submit 28438 00:32:48.672 success 531, unsuccessful 2240, failed 0 00:32:48.672 21:13:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:32:48.672 21:13:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.672 21:13:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:48.930 21:13:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.930 21:13:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:32:48.930 21:13:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.930 21:13:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:50.310 21:13:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.310 21:13:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 4158622 00:32:50.310 21:13:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 4158622 ']' 00:32:50.310 21:13:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 4158622 00:32:50.310 21:13:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:32:50.310 21:13:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:50.310 21:13:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4158622 00:32:50.310 21:13:41 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:50.310 21:13:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:50.310 21:13:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4158622' 00:32:50.310 killing process with pid 4158622 00:32:50.310 21:13:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 4158622 00:32:50.310 21:13:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 4158622 00:32:50.310 00:32:50.310 real 0m14.218s 00:32:50.310 user 0m53.678s 00:32:50.310 sys 0m2.793s 00:32:50.310 21:13:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:50.310 21:13:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:50.310 ************************************ 00:32:50.310 END TEST spdk_target_abort 00:32:50.310 ************************************ 00:32:50.571 21:13:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:32:50.571 21:13:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:50.571 21:13:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:50.571 21:13:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:50.571 ************************************ 00:32:50.571 START TEST kernel_target_abort 00:32:50.571 ************************************ 00:32:50.571 21:13:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:32:50.571 21:13:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:32:50.571 21:13:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:32:50.571 21:13:41 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:50.571 21:13:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:50.571 21:13:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:50.571 21:13:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:50.571 21:13:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:50.571 21:13:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:50.571 21:13:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:50.571 21:13:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:50.571 21:13:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:50.571 21:13:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:32:50.571 21:13:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:32:50.571 21:13:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:32:50.571 21:13:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:50.571 21:13:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:50.571 21:13:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:50.571 21:13:41 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@667 -- # local block nvme 00:32:50.571 21:13:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:32:50.571 21:13:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:32:50.571 21:13:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:50.571 21:13:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:51.511 Waiting for block devices as requested 00:32:51.511 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:32:51.772 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:51.772 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:51.772 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:52.030 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:52.030 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:52.030 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:52.030 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:52.291 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:52.291 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:52.291 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:52.291 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:52.551 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:52.551 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:52.551 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:52.551 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:52.811 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:52.811 21:13:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:32:52.811 21:13:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:52.811 21:13:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:32:52.811 21:13:43 
nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:32:52.811 21:13:43 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:52.811 21:13:43 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:32:52.811 21:13:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:32:52.811 21:13:43 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:32:52.811 21:13:43 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:52.811 No valid GPT data, bailing 00:32:52.811 21:13:43 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:52.811 21:13:43 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:32:52.811 21:13:43 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:32:52.811 21:13:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:32:52.811 21:13:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:32:52.811 21:13:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:52.811 21:13:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:53.071 21:13:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:53.072 21:13:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:32:53.072 21:13:43 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@695 -- # echo 1 00:32:53.072 21:13:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:32:53.072 21:13:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:32:53.072 21:13:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:32:53.072 21:13:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:32:53.072 21:13:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:32:53.072 21:13:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:32:53.072 21:13:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:53.072 21:13:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:32:53.072 00:32:53.072 Discovery Log Number of Records 2, Generation counter 2 00:32:53.072 =====Discovery Log Entry 0====== 00:32:53.072 trtype: tcp 00:32:53.072 adrfam: ipv4 00:32:53.072 subtype: current discovery subsystem 00:32:53.072 treq: not specified, sq flow control disable supported 00:32:53.072 portid: 1 00:32:53.072 trsvcid: 4420 00:32:53.072 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:53.072 traddr: 10.0.0.1 00:32:53.072 eflags: none 00:32:53.072 sectype: none 00:32:53.072 =====Discovery Log Entry 1====== 00:32:53.072 trtype: tcp 00:32:53.072 adrfam: ipv4 00:32:53.072 subtype: nvme subsystem 00:32:53.072 treq: not specified, sq flow control disable supported 00:32:53.072 portid: 1 00:32:53.072 trsvcid: 4420 00:32:53.072 subnqn: nqn.2016-06.io.spdk:testnqn 00:32:53.072 traddr: 10.0.0.1 00:32:53.072 eflags: none 00:32:53.072 sectype: none 00:32:53.072 21:13:43 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:32:53.072 21:13:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:53.072 21:13:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:53.072 21:13:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:32:53.072 21:13:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:53.072 21:13:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:32:53.072 21:13:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:53.072 21:13:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:53.072 21:13:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:53.072 21:13:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:53.072 21:13:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:53.072 21:13:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:53.072 21:13:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:53.072 21:13:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:53.072 21:13:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:32:53.072 21:13:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for 
r in trtype adrfam traddr trsvcid subnqn 00:32:53.072 21:13:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:32:53.072 21:13:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:53.072 21:13:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:53.072 21:13:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:53.072 21:13:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:56.364 Initializing NVMe Controllers 00:32:56.365 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:56.365 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:56.365 Initialization complete. Launching workers. 
00:32:56.365 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 40287, failed: 0 00:32:56.365 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 40287, failed to submit 0 00:32:56.365 success 0, unsuccessful 40287, failed 0 00:32:56.365 21:13:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:56.365 21:13:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:59.658 Initializing NVMe Controllers 00:32:59.658 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:59.658 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:59.658 Initialization complete. Launching workers. 00:32:59.658 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 75931, failed: 0 00:32:59.658 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 19142, failed to submit 56789 00:32:59.658 success 0, unsuccessful 19142, failed 0 00:32:59.658 21:13:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:59.658 21:13:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:02.954 Initializing NVMe Controllers 00:33:02.954 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:02.954 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:02.954 Initialization complete. Launching workers. 
00:33:02.954 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 73978, failed: 0 00:33:02.954 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 18474, failed to submit 55504 00:33:02.954 success 0, unsuccessful 18474, failed 0 00:33:02.954 21:13:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:33:02.954 21:13:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:33:02.954 21:13:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:33:02.954 21:13:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:02.954 21:13:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:02.954 21:13:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:02.954 21:13:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:02.954 21:13:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:33:02.954 21:13:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:33:02.954 21:13:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:03.520 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:03.520 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:03.520 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:03.520 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:03.520 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:03.520 
0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:03.520 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:03.520 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:03.520 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:03.780 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:03.780 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:03.780 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:03.780 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:03.780 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:03.780 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:03.780 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:04.719 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:33:04.720 00:33:04.720 real 0m14.250s 00:33:04.720 user 0m5.840s 00:33:04.720 sys 0m3.295s 00:33:04.720 21:13:55 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:04.720 21:13:55 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:04.720 ************************************ 00:33:04.720 END TEST kernel_target_abort 00:33:04.720 ************************************ 00:33:04.720 21:13:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:33:04.720 21:13:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:33:04.720 21:13:55 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:04.720 21:13:55 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:33:04.720 21:13:55 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:04.720 21:13:55 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:33:04.720 21:13:55 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:04.720 21:13:55 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:04.720 rmmod nvme_tcp 00:33:04.720 rmmod nvme_fabrics 00:33:04.720 rmmod nvme_keyring 00:33:04.720 21:13:55 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:33:04.720 21:13:55 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:33:04.720 21:13:55 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:33:04.720 21:13:55 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 4158622 ']' 00:33:04.720 21:13:55 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 4158622 00:33:04.720 21:13:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 4158622 ']' 00:33:04.720 21:13:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 4158622 00:33:04.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (4158622) - No such process 00:33:04.720 21:13:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 4158622 is not found' 00:33:04.720 Process with pid 4158622 is not found 00:33:04.720 21:13:55 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:33:04.720 21:13:55 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:06.099 Waiting for block devices as requested 00:33:06.099 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:33:06.099 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:06.099 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:06.357 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:06.357 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:06.357 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:06.357 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:06.615 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:06.615 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:06.615 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:06.615 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:06.874 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:06.875 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:06.875 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:06.875 
0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:06.875 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:07.134 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:07.134 21:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:07.134 21:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:07.134 21:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:33:07.134 21:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:33:07.134 21:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:07.134 21:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:33:07.134 21:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:07.134 21:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:07.134 21:13:57 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:07.134 21:13:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:07.134 21:13:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:09.707 21:14:00 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:09.707 00:33:09.707 real 0m38.020s 00:33:09.707 user 1m1.751s 00:33:09.707 sys 0m9.640s 00:33:09.707 21:14:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:09.707 21:14:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:09.707 ************************************ 00:33:09.707 END TEST nvmf_abort_qd_sizes 00:33:09.707 ************************************ 00:33:09.707 21:14:00 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:33:09.707 21:14:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:09.707 21:14:00 -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:33:09.708 21:14:00 -- common/autotest_common.sh@10 -- # set +x 00:33:09.708 ************************************ 00:33:09.708 START TEST keyring_file 00:33:09.708 ************************************ 00:33:09.708 21:14:00 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:33:09.708 * Looking for test storage... 00:33:09.708 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:33:09.708 21:14:00 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:09.708 21:14:00 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:33:09.708 21:14:00 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:09.708 21:14:00 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:09.708 21:14:00 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:09.708 21:14:00 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:09.708 21:14:00 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:09.708 21:14:00 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:33:09.708 21:14:00 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:33:09.708 21:14:00 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:33:09.708 21:14:00 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:33:09.708 21:14:00 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:33:09.708 21:14:00 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:33:09.708 21:14:00 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:33:09.708 21:14:00 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:09.708 21:14:00 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:33:09.708 21:14:00 keyring_file -- scripts/common.sh@345 -- # : 1 00:33:09.708 21:14:00 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:09.708 21:14:00 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:09.708 21:14:00 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:33:09.708 21:14:00 keyring_file -- scripts/common.sh@353 -- # local d=1 00:33:09.708 21:14:00 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:09.708 21:14:00 keyring_file -- scripts/common.sh@355 -- # echo 1 00:33:09.708 21:14:00 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:33:09.708 21:14:00 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:33:09.708 21:14:00 keyring_file -- scripts/common.sh@353 -- # local d=2 00:33:09.708 21:14:00 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:09.708 21:14:00 keyring_file -- scripts/common.sh@355 -- # echo 2 00:33:09.708 21:14:00 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:33:09.708 21:14:00 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:09.708 21:14:00 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:09.708 21:14:00 keyring_file -- scripts/common.sh@368 -- # return 0 00:33:09.708 21:14:00 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:09.708 21:14:00 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:09.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:09.708 --rc genhtml_branch_coverage=1 00:33:09.708 --rc genhtml_function_coverage=1 00:33:09.708 --rc genhtml_legend=1 00:33:09.708 --rc geninfo_all_blocks=1 00:33:09.708 --rc geninfo_unexecuted_blocks=1 00:33:09.708 00:33:09.708 ' 00:33:09.708 21:14:00 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:09.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:09.708 --rc genhtml_branch_coverage=1 00:33:09.708 --rc genhtml_function_coverage=1 00:33:09.708 --rc genhtml_legend=1 00:33:09.708 --rc geninfo_all_blocks=1 00:33:09.708 --rc 
geninfo_unexecuted_blocks=1 00:33:09.708 00:33:09.708 ' 00:33:09.708 21:14:00 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:09.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:09.708 --rc genhtml_branch_coverage=1 00:33:09.708 --rc genhtml_function_coverage=1 00:33:09.708 --rc genhtml_legend=1 00:33:09.708 --rc geninfo_all_blocks=1 00:33:09.708 --rc geninfo_unexecuted_blocks=1 00:33:09.708 00:33:09.708 ' 00:33:09.708 21:14:00 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:09.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:09.708 --rc genhtml_branch_coverage=1 00:33:09.708 --rc genhtml_function_coverage=1 00:33:09.708 --rc genhtml_legend=1 00:33:09.708 --rc geninfo_all_blocks=1 00:33:09.708 --rc geninfo_unexecuted_blocks=1 00:33:09.708 00:33:09.708 ' 00:33:09.708 21:14:00 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:33:09.708 21:14:00 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:09.708 21:14:00 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:33:09.708 21:14:00 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:09.708 21:14:00 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:09.708 21:14:00 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:09.708 21:14:00 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:09.708 21:14:00 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:09.708 21:14:00 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:09.708 21:14:00 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:09.708 21:14:00 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:09.708 21:14:00 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:09.708 21:14:00 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:09.708 21:14:00 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:09.708 21:14:00 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:09.708 21:14:00 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:09.708 21:14:00 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:09.708 21:14:00 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:09.708 21:14:00 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:09.708 21:14:00 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:09.708 21:14:00 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:33:09.708 21:14:00 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:09.708 21:14:00 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:09.708 21:14:00 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:09.708 21:14:00 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.708 21:14:00 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.708 21:14:00 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.708 21:14:00 keyring_file -- paths/export.sh@5 -- # export PATH 00:33:09.708 21:14:00 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.708 21:14:00 keyring_file -- nvmf/common.sh@51 -- # : 0 00:33:09.708 21:14:00 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:09.708 21:14:00 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:09.708 21:14:00 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:09.708 21:14:00 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:09.708 21:14:00 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:09.708 21:14:00 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:33:09.708 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:09.708 21:14:00 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:09.708 21:14:00 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:09.708 21:14:00 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:09.708 21:14:00 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:33:09.708 21:14:00 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:33:09.708 21:14:00 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:33:09.708 21:14:00 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:33:09.708 21:14:00 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:33:09.708 21:14:00 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:33:09.708 21:14:00 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:33:09.708 21:14:00 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:33:09.708 21:14:00 keyring_file -- keyring/common.sh@17 -- # name=key0 00:33:09.708 21:14:00 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:09.708 21:14:00 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:09.708 21:14:00 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:09.708 21:14:00 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.GVcRCfYOYZ 00:33:09.708 21:14:00 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:09.708 21:14:00 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:09.708 21:14:00 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:33:09.708 21:14:00 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:33:09.708 21:14:00 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:33:09.708 21:14:00 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:33:09.708 21:14:00 keyring_file -- nvmf/common.sh@733 -- # python - 00:33:09.708 21:14:00 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.GVcRCfYOYZ 00:33:09.709 21:14:00 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.GVcRCfYOYZ 00:33:09.709 21:14:00 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.GVcRCfYOYZ 00:33:09.709 21:14:00 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:33:09.709 21:14:00 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:33:09.709 21:14:00 keyring_file -- keyring/common.sh@17 -- # name=key1 00:33:09.709 21:14:00 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:33:09.709 21:14:00 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:09.709 21:14:00 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:09.709 21:14:00 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.TafSWxL6b8 00:33:09.709 21:14:00 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:33:09.709 21:14:00 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:33:09.709 21:14:00 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:33:09.709 21:14:00 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:33:09.709 21:14:00 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:33:09.709 21:14:00 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:33:09.709 21:14:00 keyring_file -- nvmf/common.sh@733 -- # python - 00:33:09.709 21:14:00 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.TafSWxL6b8 00:33:09.709 21:14:00 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.TafSWxL6b8 00:33:09.709 21:14:00 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.TafSWxL6b8 
00:33:09.709 21:14:00 keyring_file -- keyring/file.sh@30 -- # tgtpid=4164394 00:33:09.709 21:14:00 keyring_file -- keyring/file.sh@32 -- # waitforlisten 4164394 00:33:09.709 21:14:00 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:33:09.709 21:14:00 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 4164394 ']' 00:33:09.709 21:14:00 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:09.709 21:14:00 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:09.709 21:14:00 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:09.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:09.709 21:14:00 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:09.709 21:14:00 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:09.709 [2024-11-26 21:14:00.358902] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:33:09.709 [2024-11-26 21:14:00.359009] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4164394 ] 00:33:09.709 [2024-11-26 21:14:00.430628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:09.709 [2024-11-26 21:14:00.493223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:09.969 21:14:00 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:09.969 21:14:00 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:33:09.969 21:14:00 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:33:09.969 21:14:00 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.969 21:14:00 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:09.969 [2024-11-26 21:14:00.791981] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:09.969 null0 00:33:09.969 [2024-11-26 21:14:00.824044] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:33:09.969 [2024-11-26 21:14:00.824592] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:09.969 21:14:00 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.969 21:14:00 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:09.969 21:14:00 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:33:09.969 21:14:00 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:09.969 21:14:00 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:09.969 21:14:00 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:33:09.969 21:14:00 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:09.969 21:14:00 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:09.969 21:14:00 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:09.969 21:14:00 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.969 21:14:00 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:09.969 [2024-11-26 21:14:00.852094] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:33:09.969 request: 00:33:09.969 { 00:33:09.969 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:33:09.969 "secure_channel": false, 00:33:09.969 "listen_address": { 00:33:09.969 "trtype": "tcp", 00:33:09.969 "traddr": "127.0.0.1", 00:33:09.969 "trsvcid": "4420" 00:33:09.969 }, 00:33:09.969 "method": "nvmf_subsystem_add_listener", 00:33:09.969 "req_id": 1 00:33:09.969 } 00:33:09.969 Got JSON-RPC error response 00:33:09.969 response: 00:33:09.969 { 00:33:09.969 "code": -32602, 00:33:09.969 "message": "Invalid parameters" 00:33:09.969 } 00:33:09.969 21:14:00 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:09.969 21:14:00 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:33:09.969 21:14:00 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:09.969 21:14:00 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:09.969 21:14:00 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:09.969 21:14:00 keyring_file -- keyring/file.sh@47 -- # bperfpid=4164401 00:33:09.969 21:14:00 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:33:09.969 21:14:00 keyring_file -- keyring/file.sh@49 -- # waitforlisten 4164401 /var/tmp/bperf.sock 00:33:09.969 21:14:00 
keyring_file -- common/autotest_common.sh@835 -- # '[' -z 4164401 ']' 00:33:09.969 21:14:00 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:09.969 21:14:00 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:09.969 21:14:00 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:09.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:09.969 21:14:00 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:09.969 21:14:00 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:09.969 [2024-11-26 21:14:00.905604] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 00:33:09.969 [2024-11-26 21:14:00.905692] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4164401 ] 00:33:10.229 [2024-11-26 21:14:00.973723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:10.229 [2024-11-26 21:14:01.034392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:10.229 21:14:01 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:10.229 21:14:01 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:33:10.229 21:14:01 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.GVcRCfYOYZ 00:33:10.229 21:14:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.GVcRCfYOYZ 00:33:10.797 21:14:01 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.TafSWxL6b8 00:33:10.797 21:14:01 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.TafSWxL6b8 00:33:10.797 21:14:01 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:33:10.797 21:14:01 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:33:10.797 21:14:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:10.797 21:14:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:10.797 21:14:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:11.055 21:14:01 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.GVcRCfYOYZ == \/\t\m\p\/\t\m\p\.\G\V\c\R\C\f\Y\O\Y\Z ]] 00:33:11.055 21:14:01 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:33:11.055 21:14:01 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:33:11.055 21:14:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:11.055 21:14:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:11.055 21:14:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:11.621 21:14:02 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.TafSWxL6b8 == \/\t\m\p\/\t\m\p\.\T\a\f\S\W\x\L\6\b\8 ]] 00:33:11.621 21:14:02 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:33:11.621 21:14:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:11.621 21:14:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:11.621 21:14:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:11.621 21:14:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:11.621 21:14:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:33:11.622 21:14:02 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:33:11.622 21:14:02 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:33:11.622 21:14:02 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:11.622 21:14:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:11.622 21:14:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:11.622 21:14:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:11.622 21:14:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:12.193 21:14:02 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:33:12.193 21:14:02 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:12.194 21:14:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:12.194 [2024-11-26 21:14:03.069380] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:12.453 nvme0n1 00:33:12.453 21:14:03 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:33:12.453 21:14:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:12.453 21:14:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:12.453 21:14:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:12.453 21:14:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:12.453 21:14:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == 
"key0")' 00:33:12.711 21:14:03 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:33:12.711 21:14:03 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:33:12.711 21:14:03 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:12.711 21:14:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:12.711 21:14:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:12.711 21:14:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:12.712 21:14:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:12.972 21:14:03 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:33:12.972 21:14:03 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:12.972 Running I/O for 1 seconds... 00:33:13.915 7799.00 IOPS, 30.46 MiB/s 00:33:13.915 Latency(us) 00:33:13.915 [2024-11-26T20:14:04.853Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:13.915 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:33:13.915 nvme0n1 : 1.01 7834.89 30.61 0.00 0.00 16248.02 8204.14 26408.58 00:33:13.915 [2024-11-26T20:14:04.853Z] =================================================================================================================== 00:33:13.915 [2024-11-26T20:14:04.853Z] Total : 7834.89 30.61 0.00 0.00 16248.02 8204.14 26408.58 00:33:13.915 { 00:33:13.915 "results": [ 00:33:13.915 { 00:33:13.915 "job": "nvme0n1", 00:33:13.915 "core_mask": "0x2", 00:33:13.915 "workload": "randrw", 00:33:13.915 "percentage": 50, 00:33:13.915 "status": "finished", 00:33:13.915 "queue_depth": 128, 00:33:13.915 "io_size": 4096, 00:33:13.915 "runtime": 1.011756, 00:33:13.915 "iops": 7834.892997916494, 00:33:13.915 "mibps": 30.605050773111305, 00:33:13.915 
"io_failed": 0, 00:33:13.915 "io_timeout": 0, 00:33:13.915 "avg_latency_us": 16248.022039630141, 00:33:13.915 "min_latency_us": 8204.136296296296, 00:33:13.915 "max_latency_us": 26408.58074074074 00:33:13.915 } 00:33:13.915 ], 00:33:13.915 "core_count": 1 00:33:13.915 } 00:33:14.174 21:14:04 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:14.174 21:14:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:14.432 21:14:05 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:33:14.432 21:14:05 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:14.432 21:14:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:14.432 21:14:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:14.432 21:14:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:14.432 21:14:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:14.689 21:14:05 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:33:14.689 21:14:05 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:33:14.689 21:14:05 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:14.689 21:14:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:14.689 21:14:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:14.689 21:14:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:14.689 21:14:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:14.947 21:14:05 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:33:14.947 21:14:05 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:14.947 21:14:05 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:33:14.947 21:14:05 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:14.947 21:14:05 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:33:14.947 21:14:05 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:14.947 21:14:05 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:33:14.947 21:14:05 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:14.947 21:14:05 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:14.947 21:14:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:15.205 [2024-11-26 21:14:05.946236] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:33:15.205 [2024-11-26 21:14:05.946998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x889530 (107): Transport endpoint is not connected 00:33:15.205 [2024-11-26 21:14:05.947982] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x889530 (9): Bad file descriptor 00:33:15.205 [2024-11-26 21:14:05.948989] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: 
*ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:33:15.205 [2024-11-26 21:14:05.949009] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:33:15.205 [2024-11-26 21:14:05.949023] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:33:15.205 [2024-11-26 21:14:05.949053] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:33:15.205 request: 00:33:15.205 { 00:33:15.205 "name": "nvme0", 00:33:15.205 "trtype": "tcp", 00:33:15.205 "traddr": "127.0.0.1", 00:33:15.205 "adrfam": "ipv4", 00:33:15.205 "trsvcid": "4420", 00:33:15.205 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:15.205 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:15.205 "prchk_reftag": false, 00:33:15.205 "prchk_guard": false, 00:33:15.205 "hdgst": false, 00:33:15.205 "ddgst": false, 00:33:15.205 "psk": "key1", 00:33:15.205 "allow_unrecognized_csi": false, 00:33:15.205 "method": "bdev_nvme_attach_controller", 00:33:15.205 "req_id": 1 00:33:15.205 } 00:33:15.205 Got JSON-RPC error response 00:33:15.205 response: 00:33:15.205 { 00:33:15.205 "code": -5, 00:33:15.205 "message": "Input/output error" 00:33:15.205 } 00:33:15.205 21:14:05 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:33:15.205 21:14:05 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:15.205 21:14:05 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:15.205 21:14:05 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:15.205 21:14:05 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:33:15.205 21:14:05 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:15.205 21:14:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:15.205 21:14:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:15.205 
21:14:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:15.205 21:14:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:15.464 21:14:06 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:33:15.464 21:14:06 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:33:15.464 21:14:06 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:15.464 21:14:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:15.464 21:14:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:15.464 21:14:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:15.464 21:14:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:15.722 21:14:06 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:33:15.722 21:14:06 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:33:15.722 21:14:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:15.980 21:14:06 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:33:15.980 21:14:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:33:16.240 21:14:07 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:33:16.240 21:14:07 keyring_file -- keyring/file.sh@78 -- # jq length 00:33:16.240 21:14:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:16.497 21:14:07 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:33:16.497 21:14:07 keyring_file -- 
keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.GVcRCfYOYZ 00:33:16.497 21:14:07 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.GVcRCfYOYZ 00:33:16.497 21:14:07 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:33:16.497 21:14:07 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.GVcRCfYOYZ 00:33:16.497 21:14:07 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:33:16.497 21:14:07 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:16.497 21:14:07 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:33:16.497 21:14:07 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:16.497 21:14:07 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.GVcRCfYOYZ 00:33:16.497 21:14:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.GVcRCfYOYZ 00:33:16.755 [2024-11-26 21:14:07.584734] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.GVcRCfYOYZ': 0100660 00:33:16.755 [2024-11-26 21:14:07.584789] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:33:16.755 request: 00:33:16.755 { 00:33:16.755 "name": "key0", 00:33:16.755 "path": "/tmp/tmp.GVcRCfYOYZ", 00:33:16.755 "method": "keyring_file_add_key", 00:33:16.755 "req_id": 1 00:33:16.755 } 00:33:16.755 Got JSON-RPC error response 00:33:16.755 response: 00:33:16.755 { 00:33:16.755 "code": -1, 00:33:16.755 "message": "Operation not permitted" 00:33:16.755 } 00:33:16.755 21:14:07 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:33:16.755 21:14:07 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:16.755 21:14:07 keyring_file -- common/autotest_common.sh@674 
-- # [[ -n '' ]] 00:33:16.755 21:14:07 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:16.755 21:14:07 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.GVcRCfYOYZ 00:33:16.755 21:14:07 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.GVcRCfYOYZ 00:33:16.755 21:14:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.GVcRCfYOYZ 00:33:17.013 21:14:07 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.GVcRCfYOYZ 00:33:17.013 21:14:07 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:33:17.013 21:14:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:17.013 21:14:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:17.013 21:14:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:17.013 21:14:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:17.013 21:14:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:17.271 21:14:08 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:33:17.271 21:14:08 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:17.271 21:14:08 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:33:17.271 21:14:08 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:17.271 21:14:08 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:33:17.271 21:14:08 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:33:17.271 21:14:08 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:33:17.271 21:14:08 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:17.271 21:14:08 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:17.271 21:14:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:17.530 [2024-11-26 21:14:08.419045] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.GVcRCfYOYZ': No such file or directory 00:33:17.530 [2024-11-26 21:14:08.419092] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:33:17.530 [2024-11-26 21:14:08.419132] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:33:17.530 [2024-11-26 21:14:08.419147] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:33:17.530 [2024-11-26 21:14:08.419159] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:17.530 [2024-11-26 21:14:08.419171] bdev_nvme.c:6769:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:33:17.530 request: 00:33:17.530 { 00:33:17.530 "name": "nvme0", 00:33:17.530 "trtype": "tcp", 00:33:17.530 "traddr": "127.0.0.1", 00:33:17.530 "adrfam": "ipv4", 00:33:17.530 "trsvcid": "4420", 00:33:17.530 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:17.530 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:17.530 "prchk_reftag": 
false, 00:33:17.530 "prchk_guard": false, 00:33:17.530 "hdgst": false, 00:33:17.530 "ddgst": false, 00:33:17.530 "psk": "key0", 00:33:17.530 "allow_unrecognized_csi": false, 00:33:17.530 "method": "bdev_nvme_attach_controller", 00:33:17.530 "req_id": 1 00:33:17.530 } 00:33:17.530 Got JSON-RPC error response 00:33:17.530 response: 00:33:17.530 { 00:33:17.530 "code": -19, 00:33:17.530 "message": "No such device" 00:33:17.530 } 00:33:17.530 21:14:08 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:33:17.530 21:14:08 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:17.530 21:14:08 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:17.530 21:14:08 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:17.530 21:14:08 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:33:17.530 21:14:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:17.788 21:14:08 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:33:17.788 21:14:08 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:33:17.788 21:14:08 keyring_file -- keyring/common.sh@17 -- # name=key0 00:33:17.788 21:14:08 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:17.788 21:14:08 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:17.788 21:14:08 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:17.788 21:14:08 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.IbHWFVCL2Y 00:33:17.788 21:14:08 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:17.788 21:14:08 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:17.789 21:14:08 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 
00:33:17.789 21:14:08 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:33:17.789 21:14:08 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:33:17.789 21:14:08 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:33:17.789 21:14:08 keyring_file -- nvmf/common.sh@733 -- # python - 00:33:18.047 21:14:08 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.IbHWFVCL2Y 00:33:18.047 21:14:08 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.IbHWFVCL2Y 00:33:18.047 21:14:08 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.IbHWFVCL2Y 00:33:18.047 21:14:08 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.IbHWFVCL2Y 00:33:18.047 21:14:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.IbHWFVCL2Y 00:33:18.306 21:14:09 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:18.306 21:14:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:18.565 nvme0n1 00:33:18.565 21:14:09 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:33:18.565 21:14:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:18.565 21:14:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:18.565 21:14:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:18.565 21:14:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:18.565 21:14:09 keyring_file -- keyring/common.sh@10 -- # 
jq '.[] | select(.name == "key0")' 00:33:18.824 21:14:09 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:33:18.824 21:14:09 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:33:18.824 21:14:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:19.083 21:14:09 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:33:19.083 21:14:09 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:33:19.083 21:14:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:19.083 21:14:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:19.083 21:14:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:19.341 21:14:10 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:33:19.341 21:14:10 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:33:19.341 21:14:10 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:19.341 21:14:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:19.341 21:14:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:19.341 21:14:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:19.341 21:14:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:19.599 21:14:10 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:33:19.599 21:14:10 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:19.599 21:14:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:19.859 21:14:10 keyring_file -- 
keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:33:19.859 21:14:10 keyring_file -- keyring/file.sh@105 -- # jq length 00:33:19.859 21:14:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:20.426 21:14:11 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:33:20.426 21:14:11 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.IbHWFVCL2Y 00:33:20.426 21:14:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.IbHWFVCL2Y 00:33:20.426 21:14:11 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.TafSWxL6b8 00:33:20.426 21:14:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.TafSWxL6b8 00:33:20.685 21:14:11 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:20.685 21:14:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:21.254 nvme0n1 00:33:21.254 21:14:11 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:33:21.254 21:14:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:33:21.514 21:14:12 keyring_file -- keyring/file.sh@113 -- # config='{ 00:33:21.514 "subsystems": [ 00:33:21.514 { 00:33:21.514 "subsystem": "keyring", 00:33:21.514 "config": [ 00:33:21.514 { 00:33:21.514 "method": 
"keyring_file_add_key", 00:33:21.514 "params": { 00:33:21.514 "name": "key0", 00:33:21.514 "path": "/tmp/tmp.IbHWFVCL2Y" 00:33:21.514 } 00:33:21.514 }, 00:33:21.514 { 00:33:21.514 "method": "keyring_file_add_key", 00:33:21.514 "params": { 00:33:21.514 "name": "key1", 00:33:21.514 "path": "/tmp/tmp.TafSWxL6b8" 00:33:21.514 } 00:33:21.514 } 00:33:21.514 ] 00:33:21.514 }, 00:33:21.514 { 00:33:21.514 "subsystem": "iobuf", 00:33:21.514 "config": [ 00:33:21.514 { 00:33:21.514 "method": "iobuf_set_options", 00:33:21.514 "params": { 00:33:21.514 "small_pool_count": 8192, 00:33:21.514 "large_pool_count": 1024, 00:33:21.514 "small_bufsize": 8192, 00:33:21.514 "large_bufsize": 135168, 00:33:21.514 "enable_numa": false 00:33:21.514 } 00:33:21.514 } 00:33:21.514 ] 00:33:21.514 }, 00:33:21.514 { 00:33:21.514 "subsystem": "sock", 00:33:21.514 "config": [ 00:33:21.514 { 00:33:21.514 "method": "sock_set_default_impl", 00:33:21.514 "params": { 00:33:21.514 "impl_name": "posix" 00:33:21.514 } 00:33:21.514 }, 00:33:21.514 { 00:33:21.514 "method": "sock_impl_set_options", 00:33:21.514 "params": { 00:33:21.514 "impl_name": "ssl", 00:33:21.514 "recv_buf_size": 4096, 00:33:21.514 "send_buf_size": 4096, 00:33:21.514 "enable_recv_pipe": true, 00:33:21.514 "enable_quickack": false, 00:33:21.514 "enable_placement_id": 0, 00:33:21.514 "enable_zerocopy_send_server": true, 00:33:21.514 "enable_zerocopy_send_client": false, 00:33:21.514 "zerocopy_threshold": 0, 00:33:21.514 "tls_version": 0, 00:33:21.514 "enable_ktls": false 00:33:21.514 } 00:33:21.514 }, 00:33:21.514 { 00:33:21.514 "method": "sock_impl_set_options", 00:33:21.514 "params": { 00:33:21.514 "impl_name": "posix", 00:33:21.514 "recv_buf_size": 2097152, 00:33:21.514 "send_buf_size": 2097152, 00:33:21.514 "enable_recv_pipe": true, 00:33:21.514 "enable_quickack": false, 00:33:21.514 "enable_placement_id": 0, 00:33:21.514 "enable_zerocopy_send_server": true, 00:33:21.514 "enable_zerocopy_send_client": false, 00:33:21.514 
"zerocopy_threshold": 0, 00:33:21.514 "tls_version": 0, 00:33:21.514 "enable_ktls": false 00:33:21.514 } 00:33:21.514 } 00:33:21.514 ] 00:33:21.514 }, 00:33:21.514 { 00:33:21.514 "subsystem": "vmd", 00:33:21.514 "config": [] 00:33:21.514 }, 00:33:21.514 { 00:33:21.514 "subsystem": "accel", 00:33:21.514 "config": [ 00:33:21.514 { 00:33:21.514 "method": "accel_set_options", 00:33:21.514 "params": { 00:33:21.514 "small_cache_size": 128, 00:33:21.514 "large_cache_size": 16, 00:33:21.514 "task_count": 2048, 00:33:21.514 "sequence_count": 2048, 00:33:21.514 "buf_count": 2048 00:33:21.514 } 00:33:21.514 } 00:33:21.514 ] 00:33:21.514 }, 00:33:21.514 { 00:33:21.514 "subsystem": "bdev", 00:33:21.514 "config": [ 00:33:21.514 { 00:33:21.514 "method": "bdev_set_options", 00:33:21.514 "params": { 00:33:21.514 "bdev_io_pool_size": 65535, 00:33:21.514 "bdev_io_cache_size": 256, 00:33:21.514 "bdev_auto_examine": true, 00:33:21.514 "iobuf_small_cache_size": 128, 00:33:21.514 "iobuf_large_cache_size": 16 00:33:21.514 } 00:33:21.514 }, 00:33:21.514 { 00:33:21.514 "method": "bdev_raid_set_options", 00:33:21.514 "params": { 00:33:21.514 "process_window_size_kb": 1024, 00:33:21.514 "process_max_bandwidth_mb_sec": 0 00:33:21.514 } 00:33:21.514 }, 00:33:21.514 { 00:33:21.514 "method": "bdev_iscsi_set_options", 00:33:21.514 "params": { 00:33:21.515 "timeout_sec": 30 00:33:21.515 } 00:33:21.515 }, 00:33:21.515 { 00:33:21.515 "method": "bdev_nvme_set_options", 00:33:21.515 "params": { 00:33:21.515 "action_on_timeout": "none", 00:33:21.515 "timeout_us": 0, 00:33:21.515 "timeout_admin_us": 0, 00:33:21.515 "keep_alive_timeout_ms": 10000, 00:33:21.515 "arbitration_burst": 0, 00:33:21.515 "low_priority_weight": 0, 00:33:21.515 "medium_priority_weight": 0, 00:33:21.515 "high_priority_weight": 0, 00:33:21.515 "nvme_adminq_poll_period_us": 10000, 00:33:21.515 "nvme_ioq_poll_period_us": 0, 00:33:21.515 "io_queue_requests": 512, 00:33:21.515 "delay_cmd_submit": true, 00:33:21.515 
"transport_retry_count": 4, 00:33:21.515 "bdev_retry_count": 3, 00:33:21.515 "transport_ack_timeout": 0, 00:33:21.515 "ctrlr_loss_timeout_sec": 0, 00:33:21.515 "reconnect_delay_sec": 0, 00:33:21.515 "fast_io_fail_timeout_sec": 0, 00:33:21.515 "disable_auto_failback": false, 00:33:21.515 "generate_uuids": false, 00:33:21.515 "transport_tos": 0, 00:33:21.515 "nvme_error_stat": false, 00:33:21.515 "rdma_srq_size": 0, 00:33:21.515 "io_path_stat": false, 00:33:21.515 "allow_accel_sequence": false, 00:33:21.515 "rdma_max_cq_size": 0, 00:33:21.515 "rdma_cm_event_timeout_ms": 0, 00:33:21.515 "dhchap_digests": [ 00:33:21.515 "sha256", 00:33:21.515 "sha384", 00:33:21.515 "sha512" 00:33:21.515 ], 00:33:21.515 "dhchap_dhgroups": [ 00:33:21.515 "null", 00:33:21.515 "ffdhe2048", 00:33:21.515 "ffdhe3072", 00:33:21.515 "ffdhe4096", 00:33:21.515 "ffdhe6144", 00:33:21.515 "ffdhe8192" 00:33:21.515 ] 00:33:21.515 } 00:33:21.515 }, 00:33:21.515 { 00:33:21.515 "method": "bdev_nvme_attach_controller", 00:33:21.515 "params": { 00:33:21.515 "name": "nvme0", 00:33:21.515 "trtype": "TCP", 00:33:21.515 "adrfam": "IPv4", 00:33:21.515 "traddr": "127.0.0.1", 00:33:21.515 "trsvcid": "4420", 00:33:21.515 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:21.515 "prchk_reftag": false, 00:33:21.515 "prchk_guard": false, 00:33:21.515 "ctrlr_loss_timeout_sec": 0, 00:33:21.515 "reconnect_delay_sec": 0, 00:33:21.515 "fast_io_fail_timeout_sec": 0, 00:33:21.515 "psk": "key0", 00:33:21.515 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:21.515 "hdgst": false, 00:33:21.515 "ddgst": false, 00:33:21.515 "multipath": "multipath" 00:33:21.515 } 00:33:21.515 }, 00:33:21.515 { 00:33:21.515 "method": "bdev_nvme_set_hotplug", 00:33:21.515 "params": { 00:33:21.515 "period_us": 100000, 00:33:21.515 "enable": false 00:33:21.515 } 00:33:21.515 }, 00:33:21.515 { 00:33:21.515 "method": "bdev_wait_for_examine" 00:33:21.515 } 00:33:21.515 ] 00:33:21.515 }, 00:33:21.515 { 00:33:21.515 "subsystem": "nbd", 00:33:21.515 "config": [] 
00:33:21.515 } 00:33:21.515 ] 00:33:21.515 }' 00:33:21.515 21:14:12 keyring_file -- keyring/file.sh@115 -- # killprocess 4164401 00:33:21.515 21:14:12 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 4164401 ']' 00:33:21.515 21:14:12 keyring_file -- common/autotest_common.sh@958 -- # kill -0 4164401 00:33:21.515 21:14:12 keyring_file -- common/autotest_common.sh@959 -- # uname 00:33:21.515 21:14:12 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:21.515 21:14:12 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4164401 00:33:21.515 21:14:12 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:21.515 21:14:12 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:21.515 21:14:12 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4164401' 00:33:21.515 killing process with pid 4164401 00:33:21.515 21:14:12 keyring_file -- common/autotest_common.sh@973 -- # kill 4164401 00:33:21.515 Received shutdown signal, test time was about 1.000000 seconds 00:33:21.515 00:33:21.515 Latency(us) 00:33:21.515 [2024-11-26T20:14:12.453Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:21.515 [2024-11-26T20:14:12.453Z] =================================================================================================================== 00:33:21.515 [2024-11-26T20:14:12.453Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:21.515 21:14:12 keyring_file -- common/autotest_common.sh@978 -- # wait 4164401 00:33:21.775 21:14:12 keyring_file -- keyring/file.sh@118 -- # bperfpid=4165915 00:33:21.775 21:14:12 keyring_file -- keyring/file.sh@120 -- # waitforlisten 4165915 /var/tmp/bperf.sock 00:33:21.775 21:14:12 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 4165915 ']' 00:33:21.775 21:14:12 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:21.775 21:14:12 keyring_file 
-- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:33:21.775 21:14:12 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:21.775 21:14:12 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:21.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:21.775 21:14:12 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:33:21.775 "subsystems": [ 00:33:21.775 { 00:33:21.775 "subsystem": "keyring", 00:33:21.775 "config": [ 00:33:21.775 { 00:33:21.775 "method": "keyring_file_add_key", 00:33:21.775 "params": { 00:33:21.775 "name": "key0", 00:33:21.775 "path": "/tmp/tmp.IbHWFVCL2Y" 00:33:21.775 } 00:33:21.775 }, 00:33:21.775 { 00:33:21.775 "method": "keyring_file_add_key", 00:33:21.775 "params": { 00:33:21.775 "name": "key1", 00:33:21.775 "path": "/tmp/tmp.TafSWxL6b8" 00:33:21.775 } 00:33:21.775 } 00:33:21.775 ] 00:33:21.775 }, 00:33:21.775 { 00:33:21.775 "subsystem": "iobuf", 00:33:21.775 "config": [ 00:33:21.775 { 00:33:21.775 "method": "iobuf_set_options", 00:33:21.775 "params": { 00:33:21.775 "small_pool_count": 8192, 00:33:21.775 "large_pool_count": 1024, 00:33:21.775 "small_bufsize": 8192, 00:33:21.775 "large_bufsize": 135168, 00:33:21.775 "enable_numa": false 00:33:21.775 } 00:33:21.775 } 00:33:21.775 ] 00:33:21.775 }, 00:33:21.775 { 00:33:21.775 "subsystem": "sock", 00:33:21.775 "config": [ 00:33:21.775 { 00:33:21.775 "method": "sock_set_default_impl", 00:33:21.775 "params": { 00:33:21.775 "impl_name": "posix" 00:33:21.775 } 00:33:21.775 }, 00:33:21.775 { 00:33:21.775 "method": "sock_impl_set_options", 00:33:21.775 "params": { 00:33:21.775 "impl_name": "ssl", 00:33:21.775 "recv_buf_size": 4096, 00:33:21.775 "send_buf_size": 4096, 00:33:21.775 "enable_recv_pipe": true, 
00:33:21.775 "enable_quickack": false, 00:33:21.775 "enable_placement_id": 0, 00:33:21.775 "enable_zerocopy_send_server": true, 00:33:21.775 "enable_zerocopy_send_client": false, 00:33:21.775 "zerocopy_threshold": 0, 00:33:21.775 "tls_version": 0, 00:33:21.775 "enable_ktls": false 00:33:21.775 } 00:33:21.775 }, 00:33:21.775 { 00:33:21.775 "method": "sock_impl_set_options", 00:33:21.775 "params": { 00:33:21.775 "impl_name": "posix", 00:33:21.775 "recv_buf_size": 2097152, 00:33:21.775 "send_buf_size": 2097152, 00:33:21.775 "enable_recv_pipe": true, 00:33:21.775 "enable_quickack": false, 00:33:21.775 "enable_placement_id": 0, 00:33:21.775 "enable_zerocopy_send_server": true, 00:33:21.775 "enable_zerocopy_send_client": false, 00:33:21.775 "zerocopy_threshold": 0, 00:33:21.775 "tls_version": 0, 00:33:21.775 "enable_ktls": false 00:33:21.775 } 00:33:21.775 } 00:33:21.775 ] 00:33:21.775 }, 00:33:21.775 { 00:33:21.776 "subsystem": "vmd", 00:33:21.776 "config": [] 00:33:21.776 }, 00:33:21.776 { 00:33:21.776 "subsystem": "accel", 00:33:21.776 "config": [ 00:33:21.776 { 00:33:21.776 "method": "accel_set_options", 00:33:21.776 "params": { 00:33:21.776 "small_cache_size": 128, 00:33:21.776 "large_cache_size": 16, 00:33:21.776 "task_count": 2048, 00:33:21.776 "sequence_count": 2048, 00:33:21.776 "buf_count": 2048 00:33:21.776 } 00:33:21.776 } 00:33:21.776 ] 00:33:21.776 }, 00:33:21.776 { 00:33:21.776 "subsystem": "bdev", 00:33:21.776 "config": [ 00:33:21.776 { 00:33:21.776 "method": "bdev_set_options", 00:33:21.776 "params": { 00:33:21.776 "bdev_io_pool_size": 65535, 00:33:21.776 "bdev_io_cache_size": 256, 00:33:21.776 "bdev_auto_examine": true, 00:33:21.776 "iobuf_small_cache_size": 128, 00:33:21.776 "iobuf_large_cache_size": 16 00:33:21.776 } 00:33:21.776 }, 00:33:21.776 { 00:33:21.776 "method": "bdev_raid_set_options", 00:33:21.776 "params": { 00:33:21.776 "process_window_size_kb": 1024, 00:33:21.776 "process_max_bandwidth_mb_sec": 0 00:33:21.776 } 00:33:21.776 }, 
00:33:21.776 { 00:33:21.776 "method": "bdev_iscsi_set_options", 00:33:21.776 "params": { 00:33:21.776 "timeout_sec": 30 00:33:21.776 } 00:33:21.776 }, 00:33:21.776 { 00:33:21.776 "method": "bdev_nvme_set_options", 00:33:21.776 "params": { 00:33:21.776 "action_on_timeout": "none", 00:33:21.776 "timeout_us": 0, 00:33:21.776 "timeout_admin_us": 0, 00:33:21.776 "keep_alive_timeout_ms": 10000, 00:33:21.776 "arbitration_burst": 0, 00:33:21.776 "low_priority_weight": 0, 00:33:21.776 "medium_priority_weight": 0, 00:33:21.776 "high_priority_weight": 0, 00:33:21.776 "nvme_adminq_poll_period_us": 10000, 00:33:21.776 "nvme_ioq_poll_period_us": 0, 00:33:21.776 "io_queue_requests": 512, 00:33:21.776 "delay_cmd_submit": true, 00:33:21.776 "transport_retry_count": 4, 00:33:21.776 "bdev_retry_count": 3, 00:33:21.776 "transport_ack_timeout": 0, 00:33:21.776 "ctrlr_loss_timeout_sec": 0, 00:33:21.776 "reconnect_delay_sec": 0, 00:33:21.776 "fast_io_fail_timeout_sec": 0, 00:33:21.776 "disable_auto_failback": false, 00:33:21.776 "generate_uuids": false, 00:33:21.776 "transport_tos": 0, 00:33:21.776 "nvme_error_stat": false, 00:33:21.776 "rdma_srq_size": 0, 00:33:21.776 "io_path_stat": false, 00:33:21.776 "allow_accel_sequence": false, 00:33:21.776 "rdma_max_cq_size": 0, 00:33:21.776 "rdma_cm_event_timeout_ms": 0, 00:33:21.776 "dhchap_digests": [ 00:33:21.776 "sha256", 00:33:21.776 "sha384", 00:33:21.776 "sha512" 00:33:21.776 ], 00:33:21.776 "dhchap_dhgroups": [ 00:33:21.776 "null", 00:33:21.776 "ffdhe2048", 00:33:21.776 "ffdhe3072", 00:33:21.776 "ffdhe4096", 00:33:21.776 "ffdhe6144", 00:33:21.776 "ffdhe8192" 00:33:21.776 ] 00:33:21.776 } 00:33:21.776 }, 00:33:21.776 { 00:33:21.776 "method": "bdev_nvme_attach_controller", 00:33:21.776 "params": { 00:33:21.776 "name": "nvme0", 00:33:21.776 "trtype": "TCP", 00:33:21.776 "adrfam": "IPv4", 00:33:21.776 "traddr": "127.0.0.1", 00:33:21.776 "trsvcid": "4420", 00:33:21.776 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:21.776 "prchk_reftag": 
false, 00:33:21.776 "prchk_guard": false, 00:33:21.776 "ctrlr_loss_timeout_sec": 0, 00:33:21.776 "reconnect_delay_sec": 0, 00:33:21.776 "fast_io_fail_timeout_sec": 0, 00:33:21.776 "psk": "key0", 00:33:21.776 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:21.776 "hdgst": false, 00:33:21.776 "ddgst": false, 00:33:21.776 "multipath": "multipath" 00:33:21.776 } 00:33:21.776 }, 00:33:21.776 { 00:33:21.776 "method": "bdev_nvme_set_hotplug", 00:33:21.776 "params": { 00:33:21.776 "period_us": 100000, 00:33:21.776 "enable": false 00:33:21.776 } 00:33:21.776 }, 00:33:21.776 { 00:33:21.776 "method": "bdev_wait_for_examine" 00:33:21.776 } 00:33:21.776 ] 00:33:21.776 }, 00:33:21.776 { 00:33:21.776 "subsystem": "nbd", 00:33:21.776 "config": [] 00:33:21.776 } 00:33:21.776 ] 00:33:21.776 }' 00:33:21.776 21:14:12 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:21.776 21:14:12 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:21.776 [2024-11-26 21:14:12.601654] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:33:21.776 [2024-11-26 21:14:12.601759] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4165915 ] 00:33:21.776 [2024-11-26 21:14:12.671197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:22.036 [2024-11-26 21:14:12.736208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:22.036 [2024-11-26 21:14:12.931935] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:22.295 21:14:13 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:22.295 21:14:13 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:33:22.295 21:14:13 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:33:22.295 21:14:13 keyring_file -- keyring/file.sh@121 -- # jq length 00:33:22.295 21:14:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:22.554 21:14:13 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:33:22.554 21:14:13 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:33:22.554 21:14:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:22.554 21:14:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:22.554 21:14:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:22.554 21:14:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:22.554 21:14:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:22.813 21:14:13 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:33:22.813 21:14:13 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:33:22.813 21:14:13 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:22.813 21:14:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:22.813 21:14:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:22.813 21:14:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:22.813 21:14:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:23.072 21:14:13 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:33:23.072 21:14:13 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:33:23.072 21:14:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:33:23.072 21:14:13 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:33:23.332 21:14:14 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:33:23.332 21:14:14 keyring_file -- keyring/file.sh@1 -- # cleanup 00:33:23.332 21:14:14 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.IbHWFVCL2Y /tmp/tmp.TafSWxL6b8 00:33:23.332 21:14:14 keyring_file -- keyring/file.sh@20 -- # killprocess 4165915 00:33:23.332 21:14:14 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 4165915 ']' 00:33:23.332 21:14:14 keyring_file -- common/autotest_common.sh@958 -- # kill -0 4165915 00:33:23.332 21:14:14 keyring_file -- common/autotest_common.sh@959 -- # uname 00:33:23.332 21:14:14 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:23.332 21:14:14 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4165915 00:33:23.332 21:14:14 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:23.332 21:14:14 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:23.332 21:14:14 keyring_file -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 4165915' 00:33:23.332 killing process with pid 4165915 00:33:23.332 21:14:14 keyring_file -- common/autotest_common.sh@973 -- # kill 4165915 00:33:23.332 Received shutdown signal, test time was about 1.000000 seconds 00:33:23.332 00:33:23.332 Latency(us) 00:33:23.332 [2024-11-26T20:14:14.270Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:23.332 [2024-11-26T20:14:14.270Z] =================================================================================================================== 00:33:23.332 [2024-11-26T20:14:14.270Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:33:23.332 21:14:14 keyring_file -- common/autotest_common.sh@978 -- # wait 4165915 00:33:23.592 21:14:14 keyring_file -- keyring/file.sh@21 -- # killprocess 4164394 00:33:23.592 21:14:14 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 4164394 ']' 00:33:23.592 21:14:14 keyring_file -- common/autotest_common.sh@958 -- # kill -0 4164394 00:33:23.592 21:14:14 keyring_file -- common/autotest_common.sh@959 -- # uname 00:33:23.593 21:14:14 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:23.593 21:14:14 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4164394 00:33:23.593 21:14:14 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:23.593 21:14:14 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:23.593 21:14:14 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4164394' 00:33:23.593 killing process with pid 4164394 00:33:23.593 21:14:14 keyring_file -- common/autotest_common.sh@973 -- # kill 4164394 00:33:23.593 21:14:14 keyring_file -- common/autotest_common.sh@978 -- # wait 4164394 00:33:24.161 00:33:24.161 real 0m14.816s 00:33:24.161 user 0m37.237s 00:33:24.161 sys 0m3.393s 00:33:24.161 21:14:14 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:33:24.161 21:14:14 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:24.161 ************************************ 00:33:24.161 END TEST keyring_file 00:33:24.161 ************************************ 00:33:24.161 21:14:14 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:33:24.161 21:14:14 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:33:24.161 21:14:14 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:24.161 21:14:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:24.161 21:14:14 -- common/autotest_common.sh@10 -- # set +x 00:33:24.161 ************************************ 00:33:24.161 START TEST keyring_linux 00:33:24.161 ************************************ 00:33:24.161 21:14:14 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:33:24.161 Joined session keyring: 748430368 00:33:24.161 * Looking for test storage... 
00:33:24.161 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:33:24.161 21:14:14 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:24.161 21:14:14 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:33:24.161 21:14:14 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:24.161 21:14:15 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:24.161 21:14:15 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:24.161 21:14:15 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:24.161 21:14:15 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:24.161 21:14:15 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:33:24.161 21:14:15 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:33:24.161 21:14:15 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:33:24.161 21:14:15 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:33:24.161 21:14:15 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:33:24.161 21:14:15 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:33:24.161 21:14:15 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:33:24.161 21:14:15 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:24.161 21:14:15 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:33:24.161 21:14:15 keyring_linux -- scripts/common.sh@345 -- # : 1 00:33:24.161 21:14:15 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:24.161 21:14:15 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:24.161 21:14:15 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:33:24.161 21:14:15 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:33:24.161 21:14:15 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:24.161 21:14:15 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:33:24.161 21:14:15 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:33:24.161 21:14:15 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:33:24.161 21:14:15 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:33:24.161 21:14:15 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:24.161 21:14:15 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:33:24.161 21:14:15 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:33:24.161 21:14:15 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:24.161 21:14:15 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:24.161 21:14:15 keyring_linux -- scripts/common.sh@368 -- # return 0 00:33:24.161 21:14:15 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:24.161 21:14:15 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:24.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:24.161 --rc genhtml_branch_coverage=1 00:33:24.161 --rc genhtml_function_coverage=1 00:33:24.161 --rc genhtml_legend=1 00:33:24.161 --rc geninfo_all_blocks=1 00:33:24.161 --rc geninfo_unexecuted_blocks=1 00:33:24.161 00:33:24.161 ' 00:33:24.161 21:14:15 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:24.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:24.161 --rc genhtml_branch_coverage=1 00:33:24.161 --rc genhtml_function_coverage=1 00:33:24.161 --rc genhtml_legend=1 00:33:24.161 --rc geninfo_all_blocks=1 00:33:24.161 --rc geninfo_unexecuted_blocks=1 00:33:24.161 00:33:24.161 ' 
00:33:24.161 21:14:15 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:24.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:24.161 --rc genhtml_branch_coverage=1 00:33:24.161 --rc genhtml_function_coverage=1 00:33:24.161 --rc genhtml_legend=1 00:33:24.161 --rc geninfo_all_blocks=1 00:33:24.162 --rc geninfo_unexecuted_blocks=1 00:33:24.162 00:33:24.162 ' 00:33:24.162 21:14:15 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:24.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:24.162 --rc genhtml_branch_coverage=1 00:33:24.162 --rc genhtml_function_coverage=1 00:33:24.162 --rc genhtml_legend=1 00:33:24.162 --rc geninfo_all_blocks=1 00:33:24.162 --rc geninfo_unexecuted_blocks=1 00:33:24.162 00:33:24.162 ' 00:33:24.162 21:14:15 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:33:24.162 21:14:15 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:24.162 21:14:15 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:33:24.162 21:14:15 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:24.162 21:14:15 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:24.162 21:14:15 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:24.162 21:14:15 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:24.162 21:14:15 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:24.162 21:14:15 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:24.162 21:14:15 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:24.162 21:14:15 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:24.162 21:14:15 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:24.162 21:14:15 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:33:24.162 21:14:15 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:24.162 21:14:15 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:24.162 21:14:15 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:24.162 21:14:15 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:24.162 21:14:15 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:24.162 21:14:15 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:24.162 21:14:15 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:24.162 21:14:15 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:33:24.162 21:14:15 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:24.162 21:14:15 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:24.162 21:14:15 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:24.162 21:14:15 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:24.162 21:14:15 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:24.162 21:14:15 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:24.162 21:14:15 keyring_linux -- paths/export.sh@5 -- # export PATH 00:33:24.162 21:14:15 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:24.162 21:14:15 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:33:24.162 21:14:15 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:24.162 21:14:15 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:24.162 21:14:15 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:24.162 21:14:15 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:24.162 21:14:15 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:24.162 21:14:15 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:33:24.162 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:24.162 21:14:15 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:24.162 21:14:15 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:24.162 21:14:15 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:24.162 21:14:15 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:33:24.162 21:14:15 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:33:24.162 21:14:15 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:33:24.162 21:14:15 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:33:24.162 21:14:15 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:33:24.162 21:14:15 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:33:24.162 21:14:15 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:33:24.162 21:14:15 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:33:24.162 21:14:15 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:33:24.162 21:14:15 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:24.162 21:14:15 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:33:24.162 21:14:15 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:33:24.162 21:14:15 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:24.162 21:14:15 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:24.162 21:14:15 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:33:24.162 21:14:15 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:33:24.162 21:14:15 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:33:24.162 21:14:15 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:33:24.162 21:14:15 keyring_linux -- nvmf/common.sh@733 -- # python - 00:33:24.421 21:14:15 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:33:24.421 21:14:15 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:33:24.421 /tmp/:spdk-test:key0 00:33:24.421 21:14:15 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:33:24.421 21:14:15 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:33:24.421 21:14:15 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:33:24.421 21:14:15 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:33:24.421 21:14:15 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:33:24.421 21:14:15 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:33:24.421 21:14:15 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:33:24.421 21:14:15 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:33:24.421 21:14:15 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:33:24.421 21:14:15 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:33:24.421 21:14:15 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:33:24.421 21:14:15 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:33:24.421 21:14:15 keyring_linux -- nvmf/common.sh@733 -- # python - 00:33:24.421 21:14:15 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:33:24.421 21:14:15 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:33:24.421 /tmp/:spdk-test:key1 00:33:24.421 21:14:15 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=4166360 00:33:24.421 21:14:15 keyring_linux -- keyring/linux.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:33:24.421 21:14:15 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 4166360 00:33:24.421 21:14:15 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 4166360 ']' 00:33:24.421 21:14:15 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:24.421 21:14:15 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:24.421 21:14:15 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:24.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:24.422 21:14:15 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:24.422 21:14:15 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:24.422 [2024-11-26 21:14:15.204500] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:33:24.422 [2024-11-26 21:14:15.204603] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4166360 ] 00:33:24.422 [2024-11-26 21:14:15.270199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:24.422 [2024-11-26 21:14:15.328360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:24.681 21:14:15 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:24.681 21:14:15 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:33:24.681 21:14:15 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:33:24.681 21:14:15 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.681 21:14:15 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:24.681 [2024-11-26 21:14:15.612186] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:24.940 null0 00:33:24.940 [2024-11-26 21:14:15.644245] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:33:24.940 [2024-11-26 21:14:15.644808] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:24.940 21:14:15 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.940 21:14:15 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:33:24.940 309027153 00:33:24.940 21:14:15 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:33:24.940 296401003 00:33:24.940 21:14:15 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=4166367 00:33:24.940 21:14:15 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w 
randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:33:24.940 21:14:15 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 4166367 /var/tmp/bperf.sock 00:33:24.940 21:14:15 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 4166367 ']' 00:33:24.940 21:14:15 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:24.940 21:14:15 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:24.940 21:14:15 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:24.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:24.940 21:14:15 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:24.940 21:14:15 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:24.940 [2024-11-26 21:14:15.714045] Starting SPDK v25.01-pre git sha1 e43b3b914 / DPDK 24.03.0 initialization... 
00:33:24.940 [2024-11-26 21:14:15.714122] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4166367 ] 00:33:24.940 [2024-11-26 21:14:15.783740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:24.940 [2024-11-26 21:14:15.845899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:25.199 21:14:15 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:25.199 21:14:15 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:33:25.199 21:14:15 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:33:25.199 21:14:15 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:33:25.458 21:14:16 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:33:25.458 21:14:16 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:25.716 21:14:16 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:33:25.716 21:14:16 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:33:25.975 [2024-11-26 21:14:16.831248] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:25.975 nvme0n1 00:33:26.234 21:14:16 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:33:26.234 21:14:16 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:33:26.234 21:14:16 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:33:26.234 21:14:16 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:33:26.234 21:14:16 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:26.234 21:14:16 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:33:26.492 21:14:17 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:33:26.492 21:14:17 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:33:26.492 21:14:17 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:33:26.492 21:14:17 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:33:26.492 21:14:17 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:26.492 21:14:17 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:26.492 21:14:17 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:33:26.752 21:14:17 keyring_linux -- keyring/linux.sh@25 -- # sn=309027153 00:33:26.752 21:14:17 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:33:26.752 21:14:17 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:33:26.752 21:14:17 keyring_linux -- keyring/linux.sh@26 -- # [[ 309027153 == \3\0\9\0\2\7\1\5\3 ]] 00:33:26.752 21:14:17 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 309027153 00:33:26.752 21:14:17 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:33:26.752 21:14:17 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:26.752 Running I/O for 1 seconds... 00:33:27.690 6699.00 IOPS, 26.17 MiB/s 00:33:27.690 Latency(us) 00:33:27.690 [2024-11-26T20:14:18.628Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:27.690 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:27.690 nvme0n1 : 1.01 6731.55 26.30 0.00 0.00 18892.49 9320.68 29321.29 00:33:27.690 [2024-11-26T20:14:18.628Z] =================================================================================================================== 00:33:27.690 [2024-11-26T20:14:18.628Z] Total : 6731.55 26.30 0.00 0.00 18892.49 9320.68 29321.29 00:33:27.690 { 00:33:27.690 "results": [ 00:33:27.690 { 00:33:27.690 "job": "nvme0n1", 00:33:27.690 "core_mask": "0x2", 00:33:27.690 "workload": "randread", 00:33:27.690 "status": "finished", 00:33:27.690 "queue_depth": 128, 00:33:27.690 "io_size": 4096, 00:33:27.690 "runtime": 1.014328, 00:33:27.690 "iops": 6731.550346633436, 00:33:27.690 "mibps": 26.29511854153686, 00:33:27.690 "io_failed": 0, 00:33:27.690 "io_timeout": 0, 00:33:27.690 "avg_latency_us": 18892.49139360802, 00:33:27.690 "min_latency_us": 9320.675555555556, 00:33:27.690 "max_latency_us": 29321.291851851853 00:33:27.690 } 00:33:27.690 ], 00:33:27.690 "core_count": 1 00:33:27.690 } 00:33:27.690 21:14:18 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:27.690 21:14:18 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:28.258 21:14:18 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:33:28.258 21:14:18 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:33:28.258 21:14:18 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:33:28.258 21:14:18 
keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:33:28.258 21:14:18 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:28.258 21:14:18 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:33:28.258 21:14:19 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:33:28.258 21:14:19 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:33:28.258 21:14:19 keyring_linux -- keyring/linux.sh@23 -- # return 00:33:28.258 21:14:19 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:28.258 21:14:19 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:33:28.258 21:14:19 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:28.258 21:14:19 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:33:28.258 21:14:19 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:28.258 21:14:19 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:33:28.258 21:14:19 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:28.258 21:14:19 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:28.258 21:14:19 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:28.515 [2024-11-26 21:14:19.422560] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:33:28.515 [2024-11-26 21:14:19.423108] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x154f2e0 (107): Transport endpoint is not connected 00:33:28.515 [2024-11-26 21:14:19.424100] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x154f2e0 (9): Bad file descriptor 00:33:28.515 [2024-11-26 21:14:19.425099] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:33:28.515 [2024-11-26 21:14:19.425122] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:33:28.515 [2024-11-26 21:14:19.425138] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:33:28.515 [2024-11-26 21:14:19.425155] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:33:28.515 request: 00:33:28.515 { 00:33:28.515 "name": "nvme0", 00:33:28.515 "trtype": "tcp", 00:33:28.515 "traddr": "127.0.0.1", 00:33:28.515 "adrfam": "ipv4", 00:33:28.515 "trsvcid": "4420", 00:33:28.515 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:28.515 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:28.515 "prchk_reftag": false, 00:33:28.515 "prchk_guard": false, 00:33:28.515 "hdgst": false, 00:33:28.515 "ddgst": false, 00:33:28.515 "psk": ":spdk-test:key1", 00:33:28.516 "allow_unrecognized_csi": false, 00:33:28.516 "method": "bdev_nvme_attach_controller", 00:33:28.516 "req_id": 1 00:33:28.516 } 00:33:28.516 Got JSON-RPC error response 00:33:28.516 response: 00:33:28.516 { 00:33:28.516 "code": -5, 00:33:28.516 "message": "Input/output error" 00:33:28.516 } 00:33:28.516 21:14:19 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:33:28.516 21:14:19 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:28.516 21:14:19 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:28.516 21:14:19 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:28.516 21:14:19 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:33:28.516 21:14:19 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:33:28.516 21:14:19 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:33:28.516 21:14:19 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:33:28.516 21:14:19 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:33:28.516 21:14:19 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:33:28.516 21:14:19 keyring_linux -- keyring/linux.sh@33 -- # sn=309027153 00:33:28.516 21:14:19 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 309027153 00:33:28.516 1 links removed 00:33:28.516 21:14:19 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:33:28.516 21:14:19 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:33:28.516 
21:14:19 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:33:28.516 21:14:19 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:33:28.516 21:14:19 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:33:28.516 21:14:19 keyring_linux -- keyring/linux.sh@33 -- # sn=296401003 00:33:28.516 21:14:19 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 296401003 00:33:28.516 1 links removed 00:33:28.774 21:14:19 keyring_linux -- keyring/linux.sh@41 -- # killprocess 4166367 00:33:28.774 21:14:19 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 4166367 ']' 00:33:28.774 21:14:19 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 4166367 00:33:28.774 21:14:19 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:33:28.774 21:14:19 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:28.774 21:14:19 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4166367 00:33:28.774 21:14:19 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:28.774 21:14:19 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:28.774 21:14:19 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4166367' 00:33:28.774 killing process with pid 4166367 00:33:28.774 21:14:19 keyring_linux -- common/autotest_common.sh@973 -- # kill 4166367 00:33:28.774 Received shutdown signal, test time was about 1.000000 seconds 00:33:28.774 00:33:28.774 Latency(us) 00:33:28.774 [2024-11-26T20:14:19.712Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:28.774 [2024-11-26T20:14:19.712Z] =================================================================================================================== 00:33:28.774 [2024-11-26T20:14:19.712Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:28.774 21:14:19 keyring_linux -- common/autotest_common.sh@978 -- # wait 4166367 
00:33:28.774 21:14:19 keyring_linux -- keyring/linux.sh@42 -- # killprocess 4166360 00:33:28.774 21:14:19 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 4166360 ']' 00:33:28.774 21:14:19 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 4166360 00:33:28.774 21:14:19 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:33:28.774 21:14:19 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:28.774 21:14:19 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4166360 00:33:29.033 21:14:19 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:29.033 21:14:19 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:29.033 21:14:19 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4166360' 00:33:29.033 killing process with pid 4166360 00:33:29.033 21:14:19 keyring_linux -- common/autotest_common.sh@973 -- # kill 4166360 00:33:29.033 21:14:19 keyring_linux -- common/autotest_common.sh@978 -- # wait 4166360 00:33:29.291 00:33:29.291 real 0m5.271s 00:33:29.291 user 0m10.039s 00:33:29.291 sys 0m1.698s 00:33:29.291 21:14:20 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:29.291 21:14:20 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:29.291 ************************************ 00:33:29.291 END TEST keyring_linux 00:33:29.291 ************************************ 00:33:29.291 21:14:20 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:33:29.291 21:14:20 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:33:29.291 21:14:20 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:33:29.291 21:14:20 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:33:29.291 21:14:20 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:33:29.291 21:14:20 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:33:29.291 21:14:20 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:33:29.291 21:14:20 -- spdk/autotest.sh@346 -- # 
'[' 0 -eq 1 ']' 00:33:29.291 21:14:20 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:33:29.291 21:14:20 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:33:29.291 21:14:20 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:33:29.291 21:14:20 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:33:29.291 21:14:20 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:33:29.291 21:14:20 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:33:29.291 21:14:20 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:33:29.291 21:14:20 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:33:29.291 21:14:20 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:33:29.291 21:14:20 -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:29.291 21:14:20 -- common/autotest_common.sh@10 -- # set +x 00:33:29.291 21:14:20 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:33:29.291 21:14:20 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:33:29.291 21:14:20 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:33:29.550 21:14:20 -- common/autotest_common.sh@10 -- # set +x 00:33:31.454 INFO: APP EXITING 00:33:31.454 INFO: killing all VMs 00:33:31.454 INFO: killing vhost app 00:33:31.454 INFO: EXIT DONE 00:33:32.833 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:33:32.833 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:33:32.833 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:33:32.833 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:33:32.833 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:33:32.833 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:33:32.833 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:33:32.833 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:33:32.833 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:33:32.833 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:33:32.833 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:33:32.833 
0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:33:32.833 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:33:32.833 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:33:32.833 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:33:32.833 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:33:32.833 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:33:34.236 Cleaning 00:33:34.236 Removing: /var/run/dpdk/spdk0/config 00:33:34.236 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:33:34.236 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:33:34.236 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:33:34.236 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:33:34.236 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:33:34.236 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:33:34.236 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:33:34.236 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:33:34.236 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:33:34.236 Removing: /var/run/dpdk/spdk0/hugepage_info 00:33:34.236 Removing: /var/run/dpdk/spdk1/config 00:33:34.236 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:33:34.236 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:33:34.236 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:33:34.236 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:33:34.236 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:33:34.236 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:33:34.236 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:33:34.236 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:33:34.236 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:33:34.236 Removing: /var/run/dpdk/spdk1/hugepage_info 00:33:34.236 Removing: /var/run/dpdk/spdk2/config 00:33:34.236 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:33:34.236 
Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:33:34.236 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:33:34.236 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:33:34.236 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:33:34.236 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:33:34.236 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:33:34.236 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:33:34.236 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:33:34.236 Removing: /var/run/dpdk/spdk2/hugepage_info 00:33:34.236 Removing: /var/run/dpdk/spdk3/config 00:33:34.236 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:33:34.236 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:33:34.236 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:33:34.236 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:33:34.236 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:33:34.236 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:33:34.236 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:33:34.236 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:33:34.236 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:33:34.236 Removing: /var/run/dpdk/spdk3/hugepage_info 00:33:34.236 Removing: /var/run/dpdk/spdk4/config 00:33:34.236 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:33:34.236 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:33:34.236 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:33:34.236 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:33:34.236 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:33:34.236 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:33:34.236 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:33:34.236 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:33:34.236 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:33:34.236 Removing: /var/run/dpdk/spdk4/hugepage_info 
00:33:34.236 Removing: /dev/shm/bdev_svc_trace.1 00:33:34.236 Removing: /dev/shm/nvmf_trace.0 00:33:34.236 Removing: /dev/shm/spdk_tgt_trace.pid3843041 00:33:34.236 Removing: /var/run/dpdk/spdk0 00:33:34.236 Removing: /var/run/dpdk/spdk1 00:33:34.236 Removing: /var/run/dpdk/spdk2 00:33:34.236 Removing: /var/run/dpdk/spdk3 00:33:34.236 Removing: /var/run/dpdk/spdk4 00:33:34.236 Removing: /var/run/dpdk/spdk_pid3841354 00:33:34.236 Removing: /var/run/dpdk/spdk_pid3842099 00:33:34.236 Removing: /var/run/dpdk/spdk_pid3843041 00:33:34.236 Removing: /var/run/dpdk/spdk_pid3843486 00:33:34.236 Removing: /var/run/dpdk/spdk_pid3844235 00:33:34.236 Removing: /var/run/dpdk/spdk_pid3844337 00:33:34.236 Removing: /var/run/dpdk/spdk_pid3845547 00:33:34.236 Removing: /var/run/dpdk/spdk_pid3845667 00:33:34.236 Removing: /var/run/dpdk/spdk_pid3845926 00:33:34.236 Removing: /var/run/dpdk/spdk_pid3847241 00:33:34.236 Removing: /var/run/dpdk/spdk_pid3848164 00:33:34.236 Removing: /var/run/dpdk/spdk_pid3848365 00:33:34.236 Removing: /var/run/dpdk/spdk_pid3848565 00:33:34.236 Removing: /var/run/dpdk/spdk_pid3848894 00:33:34.236 Removing: /var/run/dpdk/spdk_pid3849092 00:33:34.236 Removing: /var/run/dpdk/spdk_pid3849251 00:33:34.236 Removing: /var/run/dpdk/spdk_pid3849404 00:33:34.236 Removing: /var/run/dpdk/spdk_pid3849603 00:33:34.236 Removing: /var/run/dpdk/spdk_pid3850040 00:33:34.237 Removing: /var/run/dpdk/spdk_pid3852532 00:33:34.237 Removing: /var/run/dpdk/spdk_pid3852702 00:33:34.237 Removing: /var/run/dpdk/spdk_pid3852862 00:33:34.237 Removing: /var/run/dpdk/spdk_pid3852870 00:33:34.237 Removing: /var/run/dpdk/spdk_pid3853296 00:33:34.237 Removing: /var/run/dpdk/spdk_pid3853309 00:33:34.237 Removing: /var/run/dpdk/spdk_pid3853736 00:33:34.237 Removing: /var/run/dpdk/spdk_pid3853762 00:33:34.237 Removing: /var/run/dpdk/spdk_pid3854034 00:33:34.237 Removing: /var/run/dpdk/spdk_pid3854039 00:33:34.237 Removing: /var/run/dpdk/spdk_pid3854294 00:33:34.237 Removing: 
/var/run/dpdk/spdk_pid3854342 00:33:34.237 Removing: /var/run/dpdk/spdk_pid3854731 00:33:34.237 Removing: /var/run/dpdk/spdk_pid3854993 00:33:34.237 Removing: /var/run/dpdk/spdk_pid3855197 00:33:34.237 Removing: /var/run/dpdk/spdk_pid3857307 00:33:34.237 Removing: /var/run/dpdk/spdk_pid3859954 00:33:34.237 Removing: /var/run/dpdk/spdk_pid3867080 00:33:34.237 Removing: /var/run/dpdk/spdk_pid3867482 00:33:34.237 Removing: /var/run/dpdk/spdk_pid3870004 00:33:34.237 Removing: /var/run/dpdk/spdk_pid3870164 00:33:34.237 Removing: /var/run/dpdk/spdk_pid3872810 00:33:34.237 Removing: /var/run/dpdk/spdk_pid3876648 00:33:34.237 Removing: /var/run/dpdk/spdk_pid3879363 00:33:34.237 Removing: /var/run/dpdk/spdk_pid3885783 00:33:34.237 Removing: /var/run/dpdk/spdk_pid3891035 00:33:34.237 Removing: /var/run/dpdk/spdk_pid3892341 00:33:34.237 Removing: /var/run/dpdk/spdk_pid3893005 00:33:34.237 Removing: /var/run/dpdk/spdk_pid3903269 00:33:34.237 Removing: /var/run/dpdk/spdk_pid3905555 00:33:34.237 Removing: /var/run/dpdk/spdk_pid3933543 00:33:34.237 Removing: /var/run/dpdk/spdk_pid3936755 00:33:34.237 Removing: /var/run/dpdk/spdk_pid3940707 00:33:34.237 Removing: /var/run/dpdk/spdk_pid3945003 00:33:34.237 Removing: /var/run/dpdk/spdk_pid3945079 00:33:34.237 Removing: /var/run/dpdk/spdk_pid3945657 00:33:34.237 Removing: /var/run/dpdk/spdk_pid3946320 00:33:34.237 Removing: /var/run/dpdk/spdk_pid3946926 00:33:34.237 Removing: /var/run/dpdk/spdk_pid3947300 00:33:34.237 Removing: /var/run/dpdk/spdk_pid3947382 00:33:34.237 Removing: /var/run/dpdk/spdk_pid3947520 00:33:34.237 Removing: /var/run/dpdk/spdk_pid3947653 00:33:34.237 Removing: /var/run/dpdk/spdk_pid3947657 00:33:34.237 Removing: /var/run/dpdk/spdk_pid3948322 00:33:34.237 Removing: /var/run/dpdk/spdk_pid3949059 00:33:34.237 Removing: /var/run/dpdk/spdk_pid3949630 00:33:34.237 Removing: /var/run/dpdk/spdk_pid3950533 00:33:34.237 Removing: /var/run/dpdk/spdk_pid3950656 00:33:34.237 Removing: /var/run/dpdk/spdk_pid3950797 
00:33:34.237 Removing: /var/run/dpdk/spdk_pid3951806 00:33:34.237 Removing: /var/run/dpdk/spdk_pid3952535 00:33:34.237 Removing: /var/run/dpdk/spdk_pid3957753 00:33:34.237 Removing: /var/run/dpdk/spdk_pid3987017 00:33:34.237 Removing: /var/run/dpdk/spdk_pid3989945 00:33:34.237 Removing: /var/run/dpdk/spdk_pid3991119 00:33:34.237 Removing: /var/run/dpdk/spdk_pid3992438 00:33:34.237 Removing: /var/run/dpdk/spdk_pid3992503 00:33:34.237 Removing: /var/run/dpdk/spdk_pid3992615 00:33:34.237 Removing: /var/run/dpdk/spdk_pid3992750 00:33:34.496 Removing: /var/run/dpdk/spdk_pid3993314 00:33:34.496 Removing: /var/run/dpdk/spdk_pid3994630 00:33:34.496 Removing: /var/run/dpdk/spdk_pid3995403 00:33:34.496 Removing: /var/run/dpdk/spdk_pid3995812 00:33:34.496 Removing: /var/run/dpdk/spdk_pid3997420 00:33:34.496 Removing: /var/run/dpdk/spdk_pid3997950 00:33:34.496 Removing: /var/run/dpdk/spdk_pid3998519 00:33:34.496 Removing: /var/run/dpdk/spdk_pid4001429 00:33:34.496 Removing: /var/run/dpdk/spdk_pid4004827 00:33:34.496 Removing: /var/run/dpdk/spdk_pid4004828 00:33:34.496 Removing: /var/run/dpdk/spdk_pid4004829 00:33:34.496 Removing: /var/run/dpdk/spdk_pid4006939 00:33:34.496 Removing: /var/run/dpdk/spdk_pid4011788 00:33:34.496 Removing: /var/run/dpdk/spdk_pid4014560 00:33:34.496 Removing: /var/run/dpdk/spdk_pid4018453 00:33:34.496 Removing: /var/run/dpdk/spdk_pid4019399 00:33:34.496 Removing: /var/run/dpdk/spdk_pid4020374 00:33:34.496 Removing: /var/run/dpdk/spdk_pid4021457 00:33:34.496 Removing: /var/run/dpdk/spdk_pid4024226 00:33:34.496 Removing: /var/run/dpdk/spdk_pid4026772 00:33:34.496 Removing: /var/run/dpdk/spdk_pid4029062 00:33:34.496 Removing: /var/run/dpdk/spdk_pid4033294 00:33:34.496 Removing: /var/run/dpdk/spdk_pid4033297 00:33:34.496 Removing: /var/run/dpdk/spdk_pid4036194 00:33:34.496 Removing: /var/run/dpdk/spdk_pid4036335 00:33:34.496 Removing: /var/run/dpdk/spdk_pid4036590 00:33:34.496 Removing: /var/run/dpdk/spdk_pid4036852 00:33:34.496 Removing: 
/var/run/dpdk/spdk_pid4036867 00:33:34.496 Removing: /var/run/dpdk/spdk_pid4039896 00:33:34.496 Removing: /var/run/dpdk/spdk_pid4040613 00:33:34.496 Removing: /var/run/dpdk/spdk_pid4043276 00:33:34.496 Removing: /var/run/dpdk/spdk_pid4045256 00:33:34.496 Removing: /var/run/dpdk/spdk_pid4048683 00:33:34.496 Removing: /var/run/dpdk/spdk_pid4052007 00:33:34.496 Removing: /var/run/dpdk/spdk_pid4058504 00:33:34.496 Removing: /var/run/dpdk/spdk_pid4062979 00:33:34.496 Removing: /var/run/dpdk/spdk_pid4062981 00:33:34.496 Removing: /var/run/dpdk/spdk_pid4076362 00:33:34.496 Removing: /var/run/dpdk/spdk_pid4076805 00:33:34.496 Removing: /var/run/dpdk/spdk_pid4077290 00:33:34.496 Removing: /var/run/dpdk/spdk_pid4077705 00:33:34.496 Removing: /var/run/dpdk/spdk_pid4078282 00:33:34.496 Removing: /var/run/dpdk/spdk_pid4078757 00:33:34.496 Removing: /var/run/dpdk/spdk_pid4079222 00:33:34.496 Removing: /var/run/dpdk/spdk_pid4079628 00:33:34.496 Removing: /var/run/dpdk/spdk_pid4082133 00:33:34.496 Removing: /var/run/dpdk/spdk_pid4082281 00:33:34.496 Removing: /var/run/dpdk/spdk_pid4086082 00:33:34.496 Removing: /var/run/dpdk/spdk_pid4086254 00:33:34.496 Removing: /var/run/dpdk/spdk_pid4089512 00:33:34.496 Removing: /var/run/dpdk/spdk_pid4092113 00:33:34.496 Removing: /var/run/dpdk/spdk_pid4099055 00:33:34.496 Removing: /var/run/dpdk/spdk_pid4099457 00:33:34.496 Removing: /var/run/dpdk/spdk_pid4101962 00:33:34.496 Removing: /var/run/dpdk/spdk_pid4102221 00:33:34.496 Removing: /var/run/dpdk/spdk_pid4104740 00:33:34.496 Removing: /var/run/dpdk/spdk_pid4108553 00:33:34.496 Removing: /var/run/dpdk/spdk_pid4111221 00:33:34.496 Removing: /var/run/dpdk/spdk_pid4117595 00:33:34.496 Removing: /var/run/dpdk/spdk_pid4122797 00:33:34.496 Removing: /var/run/dpdk/spdk_pid4123981 00:33:34.496 Removing: /var/run/dpdk/spdk_pid4124658 00:33:34.496 Removing: /var/run/dpdk/spdk_pid4134840 00:33:34.496 Removing: /var/run/dpdk/spdk_pid4137094 00:33:34.496 Removing: /var/run/dpdk/spdk_pid4139095 
00:33:34.496 Removing: /var/run/dpdk/spdk_pid4144758 00:33:34.496 Removing: /var/run/dpdk/spdk_pid4144771 00:33:34.496 Removing: /var/run/dpdk/spdk_pid4147668 00:33:34.496 Removing: /var/run/dpdk/spdk_pid4149074 00:33:34.496 Removing: /var/run/dpdk/spdk_pid4150475 00:33:34.496 Removing: /var/run/dpdk/spdk_pid4151334 00:33:34.496 Removing: /var/run/dpdk/spdk_pid4152742 00:33:34.496 Removing: /var/run/dpdk/spdk_pid4153615 00:33:34.496 Removing: /var/run/dpdk/spdk_pid4158962 00:33:34.496 Removing: /var/run/dpdk/spdk_pid4159313 00:33:34.496 Removing: /var/run/dpdk/spdk_pid4159704 00:33:34.496 Removing: /var/run/dpdk/spdk_pid4161264 00:33:34.496 Removing: /var/run/dpdk/spdk_pid4161546 00:33:34.496 Removing: /var/run/dpdk/spdk_pid4161944 00:33:34.496 Removing: /var/run/dpdk/spdk_pid4164394 00:33:34.496 Removing: /var/run/dpdk/spdk_pid4164401 00:33:34.496 Removing: /var/run/dpdk/spdk_pid4165915 00:33:34.496 Removing: /var/run/dpdk/spdk_pid4166360 00:33:34.497 Removing: /var/run/dpdk/spdk_pid4166367 00:33:34.497 Clean 00:33:34.497 21:14:25 -- common/autotest_common.sh@1453 -- # return 0 00:33:34.497 21:14:25 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:33:34.497 21:14:25 -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:34.497 21:14:25 -- common/autotest_common.sh@10 -- # set +x 00:33:34.755 21:14:25 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:33:34.755 21:14:25 -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:34.755 21:14:25 -- common/autotest_common.sh@10 -- # set +x 00:33:34.755 21:14:25 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:33:34.755 21:14:25 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:33:34.755 21:14:25 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:33:34.755 21:14:25 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:33:34.755 21:14:25 
-- spdk/autotest.sh@398 -- # hostname 00:33:34.755 21:14:25 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:33:34.755 geninfo: WARNING: invalid characters removed from testname! 00:34:06.848 21:14:56 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:10.140 21:15:00 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:13.436 21:15:03 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:16.734 21:15:06 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 
--rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:19.274 21:15:10 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:22.569 21:15:13 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:25.864 21:15:16 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:34:25.864 21:15:16 -- spdk/autorun.sh@1 -- $ timing_finish 00:34:25.864 21:15:16 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:34:25.864 21:15:16 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:34:25.864 21:15:16 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:34:25.864 21:15:16 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:34:25.864 + [[ -n 
3771476 ]] 00:34:25.864 + sudo kill 3771476 00:34:25.874 [Pipeline] } 00:34:25.888 [Pipeline] // stage 00:34:25.893 [Pipeline] } 00:34:25.906 [Pipeline] // timeout 00:34:25.911 [Pipeline] } 00:34:25.927 [Pipeline] // catchError 00:34:25.932 [Pipeline] } 00:34:25.948 [Pipeline] // wrap 00:34:25.953 [Pipeline] } 00:34:25.962 [Pipeline] // catchError 00:34:25.969 [Pipeline] stage 00:34:25.971 [Pipeline] { (Epilogue) 00:34:25.984 [Pipeline] catchError 00:34:25.986 [Pipeline] { 00:34:25.999 [Pipeline] echo 00:34:26.000 Cleanup processes 00:34:26.006 [Pipeline] sh 00:34:26.292 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:26.292 4177740 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:26.306 [Pipeline] sh 00:34:26.594 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:26.594 ++ grep -v 'sudo pgrep' 00:34:26.594 ++ awk '{print $1}' 00:34:26.594 + sudo kill -9 00:34:26.594 + true 00:34:26.606 [Pipeline] sh 00:34:26.890 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:34:36.931 [Pipeline] sh 00:34:37.218 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:34:37.218 Artifacts sizes are good 00:34:37.234 [Pipeline] archiveArtifacts 00:34:37.245 Archiving artifacts 00:34:37.388 [Pipeline] sh 00:34:37.674 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:34:37.689 [Pipeline] cleanWs 00:34:37.699 [WS-CLEANUP] Deleting project workspace... 00:34:37.699 [WS-CLEANUP] Deferred wipeout is used... 00:34:37.706 [WS-CLEANUP] done 00:34:37.708 [Pipeline] } 00:34:37.725 [Pipeline] // catchError 00:34:37.737 [Pipeline] sh 00:34:38.019 + logger -p user.info -t JENKINS-CI 00:34:38.027 [Pipeline] } 00:34:38.042 [Pipeline] // stage 00:34:38.047 [Pipeline] } 00:34:38.062 [Pipeline] // node 00:34:38.067 [Pipeline] End of Pipeline 00:34:38.107 Finished: SUCCESS